INTELLIGENT BRANDS // Enterprise Security
SANS Institute prepares upcoming release of Critical AI Security Guidelines v1.0
Organisations that integrate Artificial Intelligence into their workforce and offerings are accelerating innovation, but many are unprepared for the security challenges that come with it. As they rush to deploy more efficient and cost-effective models, they often overlook the risks of model manipulation and adversarial attacks – threats that traditional defences are not equipped to detect or stop.
At the same time, many leaders are still grappling with how to safely and securely operationalise AI across their environments. As AI becomes deeply embedded in both business operations and critical infrastructure, the risks are expanding rapidly and at a global scale.
To help organisations navigate these risks and assist them in taking back control, the SANS Institute is launching a major initiative – the upcoming release of its Critical AI Security Guidelines v1.0, a practical, operations-driven framework built for defenders and leaders who need to secure AI systems now.
Left to right: Rob T Lee, Chief of Research and Co-Chair, SANS AI Summit, and Kate Marshall, SANS AI Hackathon Director and Co-Chair, SANS AI Summit
The guidelines will debut at the SANS AI Summit 2025 and focus on six critical areas: Access Controls, Data Protection, Deployment Strategies, Inference Security, Monitoring, and Governance, Risk and Compliance.
These guidelines are designed to provide security teams and leadership with clear, practical direction for defending AI systems in real-world environments. Each section provides actionable recommendations to help organisations identify, mitigate, and manage the risks associated with modern AI technologies.
Once released, the guidelines will be open to community feedback, allowing practitioners, researchers, and industry leaders to contribute insights and updates as threats evolve and new best practices emerge.
“We are seeing organisations deploy large language models, retrieval-augmented generation, and autonomous agents faster than they can secure them,” said Rob T Lee, Chief of Research and Co-Chair of the SANS AI Summit. “These guidelines are built for where the field is now. They are not theoretical; they are written for analysts and leaders in the trenches, who need to protect these systems starting today.”
As AI technologies become central to every aspect of business operations, the need for open-source tools to augment security teams and new capabilities to help secure AI has never been greater. To address this, the SANS AI Cybersecurity Hackathon invited the cybersecurity community to design open-source tools directly aligned with the new security guidelines.
This unique event challenged participants to develop innovative solutions for protecting AI models, monitoring inference processes, defending against adversarial attacks, and addressing other vulnerabilities unique to AI systems. The tools produced during the hackathon will be showcased at the AI Summit, providing tangible, real-world solutions for organisations.
“We need more people who understand how AI works under the hood and how to defend it,” said Kate Marshall, SANS AI Hackathon Director and Co-Chair of the SANS AI Summit. “The hackathon is already making a difference. It is not just creating tools; it is showcasing talent, and that is exactly what we need to secure AI systems for the future.”