TALKING POINT
THE GULF’S AI AMBITION DEMANDS A SECURITY MODEL BUILT FOR THE AI ERA
As AI becomes embedded across critical infrastructure and national strategies in the Gulf, organisations must rethink how they secure increasingly complex and dynamic environments. Diego Arrabal, Vice President, Eastern Europe, Middle East and Africa, Check Point Software Technologies, tells us why a prevention-first, unified security approach is essential to building trusted, scalable AI systems that can support long-term innovation and resilience.
Across the Gulf, AI is no longer an experiment or a side project. It is becoming core infrastructure. Governments, enterprises and critical sectors are embedding AI into how they plan, operate and compete, linking its adoption directly to national development goals, economic diversification and long-term resilience.
That momentum is real, and it matters. But as AI moves from pilots into production, the conversation has to change. The question is no longer whether AI can create value, but whether organisations are building it in a way they can actually trust.
For years, cybersecurity was often treated as something that could be added later, once systems were already in place. That approach does not survive in the AI era. AI changes how systems behave, how data moves and how decisions are made. Security cannot sit on the perimeter anymore. It has to be built into the foundation.
Attackers have already adapted. AI is being used to scale attacks, automate reconnaissance and make social engineering more convincing than ever. At the same time, organisations are rolling out copilots, AI-driven applications, autonomous agents and private AI environments at speed. The result is a far broader and more dynamic attack surface than most traditional security models were designed to handle.
This challenge is particularly acute in the Gulf, where AI adoption is increasingly tied to critical sectors and strategic priorities. Financial services, healthcare, energy, logistics, government platforms and industrial systems are all becoming more intelligent and more data-driven. Security therefore cannot remain limited to networks, endpoints or email. It must extend into models, prompts, agents, permissions, data pipelines and real-time behaviour inside AI environments.
This is why the concept of the AI factory is gaining traction across the region. Many organisations are no longer comfortable relying entirely on public AI services without clear visibility into where their data resides, how it is handled or who ultimately controls it. Regulatory expectations, sovereignty
requirements and business risk are accelerating the move toward private and hybrid AI environments. These environments are quickly becoming mission-critical infrastructure.
But AI factories do not behave like traditional data centres. They combine high-performance compute, massive datasets, distributed training pipelines, inference engines, APIs, orchestration layers and increasingly autonomous systems that can act, not just generate output. The risks are different as well: prompt injection, model theft, data leakage, adversarial manipulation and lateral movement across AI workloads are no longer theoretical concerns.
Partial visibility is not enough in this context. Securing AI requires understanding how models, agents and applications behave at runtime, not just how they were designed. Control has to exist where decisions are made and actions are triggered. •
INTELLIGENT CIO MIDDLE EAST www.intelligentcio.com