Intelligent CIO Middle East Issue 117 | Page 18

EXPERT COLUMN

SINDHU KASHYAP, SENIOR CONTENT STRATEGIST, MIDDLE EAST & AFRICA

AGENTIC AI IS GROWING UP, BUT WHO’S ACCOUNTABLE WHEN IT MISBEHAVES?

As AI systems become increasingly autonomous, a familiar question is resurfacing with greater urgency: who’s actually in control?

In boardrooms and engineering teams, tension is rising between the pace of AI deployment and the demands of human accountability. As autonomous AI transforms business, developers caution that autonomy is not abdication.

“It surprised me how many senior leaders didn’t even know what prompting was,” said Jessica Constantinidis, Innovation Officer for EMEA at ServiceNow. “These are CISOs and CIOs, the key decision-makers in AI, yet many lack a basic understanding of how these tools work. Some even admitted they were too embarrassed to ask.”
For Jacob Beswick, Director of AI Governance Solutions at Dataiku, governance isn’t about bureaucracy; it’s about feedback loops. “If an agent replaces 70% of a job, part of the new role should be monitoring that agent. We’re not just automating tasks – we’re reshaping roles. That’s a cultural shift, not just a tool change.”
AI can still falter even in narrowly defined use cases. Sascha Giese, Global Technical Evangelist at SolarWinds, notes that AI is prone to hallucinations, with training data drifting and models responding confidently with nonsense – like search engines recommending bleach in recipes. Ongoing validation is crucial.
Giese believes the best analogy for AI today is a new hire. “They might be brilliant, but they still need onboarding. They need rules. You wouldn’t give a fresh recruit unrestricted access to your systems on day one – why would you do that with AI?”
That lack of fluency affects more than just terminology; it impacts how organisations deploy AI and their readiness for issues. Constantinidis begins every briefing with a reminder: “Curiosity isn’t optional anymore. If you’re not experimenting, questioning and learning, you’re already behind.”
Curiosity alone can’t ensure safe systems. David Tait, AI Lead at DXC Technology, calls agentic AI ‘the new operating system for business’ but warns it brings ‘a new class of risk’. He advocates layered governance: organisational oversight for policy and ethics, with operational governance embedded in the systems themselves.
“Having a document that says ‘this is our AI policy’ isn’t enough,” he said. “You need to translate that into code, permissions and workflows. Otherwise, you’re just hoping the machine behaves.”
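Tait’s point about translating policy into code can be illustrated with a minimal sketch. The names below (`ALLOWED_ACTIONS`, `enforce_policy`, the agent and action labels) are hypothetical, not any vendor’s API: the idea is simply that the written AI policy becomes an explicit allow-list enforced in code before an agent acts, rather than sitting in a document.

```python
# Minimal policy-as-code sketch (illustrative names, not a vendor API):
# the AI policy is expressed as an allow-list that is checked in code
# before any agent action runs.

ALLOWED_ACTIONS = {
    "hr_agent": {"create_ticket", "send_welcome_email"},
    "finance_agent": {"create_ticket"},
}

def enforce_policy(agent: str, action: str) -> None:
    """Raise if the agent is not explicitly permitted to take the action."""
    permitted = ALLOWED_ACTIONS.get(agent, set())
    if action not in permitted:
        raise PermissionError(f"{agent} is not permitted to '{action}'")

def run_action(agent: str, action: str) -> str:
    enforce_policy(agent, action)       # the policy check lives in code
    return f"{agent} executed {action}"  # placeholder for the real workflow

print(run_action("hr_agent", "send_welcome_email"))
```

In a real deployment the allow-list would come from the governance layer Tait describes, with permissions and workflows enforced at the platform level rather than hard-coded.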
At ServiceNow, Constantinidis has observed companies implementing AI across departments like IT, HR and finance. However, she warns that simply digitising old workflows misses the point; true innovation requires redesigning processes alongside new technology.
One popular safeguard: rollback buttons. If an AI-automated process goes wrong – such as a new hire not showing up – organisations can quickly revoke access, cancel provisioning and trace every step taken. “CISOs love it,” she said, “because the worst thing is automating a process you can’t undo.”
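The rollback idea described above can be sketched in a few lines. This is an assumption-laden illustration, not ServiceNow’s implementation: each automated step records a compensating “undo” action, so a failed process can be reversed in order and every step – forward and backward – remains traceable in an audit log.

```python
# Hedged sketch of a reversible automated process (illustrative only):
# each step registers a compensating undo, rollback replays them in
# reverse order, and the audit log traces every action taken.

class AutomatedProcess:
    def __init__(self) -> None:
        self.audit_log: list[str] = []
        self._undo_stack: list = []

    def step(self, description: str, undo) -> None:
        """Run a step and remember how to reverse it."""
        self.audit_log.append(description)
        self._undo_stack.append(undo)

    def rollback(self) -> None:
        """Reverse all steps in last-in, first-out order, logging each reversal."""
        while self._undo_stack:
            undo = self._undo_stack.pop()
            self.audit_log.append(undo())

process = AutomatedProcess()
process.step("provisioned laptop", lambda: "cancelled laptop provisioning")
process.step("granted VPN access", lambda: "revoked VPN access")
process.rollback()  # e.g. the new hire never showed up
```

The last-in, first-out ordering matters: access granted most recently is revoked first, mirroring how provisioning was layered on.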
Accountability must be concrete. “If an AI system makes a decision, there must be a clear chain of responsibility to a named person or business unit. You can’t just say ‘the AI did it.’ Someone built and approved it. That matters,” said Tait.
Jacob Beswick, Director of AI Governance Solutions at Dataiku, takes a similarly pragmatic view. “It all comes down to intentional design,” he said. “You don’t start by asking whether an agent can be autonomous. You ask: What’s the goal? What are the consequences if it gets it wrong? Who reviews the output?”
Experts agree AI isn’ t a runaway train, but it could become one without vigilance. The goal isn’ t to slow innovation but to couple it with curiosity, traceability, and humility to pause when necessary.
“Failure is inevitable,” says Constantinidis. “But what matters is how quickly you recognise it – and how ready you are to respond.”