Intelligent CIO Middle East Issue 123 | Page 26

FEATURE
Several participants highlighted that organisations often want AI decision-making before they can even guarantee a single source of truth.
In practice, ‘trustworthy AI’ starts with unglamorous work: defining what data matters, standardising metadata, building reliable pipelines and ensuring data is current. Only then do AI agents become anything more than clever guessers.
Regulated industries
The most complex challenge appeared in regulated industries, where ‘move fast and break things’ is not a philosophy, it’s a liability. For example, healthcare data is not one category but many layers, with additional sensitivity in mental health, biotech and genomics, areas whose data many companies actively want.
The roundtable returned to issues that are often overlooked in corporate AI hype:
• What data can be used and in what form?
• Where do models and platforms operate geographically?
• Do you truly know whether an AI product is genuinely available, where it is hosted and what it is actually doing under the hood?
• Can token-based AI pricing ever sit comfortably within procurement frameworks designed for fixed costs, annual contracts and regulatory certainty?
In other words, regulated environments can experiment, but only through deliberate frameworks and close coordination with information security and legal teams.
The human loop as a design choice
A striking feature of the discussion was the rejection of ‘full automation’ as the default goal. Human-in-the-loop was repeatedly framed as sensible engineering, not hesitation. It is a structure: clear decision boundaries, escalation paths, audit trails and accountability.
In procurement, for example, smaller purchases could be automated, while higher-value decisions remained human-led. The same logic appeared in the real-world example of autonomous vehicles, where edge cases still benefit from human supervision. Trust is built by designing for what happens
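The procurement pattern described in the discussion, automating smaller purchases while escalating higher-value ones to a human, can be sketched in a few lines. This is an illustrative sketch only: the threshold, names and audit-log shape are assumptions, not anything discussed at the roundtable.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative threshold; a real limit would come from procurement policy.
AUTO_APPROVE_LIMIT = 5_000.0

@dataclass
class Decision:
    request_id: str
    amount: float
    route: str  # "auto" or "human_review"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Append-only audit trail: every routing decision is recorded.
audit_log: list[Decision] = []

def route_purchase(request_id: str, amount: float) -> Decision:
    """Route a purchase: automate below the limit, escalate above it.

    The decision boundary, the escalation path and the audit record
    are all explicit, which is the 'structure' the text describes.
    """
    route = "auto" if amount <= AUTO_APPROVE_LIMIT else "human_review"
    decision = Decision(request_id, amount, route)
    audit_log.append(decision)
    return decision

print(route_purchase("PO-1001", 1_200.0).route)   # auto
print(route_purchase("PO-1002", 48_000.0).route)  # human_review
```

The point of the sketch is that human-in-the-loop is a design choice expressed in code, not a manual workaround bolted on afterwards.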