DISRUPTIVE TECH
So, what challenges remain to be solved?
Making data AI-ready
For many organisations, having AI-ready information continues to prove complex. Technologies such as retrieval augmented generation, RAG, however, provide a comprehensive solution to this problem. Yet most RAG implementations involve aggregating all relevant data and storing it in vector format, an approach that can be costly and time-consuming, and that only provides updated answers from the last data upload.
Instead, the approach to follow is known as Query RAG: metadata, not the data itself, is vectorised, and the LLM subsequently accesses this metadata to generate queries, in languages such as SQL, to retrieve the correct data from the original sources.
This ensures accurate and up-to-date answers in real time, without the significant overhead of data replication and vectorisation. In the Middle East, where data residency regulations often require data to remain in-country, Query RAG can also help reduce unnecessary duplication while maintaining compliance.
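To make the pattern concrete, here is a minimal, self-contained sketch of Query RAG in Python. It is illustrative only, not Denodo's implementation: the bag-of-words embed() function stands in for a real embedding model, the SQL is hard-coded where an LLM call would normally generate it, and the table names are invented.

```python
# Minimal Query RAG sketch (illustrative only).
# Only table/column METADATA is vectorised; the data itself stays in the source.
import sqlite3
from math import sqrt

def embed(text: str) -> dict:
    # Placeholder embedding: bag-of-words counts stand in for a real vector model.
    vec = {}
    for token in text.lower().split():
        vec[token] = vec.get(token, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# 1. Vectorise metadata only (schema descriptions), never the rows themselves.
metadata = {
    "sales": "table sales(region TEXT, amount REAL, sold_at TEXT) - revenue by region",
    "customers": "table customers(id INTEGER, name TEXT, country TEXT) - customer master data",
}
metadata_index = {name: embed(desc) for name, desc in metadata.items()}

def answer(question: str, conn: sqlite3.Connection) -> list:
    # 2. Retrieve the most relevant metadata entry for the question.
    q_vec = embed(question)
    best = max(metadata_index, key=lambda name: cosine(q_vec, metadata_index[name]))

    # 3. A real implementation would prompt an LLM with the question and
    #    metadata[best] to generate SQL; hard-coded here to stay self-contained.
    sql = "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
    assert best == "sales"  # sanity check that retrieval picked the right source

    # 4. Execute against the live source, so answers are always up to date.
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales(region TEXT, amount REAL, sold_at TEXT)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("MEA", 120.0, "2025-01-02"), ("EU", 90.0, "2025-01-03")])
print(answer("total sales per region", conn))
```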
Response time and accuracy
As the size of metadata sets increases, reasoning models begin to perform better and provide more accurate and relevant answers.
However, query generation times lengthen in direct proportion. This is not an inherent limitation of models such as DeepSeek R1, but a consequence of applying the Query RAG approach to any reasoning model.
It is therefore crucial for companies to find the right trade-off between latency and accuracy for their specific use cases.
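The hypothetical sketch below shows the kind of knob this trade-off usually comes down to: how many of the top-ranked metadata entries (top_k) to include in the reasoning model's prompt. The latency here is simulated to grow with prompt size, mirroring the behaviour described above; the numbers are not benchmarks.

```python
# Illustrative latency/accuracy knob: more metadata context tends to improve
# answer quality, but query-generation time grows roughly with prompt size.
import time

def generate_sql(question: str, metadata_entries: list[str]) -> str:
    # Stand-in for a reasoning model call; sleep simulates prompt-size-dependent latency.
    time.sleep(0.001 * sum(len(m) for m in metadata_entries))
    return "SELECT ..."  # the model's generated SQL would be returned here

def benchmark(question: str, ranked_metadata: list[str], top_k: int) -> float:
    start = time.perf_counter()
    generate_sql(question, ranked_metadata[:top_k])
    return time.perf_counter() - start

ranked_metadata = [f"table t{i}(...) - description of source {i}" for i in range(200)]
for top_k in (5, 20, 100):
    latency = benchmark("total sales per region", ranked_metadata, top_k)
    print(f"top_k={top_k:<4} simulated latency={latency:.3f}s")
```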
CEOs struggle with inevitability of AI
74% of CEOs internationally admit they are at risk of losing their job within two years if they fail to deliver measurable AI-driven business gains, according to the newly released Global AI Confessions Report: CEO Edition by Dataiku. The study, conducted by The Harris Poll for Dataiku, exposes the candid admissions and revelations of global chief executives as they face a new reality: AI strategy has become the defining factor in corporate survival.
The findings underscore an unprecedented shift in executive accountability, as 70% of CEOs predict that by the end of the year, at least one of their peers will be ousted due to a failed AI strategy or AI-induced crisis. Meanwhile, more than half of CEOs, 54%, admit that a competitor has already deployed a superior AI strategy, highlighting the urgency for organisations to move beyond AI ambition into tangible execution.
94% of CEOs admit that an AI agent could provide equal or better counsel on business decisions than a human board member. 89% of CEOs believe AI can develop an equal or better strategic plan than one or more of their executive leaders, a cohort defined as VP to C-suite. As AI's influence expands, it is not just reshaping strategy – it is challenging the very foundation of corporate leadership, forcing CEOs to reconsider who, or what, will make the most critical decisions in the future.
Despite their growing reliance on AI, many CEOs remain dangerously unaware of the pitfalls of poorly executed AI strategies. 87% of CEOs fall into the AI commodity trap, expressing confidence that off-the-shelf AI agents can be just as effective as custom-built solutions for highly nuanced vertical or domain-specific business applications. 35% of AI initiatives are suspected to be AI washing – designed more for optics than real business impact.
Ensure security and privacy
Many organisations make no secret of their misgivings about using DeepSeek client applications because they connect to servers operated by the same or other Chinese entities; US government agencies, for example, have banned their use. Similarly, some Middle Eastern enterprises must comply with strict data sovereignty mandates and cybersecurity regulations.
One strategy to overcome this critical issue may be to avoid client applications altogether and instead run the R1 LLM entirely within one's own protected execution environments. In general, regardless of the choice of model, it is critical to ensure that the execution environment of an LLM meets security and privacy compliance requirements, as is necessary with any SaaS software or service that accesses sensitive data or metadata.
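As a rough illustration of that self-hosted option, the snippet below loads one of the openly distributed R1 distilled checkpoints with the Hugging Face transformers library and runs generation entirely on local infrastructure. The model ID, prompt, and hardware assumptions (a suitable GPU and the accelerate package for device_map="auto") are illustrative and would need to be validated against your own environment, licensing, and compliance requirements.

```python
# Minimal sketch: running an open-weight DeepSeek R1 distilled model entirely
# inside your own environment, so prompts and data never leave it.
# Assumes transformers (and accelerate) are installed and the weights have been
# downloaded to infrastructure that meets your residency requirements.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # smallest distilled variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Illustrative prompt; in practice this would carry your governed metadata or question.
messages = [{"role": "user", "content": "Summarise our data-residency obligations."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt",
                                       add_generation_prompt=True).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```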
As Open-Source AI continues to evolve, organisations in the Middle East must navigate a delicate balance: embracing innovation while ensuring compliance, efficiency, and security.
With a focus on intelligent data management and scalable governance, Denodo believes enterprises can turn these emerging challenges into long-term competitive advantage.