According to one estimate from PwC's Strategy&, the GCC could generate almost US$10 for every dollar invested in Generative AI.
Again, if we look behind the figures, we see potential in LLMs' ability to connect humans to large data sets that would otherwise be out of reach for non-technical people. But we also see an eagerness to trust these systems simply because their results are so powerful.
However, if we take a moment to consider a highly regulated industry like BFSI, we find age-old business processes and workflows defined around consistency and privacy. When audited, a bank must explain exactly what it does with data during each of its workflows.
LLMs offer a marvellous interface between humans and raw data, and the rapid insights they can provide are tempting to BFSIs. But when it comes to answering the question of what happened to the data during the process, things start to become vague. And trust can take a hit.
Regulations
And when trust takes a hit, there is a knock-on effect of decreased innovation and even a slowing of economic growth. As such, there is increasing pressure to strike the right balance between total trust and the placement of guardrails. Far from being an impediment to highly regulated industries, AI regulation is an attempt to take AI out of its black box and make it accountable.
One place this is happening right now is in Europe. The EU Artificial Intelligence Act became law in August 2024. The European Commission describes the framework as based on human rights and fundamental values, and as laying the groundwork for an AI ecosystem that benefits everyone.
The law goes as far as to break risk down into four distinct categories, covering every AI use case from minimal-risk spam filters, through chatbots that pose specific transparency risks, to controversial social-scoring models, which carry unacceptable risk and are now banned.
The stakes could not be higher. Without addressing the black-box problem, not only might AI not be as beneficial as we had hoped; it might end up inflicting net harm on society. GCC governments are already addressing AI through a risk lens, but individual organisations can do their part by sticking to three basic principles.
Data transparency
Data integrity should account for the fact that initial inputs may be imperfect, but that data-cleaning can lead to more positive outcomes. Governance based on transparency will be critical in tightly regulated sectors, with each transaction logged, tagged, and made visible.
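As a rough illustration of what "logged, tagged, and made visible" can mean in practice, here is a minimal Python sketch of an append-only audit trail. The workflow steps, tags, and record IDs are hypothetical, not drawn from any specific banking system.

```python
# Hypothetical sketch: an append-only audit trail that tags every data
# transformation in a workflow, so an auditor can see exactly what happened
# to a record at each step. All names are illustrative.
import hashlib
import json
from datetime import datetime, timezone

audit_log = []  # in practice this would be durable, append-only storage

def log_step(record_id: str, step: str, payload: dict, tags: list[str]) -> None:
    """Record one workflow step: what ran, on which record, and when."""
    entry = {
        "record_id": record_id,
        "step": step,
        "tags": tags,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash the payload rather than storing raw data, preserving privacy
        # while still proving what this step received.
        "payload_hash": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest(),
    }
    audit_log.append(entry)

# Example: a loan-scoring workflow logs each stage a record passes through.
applicant = {"income": 52000, "tenure_months": 18}
log_step("app-1042", "ingest", applicant, tags=["pii", "raw"])
log_step("app-1042", "clean", applicant, tags=["pii", "validated"])
log_step("app-1042", "llm_summarise", applicant, tags=["model:llm-x", "derived"])

for entry in audit_log:
    print(entry["step"], entry["tags"], entry["payload_hash"][:12])
```

Hashing the payload instead of storing it is one way to keep the trail auditable without the log itself becoming a privacy liability.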
Division of AI labour
Governance and transparency are better achieved if AI agents are made responsible for smaller tasks that are combined through orchestration to deliver insights or automate processes. Supplemented by human oversight, such an approach will help eliminate the black box.
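A minimal sketch of this orchestration pattern, assuming purely illustrative agents and review logic: each "agent" is a small, single-purpose function, the orchestrator chains them while recording every hand-off, and a human reviewer approves the result.

```python
# Hypothetical sketch: small single-purpose agents composed by an
# orchestrator, with a human-oversight checkpoint. Names are illustrative.
from typing import Callable

Agent = Callable[[dict], dict]

def extract_fields(doc: dict) -> dict:
    """Agent 1: pass along only the fields the next step needs."""
    return {"text": doc["text"][:500]}

def classify_risk(doc: dict) -> dict:
    """Agent 2: a stand-in for a narrow model call that labels the input."""
    doc["risk"] = "high" if "dispute" in doc["text"].lower() else "low"
    return doc

def orchestrate(doc: dict, agents: list[Agent], review: Callable[[dict], bool]) -> dict:
    """Run each small agent in turn; every step is visible, not one black box."""
    trace = []
    for agent in agents:
        doc = agent(doc)
        trace.append(agent.__name__)  # auditable record of who did what
    doc["trace"] = trace
    # Human oversight: a reviewer signs off before the result is acted on.
    doc["approved"] = review(doc)
    return doc

result = orchestrate(
    {"text": "Customer raised a dispute over a duplicate charge."},
    agents=[extract_fields, classify_risk],
    review=lambda d: d["risk"] != "high",  # high-risk cases go to a person
)
print(result["risk"], result["trace"], result["approved"])
```

Because each agent does one narrow job, any wrong answer can be traced to a specific step rather than to an opaque end-to-end model.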
Dynamic scaling
Processes can be accelerated by implementing multiple channels to move complex and resource-heavy tasks through the workflow more efficiently.
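One way to read "multiple channels" is separate worker pools for light and heavy work, so a slow job never blocks the fast lane. The sketch below assumes that interpretation; pool sizes and task names are hypothetical.

```python
# Hypothetical sketch: route tasks to separate worker pools by estimated
# cost, so resource-heavy jobs flow through their own channel.
from concurrent.futures import ThreadPoolExecutor
import time

fast_lane = ThreadPoolExecutor(max_workers=4, thread_name_prefix="fast")
heavy_lane = ThreadPoolExecutor(max_workers=2, thread_name_prefix="heavy")

def handle(task: str, cost: float) -> str:
    time.sleep(cost)  # stand-in for real work such as a model call
    return f"{task} done in ~{cost}s"

def submit(task: str, cost: float):
    """Route by estimated cost: heavy work goes to its own channel."""
    lane = heavy_lane if cost > 1.0 else fast_lane
    return lane.submit(handle, task, cost)

futures = [
    submit("balance-check", 0.1),
    submit("fraud-scan", 2.0),  # heavy: runs in its own lane
    submit("notify", 0.1),
]
for f in futures:
    print(f.result())

fast_lane.shutdown()
heavy_lane.shutdown()
```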
Trust may be fundamental to our economies, but that does not mean we should automatically trust something just because it is the only way to get our pet project off the ground. It is not just acceptable but mission-critical to demand that AI earns our trust and that we leave the training wheels and report cards in place even after it has done so.
Healthcare, safety, bank accounts and national security are at stake. So, let us start opening AI black boxes wherever we find them.