FINAL WORD
Hozefa Saylawala, Middle East Director, Zebra Technologies
Year of the Co-pilot and LLM agents
2024 will be the year of the AI agent and the self-improving AI agent. However, as AI scientist Dr Ian Watson has noted, the AI agent is not new. What is new are the capabilities that AI agents will have thanks to LLMs. AI agents are tools that possess a level of autonomy beyond that of machine learning models or traditional computer programmes.
AI agents can sense, learn, respond and adapt to new situations, making decisions with little human intervention. Open-source frameworks like LangChain allow LLMs to interact with software and databases to complete tasks. OpenAI has released an Assistants API, which serves as an agent, and has launched GPTs, a platform for creating custom AI agents, along with a GPT Store.
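The sense–decide–act loop described above can be sketched in a few lines of Python. This is a hypothetical, framework-free illustration: the `llm_decide` function is a stand-in for a call to any LLM (via LangChain, the OpenAI Assistants API or similar), and the tool names and stock data are invented for the example.

```python
# Minimal sketch of an AI agent loop: the agent asks an LLM-like policy to
# pick a tool, executes it, and feeds the observation back as context.
# All names here (llm_decide, search_inventory) are illustrative stand-ins.

def search_inventory(query: str) -> str:
    """Hypothetical tool: look up a product in a stock database."""
    stock = {"scanner": 12, "printer": 0}
    return f"{query}: {stock.get(query, 0)} in stock"

def llm_decide(goal: str, history: list[str]) -> tuple[str, str]:
    """Stand-in for an LLM call that chooses the next tool and argument.
    A real agent would prompt a model here with the goal and history."""
    if not history:
        return ("search_inventory", "printer")
    return ("finish", history[-1])

TOOLS = {"search_inventory": search_inventory}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        tool, arg = llm_decide(goal, history)
        if tool == "finish":
            return arg  # the agent decides it has enough information
        observation = TOOLS[tool](arg)
        history.append(observation)  # feedback the agent adapts to
    return history[-1] if history else "no result"

print(run_agent("Is the printer in stock?"))  # → printer: 0 in stock
```

The loop is what distinguishes an agent from a single model call: the model's choice of tool, the tool's result and the decision to stop are all made iteratively, with no human in the loop.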
Could 2024 be the year when everyone gets an AI copilot?
Generative AI use-cases will continue to become mainstream across industries. Like GitHub's Copilot for developers and Microsoft 365 Copilot for desk workers, we will see more copilots come to market, serving the needs of front-line workers in retail, healthcare and manufacturing, for example.
When it comes to LLMs, we will see small, fine-tuned, task-specific LLMs outperform general-purpose models like GPT, PaLM and Claude for most enterprise use-cases. This trend is expected to continue, potentially leading to the emergence of an industrial copilot that leverages LLMs to streamline and optimise industrial processes.
By fine-tuning models for understanding and generating language specific to industrial workflows, these LLMs can serve as intelligent assistants or copilots, offering valuable insights, automating routine tasks and enhancing overall workforce efficiency. The integration of task-specific LLMs into enterprise software and hardware holds the potential for diverse applications across various industries.
In manufacturing, these models could aid in quality control and predictive maintenance. In retail, they might augment retail assistants' product knowledge, help in generating compelling product descriptions, improve online customer interactions and provide personalised shopping recommendations based on individual preferences and trends.
There will be advances in multimodal AI, with increasingly realistic content across audio, video, image and text, along with the integration of multimodal systems into robots and vehicles, as seen with the likes of Volkswagen and BMW, which are introducing LLMs into car systems.
Voice-driven, LLM-based operating systems will create new and improved ways to interact with devices, such as hands-free and smart voice assistants. We are already seeing early signs of this with the recently launched Humane Ai Pin and Rabbit R1.
Open-source AI will mature, particularly around generative AI. The open-source landscape has grown significantly in the past 12 months, with powerful examples like Meta's Llama 2 and Mistral AI's Mixtral models. This could shift the dynamics of the AI landscape in 2024 by providing smaller, less-resourced entities with access to sophisticated AI models and tools that were previously out of reach.
Many AI algorithms are self-learning, constantly evolving and refining their output, which, when left unchecked, could lead to the perpetuation of harmful bias, the spread of misinformation, privacy violations, security breaches and even harm to the environment, Neeley notes.
Much of the general discussion around AI centres on whether these systems will replace humans in the workforce. While there are some areas where AI far exceeds human capabilities, particularly in medical and tactical military fields where its precision and insight are light years ahead, in many instances AI output is missing that much-desired human element.
Whether it is emotion and empathy in copywriting, originality in design, or personalised risk assessment in finance, AI cannot yet replace the individual nuances of the human workforce.
In terms of future-proofing the world against the potential pitfalls of AI, global players are recognising the need for a united front. The European Union is drafting its Artificial Intelligence Act, which proposes three risk tiers:
• Unacceptable risk, such as the social scoring employed by the Chinese government and famously parodied in Black Mirror
• High risk, such as CV-scanning tools, which should be subject to specific legal requirements
• And those that are neither banned nor high risk, which could be left unregulated
Globally, a collection of non-profits and research institutes, such as the Partnership on Artificial Intelligence, the Institute for Human-Centred Artificial Intelligence and the Responsible AI Initiative, are establishing their own ethical standards, guiding companies in the use of AI to protect consumers and employees.
And while Elon Musk previously called for a pause in the creation of AI digital minds, along with Steve Wozniak, the co-founder of Apple, and Emad Mostaque, who founded London-based Stability AI, he now feels that ship has sailed.
As a co-founder of OpenAI, Musk believes the way to avoid what he describes as a Terminator future is to create an AI programme that is at least as smart as humans. Announcing his superintelligence venture xAI on Twitter in July, he said that, from an AI safety standpoint, a maximally curious AI, one that is trying to understand the universe, "is I think going to be pro-humanity".
84 INTELLIGENTCIO MIDDLE EAST www.intelligentcio.com