TECH TALK
Rajkumar Vijayarangakannan, Lead, Network Design and DevOps, ManageEngine
Tejas Mehta, Senior Vice President and General Manager, Qlik Middle East and Africa
Innovations in edge computing and automation have improved data centre performance, efficiency, and sustainability. When it comes to performance, edge computing plays a major role, processing data closer to the source to deliver faster results.
“One big shift is towards smaller, more efficient AI models that are faster and cheaper to deploy. Companies are also embracing open data frameworks like Apache Iceberg to improve scalability and governance,” says Tejas Mehta, Senior Vice President and General Manager, Qlik Middle East and Africa.
Businesses are moving towards flexible, cloud-agnostic architectures to avoid vendor lock-in and maintain control over their data strategies.
“The conversation now is no longer just about raw compute power; it is about architecting infrastructure that can handle the sheer scale and unpredictability of AI workloads,” says Haider Aziz, Vice President META, VAST Data.
One key trend is the move away from traditional, siloed infrastructure towards more fluid, software-defined architectures that allow resources to be allocated dynamically.
“High-density computing requires liquid cooling solutions, such as direct-to-chip and immersion cooling, to manage extreme heat efficiently,” says Ian Paul, Hyperscale and Colocation Strategic Segments Director METCA, Vertiv.
Intelligent power management enables reliability and energy efficiency, while grid-interactive UPS solutions help stabilise power supply. Scalability is also key, with modular data centres enabling rapid expansion.
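To see why cooling capacity must scale with rack density, a back-of-envelope calculation helps: the coolant flow needed to remove a given heat load follows from the standard heat-transfer relation Q = ṁ·c·ΔT. The sketch below is purely illustrative; the rack power and temperature-rise figures are assumptions, not vendor specifications.

```python
# Illustrative sketch: coolant flow needed for direct-to-chip liquid cooling.
# Assumes a water coolant loop; all figures are examples, not vendor guidance.

WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)
WATER_DENSITY = 1.0           # kg/L, approximately, at typical loop temperatures

def required_flow_l_per_min(heat_load_kw: float, delta_t_k: float) -> float:
    """Coolant flow (L/min) to absorb heat_load_kw with a delta_t_k temperature rise."""
    mass_flow_kg_s = (heat_load_kw * 1000.0) / (WATER_SPECIFIC_HEAT * delta_t_k)
    return mass_flow_kg_s / WATER_DENSITY * 60.0

# A hypothetical 40 kW AI rack with a 10 K coolant temperature rise:
print(round(required_flow_l_per_min(40, 10), 1))  # ~57.3 L/min per rack
```

Doubling rack density while holding the temperature rise fixed doubles the required flow, which is why high-density AI halls are plumbed for liquid from the outset rather than retrofitted.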
Configuring AI data centres
AI model training requires powerful computing infrastructure, typical of a data centre but with larger-than-usual power and cooling demands. AI inferencing, which uses trained models to make predictions, requires less infrastructure and can run in the cloud, on-premises, and on smart devices. This makes inferencing well-suited to smaller edge data centres.
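The gap between training and inferencing demands can be made concrete with a widely used rule of thumb: training compute is roughly 6 FLOPs per parameter per token. The sketch below uses that approximation to size a hypothetical training cluster; the model size, token count, accelerator throughput, and utilisation figures are all illustrative assumptions.

```python
# Back-of-envelope sizing for an AI training run, using the common
# approximation that training FLOPs ~= 6 * parameters * tokens.
# All figures below are illustrative assumptions, not measured numbers.

def training_gpu_days(params: float, tokens: float,
                      gpu_flops: float = 1e15, utilisation: float = 0.4) -> float:
    """Estimated GPU-days to train a model of `params` parameters on `tokens` tokens.

    gpu_flops: peak FLOP/s of one accelerator (1e15 = 1 PFLOP/s, illustrative).
    utilisation: fraction of peak throughput actually sustained (often 30-50%).
    """
    total_flops = 6.0 * params * tokens
    seconds = total_flops / (gpu_flops * utilisation)
    return seconds / 86_400  # seconds per day

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
print(round(training_gpu_days(70e9, 2e12)))  # tens of thousands of GPU-days
```

A single inference request, by contrast, needs only about 2 FLOPs per parameter per token generated, which is why inferencing fits comfortably in far smaller edge deployments.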
“Liquid cooling has become the industry standard for data centres, as it delivers superior thermal management compared to traditional air-cooling methods,” says Vijayarangakannan.
Each cooling methodology is tailored to specific AI workload demands. The choice of cooling method depends on factors such as power density, hardware configuration, and efficiency goals.
“AI workloads require a unique infrastructure across primary and edge data centres. Primary data centres handle large-scale AI training with GPU- and TPU-powered clusters, ultra-fast NVMe storage, and advanced cooling, including liquid cooling, to manage high energy demands. They focus on raw computing power, scalability, and security,” says Qlik’s Mehta.
Edge data centres, however, prioritise low latency and real-time processing by placing AI inference closer to users. They rely on energy-efficient AI accelerators, distributed micro-data hubs, and high-speed networking to reduce bottlenecks.
VAST Data’s Aziz points out: “The infrastructure of new primary and edge data centres is a different breed from traditional enterprise IT setups. At the core, it revolves around sizeable clusters of GPUs or AI accelerators, connected by high-bandwidth, low-latency networking; storage needs to keep pace.”
Architectures that minimise bottlenecks help push the industry away from legacy hierarchical storage towards flatter, high-performance models.
Edge AI, on the other hand, is about moving intelligence closer to where data is generated, whether that’s an autonomous vehicle, a factory floor, or a city-wide surveillance network. The focus is on balancing real-time inferencing with practical constraints like power consumption, connectivity, and space.
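The trade-off described above, real-time inferencing versus power and space constraints, can be caricatured as a simple placement check: run at the edge only when the latency budget is tighter than a round trip to the primary data centre and the edge site can power the model. The thresholds and field names below are hypothetical, chosen for illustration.

```python
# Hypothetical sketch of the edge-vs-primary placement trade-off: thresholds
# and field names are illustrative assumptions, not a real orchestration API.

from dataclasses import dataclass

@dataclass
class Workload:
    latency_budget_ms: float  # end-to-end response-time requirement
    model_power_w: float      # accelerator power draw for this model

def place(w: Workload, wan_round_trip_ms: float = 40.0,
          edge_power_budget_w: float = 500.0) -> str:
    """Return 'edge' or 'primary' for an inference workload."""
    if w.latency_budget_ms < wan_round_trip_ms:
        # Too tight for a trip to the primary data centre: must run at the
        # edge, provided the site can power the accelerator.
        if w.model_power_w <= edge_power_budget_w:
            return "edge"
        raise ValueError("latency-critical workload exceeds edge power budget")
    return "primary"

print(place(Workload(latency_budget_ms=10, model_power_w=75)))    # edge
print(place(Workload(latency_budget_ms=500, model_power_w=700)))  # primary
```

The error branch captures the hard case the article hints at: a latency-critical model that exceeds the edge power envelope forces either model compression or a rethink of site design.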
“ Building for the future is now both the big challenge and opportunity,” adds Aziz.
“AI-ready primary data centres feature high-density racks (40kW+), advanced liquid cooling (direct-to-chip or immersion), and high-capacity power systems with grid-interactive UPS for efficiency. They rely on scalable architectures with GPU-accelerated servers and AI-driven automation for workload optimisation,” says Vertiv’s Paul.
80 INTELLIGENTCIO MIDDLE EAST www.intelligentcio.com