Expert Insights Powered by Ramez Dandan: Former CTO, Microsoft Middle East | 30+ years in IT & Telecom.
The data center facilities that served the GCC region’s first wave of cloud adoption are inadequate for what comes next. The exponential growth of AI workloads, with their extreme power-density requirements, specialized cooling demands, and fundamentally different hardware architectures, is forcing a comprehensive rethink of how data centers in the Gulf are designed, built, and operated.
This is not a marginal upgrade cycle. It is a design paradigm shift in which facilities originally built for general-purpose cloud compute are being supplemented, and in some cases replaced, by purpose-built AI data centers that look fundamentally different from their predecessors.
Ramez Dandan, who led the establishment of Microsoft’s first Middle East data center region, has observed this transition from the inside. His perspective illuminates both the technical and strategic dimensions of this redesign wave.
Modularity and Scalability: The New Design Imperative
The shift toward modular and scalable data centers has been underway for several years, but AI has accelerated it. The economics are straightforward: building out full capacity upfront and waiting for tenants to arrive is financially untenable. Modular designs instead allow operators to add capacity incrementally, aligned with actual demand.
“It doesn’t make economic sense to build out everything and then wait for tenants to come in and occupy it. If you had a modular design, you could fairly quickly add capacity based on demand with a shorter lead time.”
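The capital logic behind this can be sketched numerically. The figures below are entirely hypothetical (an invented $10M/MW build cost, an assumed 8 percent discount rate, and an assumed two-year demand cadence), but they illustrate why deferring spend until demand materializes improves the present-value economics:

```python
# Hypothetical illustration: discounted capital cost of building a
# 100 MW campus upfront versus adding 20 MW modules as demand arrives.
# All numbers are invented for illustration, not market data.

def discounted_capex(spend_schedule, rate=0.08):
    """Present value of a capex schedule given as {year: spend_in_$M}."""
    return sum(spend / (1 + rate) ** year for year, spend in spend_schedule.items())

# Upfront: pay for all 100 MW (at a hypothetical $10M per MW) in year 0.
upfront = discounted_capex({0: 100 * 10})

# Modular: five 20 MW modules, one deployed every two years as demand fills.
modular = discounted_capex({year: 20 * 10 for year in (0, 2, 4, 6, 8)})

print(f"Upfront build: ${upfront:,.0f}M (PV)")
print(f"Modular build: ${modular:,.0f}M (PV)")
```

In this sketch the phased build costs roughly a quarter less in present-value terms, before even counting the risk reduction of matching spend to signed demand.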
Dandan attributes much of this design evolution to the influence of hyperscalers entering the region. When Microsoft, AWS, and Google Cloud began deploying in the GCC, their facility requirements, particularly around scalability, redundancy, and power distribution, raised the standard for the entire market.
“Their design elements, their DC design requirements, they taught the operators a lot. And that knowledge went into the design of the next facility.”
Purpose-Built for AI: A New Category of Facility
The most significant design shift is the emergence of facilities explicitly designed for AI workloads. Traditional data centers were built around relatively uniform server racks with predictable power and cooling profiles. AI workloads, particularly those running GPU clusters for training and inference, present a fundamentally different challenge.
“With AI workloads and the kind of hardware they require, higher-density server racks have always been the trajectory, but this was accelerated with the adoption of AI.”
Dandan notes that new facilities in the region are increasingly being designed from the ground up for AI workloads. While retrofitting is possible for some installations, purpose-built design delivers significantly better performance and efficiency outcomes.
“I’m seeing more and more new facilities that are AI-designed, designed for AI specifically.”
According to the Uptime Institute, AI-optimized racks can demand 40 to 100 kW per rack, compared to the 6 to 10 kW typical of traditional enterprise workloads. This tenfold increase in power density cascades into every aspect of facility design.
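A back-of-the-envelope calculation shows how that density jump cascades into facility planning. The 10 MW budget and rack-density midpoints below are simplified assumptions drawn from the ranges cited above:

```python
# Back-of-envelope: how many racks a fixed IT power budget supports
# at traditional vs AI-era rack densities (midpoints of the Uptime
# Institute ranges cited above; the 10 MW budget is hypothetical).

IT_BUDGET_KW = 10_000  # assumed 10 MW facility IT power budget

def racks_supported(budget_kw, kw_per_rack):
    return budget_kw // kw_per_rack

traditional = racks_supported(IT_BUDGET_KW, 8)    # midpoint of 6-10 kW/rack
ai_optimized = racks_supported(IT_BUDGET_KW, 70)  # midpoint of 40-100 kW/rack

print(f"Traditional (~8 kW/rack):   {traditional} racks")
print(f"AI-optimized (~70 kW/rack): {ai_optimized} racks")
# Nearly all IT power becomes heat, so a 70 kW rack must reject roughly
# 70 kW of heat -- far beyond what air cooling alone handles comfortably.
```

The same power envelope that once fed over a thousand enterprise racks now supports only a few hundred AI racks, each concentrating an order of magnitude more heat into the same footprint.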
[Figure: Traditional vs AI-Optimized Data Center Rack Layout. Comparison of rack power density, cooling integration, and infrastructure architecture; key design shifts, traditional to AI-era.]
The Cooling Revolution: From Air to Liquid to On-Chip
Cooling is where the AI design challenge becomes most acute. Traditional air-based cooling systems are insufficient for the heat densities generated by modern GPU clusters. The industry response has been a rapid pivot toward advanced cooling, and the GCC market is actively adopting these innovations.
“Liquid cooling, on-chip cooling, you can cool the chip itself, not just the ambient heat. There are a lot of innovations on how to keep those fast processors humming along and not overheat and fail.”
Liquid cooling systems, including direct-to-chip cooling, rear-door heat exchangers, and full immersion cooling, are being incorporated. Dandan suggests that gas-based cooling may eventually enter the picture as well, though this remains further from commercial deployment.
Critically, the global supply chain for these technologies is accessible to GCC operators. The same vendors deploying advanced cooling solutions in North America, Western Europe, and the Asia Pacific have local presence in the Gulf, meaning adoption is primarily a question of procurement timing rather than availability.
[Figure: Data Center Cooling Technology Evolution. From ambient air management to full immersion: four generations of thermal engineering.]
AI Operating Data Centers: The Feedback Loop
Beyond being housed in data centers, AI is increasingly being deployed to manage and optimize data center operations themselves. Modern data center facilities generate enormous volumes of environmental, power, and performance data from thousands of sensors monitoring conditions continuously. This makes them, as Dandan describes it, a textbook case for AI application.
“Data centers are extremely smart buildings. Everything is monitored 24/7, logged, and analyzed. Lots of data, critical infrastructure, and decisions have to be taken. That’s a textbook case for AI: automating complex tasks based on lots of data.”
AI-enabled tools are being integrated into power management, security monitoring, and predictive maintenance systems. Dandan suggests that an AI-controlled data center could be achievable in the near term, with robotic systems handling the physical tasks.
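A minimal sketch of the kind of telemetry-driven automation described above is a rolling statistical check on a sensor stream: flag a reading that drifts far beyond its recent history. The window size, threshold, and temperature data below are hypothetical; production systems use far richer models than a z-score:

```python
# Minimal sketch (hypothetical data and thresholds): flag a cooling-loop
# temperature reading that deviates more than 3 standard deviations from
# the sensor's recent history.
from collections import deque
from statistics import mean, stdev

class SensorMonitor:
    def __init__(self, window=60, z_threshold=3.0):
        self.history = deque(maxlen=window)  # rolling window of readings
        self.z_threshold = z_threshold

    def observe(self, reading):
        """Return True if the reading is anomalous vs recent history."""
        anomalous = False
        if len(self.history) >= 10:  # need enough history to estimate spread
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(reading)
        return anomalous

monitor = SensorMonitor()
# 30 normal readings cycling around 22 C, then a sudden spike to 31 C.
readings = [22.0 + 0.1 * (i % 5) for i in range(30)] + [31.0]
alerts = [r for r in readings if monitor.observe(r)]
print(alerts)  # only the 31.0 spike is flagged
```

The appeal for operators is that this pattern generalizes: the same loop over sensor streams can feed power management, security monitoring, and the predictive-maintenance systems Dandan describes.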
The Edge Computing Gap: An Untapped GCC Opportunity
While edge computing accounts for approximately 30 percent of global cloud compute capacity, Dandan estimates its penetration in the GCC at less than 10 percent. This gap represents both a maturity indicator and a significant investment opportunity.
“The utilization of edge computing is on the rise in the GCC, and I think that’s an area that the operators need to focus on. Or it could be an opportunity for a new, fresh set of operators to come in.”
The drivers for edge deployment are low-latency applications, IoT networks, smart city infrastructure, and AI inference. As 5G rollouts mature and AI inference workloads grow, the demand for edge capacity is expected to converge toward global norms.
[Figure: Edge Computing Penetration, Global vs GCC. Edge computing as a share of total cloud compute capacity in the US, EU, and APAC versus the GCC, with rapid growth expected in the latter. Source: Ramez Dandan, Former Microsoft CTO MENA.]
Power and Sustainability: The Nuclear Question
The energy demands of AI-era data centers amplify an already critical sustainability challenge. In the GCC, solar energy is the most abundant renewable resource, but Dandan notes that integration has historically been imperfect; solar farms were often located too far from data center facilities or were not connected to the relevant electrical sub-grids.
These technical barriers are being addressed, but the question remains: is solar sufficient for what AI-era data centers require?
“Solar is great, but is it enough? I think eventually we will have to rely on things like nuclear energy, modular nuclear reactors attached to data centers.”
The concept of small modular reactors dedicated to data center campuses is gaining serious attention globally, with projects under development in the United States and Europe. Whether and how this technology is adopted in the GCC, a region without existing nuclear power generation infrastructure, remains an open question, but one that industry leaders are actively discussing.
The Redesign Imperative
The GCC data center market is at an inflection point. The facilities that enabled the region’s first wave of cloud adoption were a necessary starting point, but the AI era demands a fundamentally different physical infrastructure, one characterized by higher power densities, advanced cooling systems, modular scalability, and increasingly autonomous operations.
For operators and investors, the implication is clear: the competitive advantage will belong to those who design for the next decade’s workloads rather than optimizing for the last decade’s requirements. In a region growing at 20 to 27 percent annually, the cost of getting this design equation right, or wrong, is substantial.
About the Expert
Ramez Dandan is a technology strategist with over 30 years of experience in IT and telecommunications across the Middle East. He served as CTO of Microsoft Middle East, where he led the establishment of Microsoft’s first Middle East data center region. His expertise spans digital transformation, cloud infrastructure, and technology policy advisory for enterprise and government clients across the GCC.
Access Expert Insights Through Infoquest
This article is based on an expert consultation conducted through the Infoquest expert network. Organizations seeking deeper analysis of GCC data center infrastructure, investment due diligence, or direct access to experts like Ramez Dandan can reach out to us below.