Expert Insights from: Stephane Duproz, Data Center CEO/MD (25 years, Europe & Africa), and Ramez Dandan, Former CTO, Microsoft Middle East
The data center of 2035 will bear little resemblance to the facilities operating today. Two seasoned industry leaders, drawing from decades of experience across Europe, Africa, and the Middle East, describe a future defined by energy self-sufficiency, water independence, purpose-built specialization, and a recalibrated relationship between artificial intelligence and human operations.
These are not speculative projections. They are informed extrapolations from trends already observable in the market, grounded in the practical experience of professionals who have built, operated, and scaled data center facilities across radically different environments. In separate interviews with Infoquest, Stephane Duproz and Ramez Dandan offered converging visions that, taken together, paint a detailed picture of where the industry is heading.
Sustainability Reframed: Data Centers as Carbon Reduction Tools
The prevailing narrative, that data centers are energy-hungry environmental liabilities, is, according to Duproz, fundamentally misleading. His argument is structural: by consolidating digital infrastructure that would otherwise be distributed across hundreds of individual operations, data centers achieve efficiency through mutualization, dramatically reducing the collective carbon footprint of the digital economy.
“It is not the data centers that are using a lot of power. It is the digital world. Data centers actually reduce the amount of power used by the digital world because they gather and mutualize equipment.” — Stephane Duproz
Duproz illustrates the point from operational experience: when he served as CEO of Africa Data Centers, the company served 400 customers from shared facilities. Without that consolidation, each customer would have required its own infrastructure, including redundant generator capacity, multiplying equipment, fuel consumption, and emissions many times over.
This reframing does not diminish the urgency of improving data center energy efficiency. Rather, it recontextualizes the industry’s role: data centers are tools of carbon reduction, not its primary drivers. The imperative is to make those tools as efficient as possible.
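The mutualization argument lends itself to simple arithmetic. The sketch below uses hypothetical numbers (customer count aside, none are from the article) to compare the standby-generator fleet needed when every customer provisions its own N+1 redundancy against a single shared facility sized for the aggregate load.

```python
import math

CUSTOMERS = 400
LOAD_KW_PER_CUSTOMER = 50   # assumed average critical load per customer
GENERATOR_KW = 500          # assumed capacity of one standby generator

def generators_needed(load_kw: float, redundancy: int = 1) -> int:
    """N+redundancy sizing: enough units to carry the load, plus spares."""
    return math.ceil(load_kw / GENERATOR_KW) + redundancy

# Distributed model: every customer buys and fuels its own N+1 capacity.
distributed = sum(generators_needed(LOAD_KW_PER_CUSTOMER) for _ in range(CUSTOMERS))

# Mutualized model: one shared facility sizes N+1 against the aggregate load.
mutualized = generators_needed(CUSTOMERS * LOAD_KW_PER_CUSTOMER)

print(f"distributed: {distributed} generators, mutualized: {mutualized}")
```

Under these assumptions the shared facility needs a small fraction of the distributed fleet, because redundancy is pooled rather than duplicated per tenant; the exact ratio depends on the load and unit sizes chosen.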
Energy Independence: The Off-Grid Data Center
Both experts converge on a striking prediction: the data center of the future will produce its own power and potentially return surplus energy to the grid. This represents a fundamental shift from the current model, where data centers are among the largest consumers on local electrical grids.
“In my view, the future data center is going to be off-grid. It will produce its own power and potentially overproduce and give some of it back to the grid. That can be with small nuclear plants.” — Stephane Duproz
Dandan reaches a similar conclusion from the GCC perspective. While acknowledging that solar energy is abundant in the region, he questions whether it can scale sufficiently to meet the demands of AI-era data center operations. Both experts point toward small modular nuclear reactors as a likely component of the long-term energy solution.
“Solar is great, but is it enough? I think eventually we will have to rely on things like nuclear energy—modular nuclear reactors attached to data centers.” — Ramez Dandan
The small modular reactor concept is gaining traction globally. Microsoft, Google, and Amazon have all announced agreements or investments related to nuclear power for data center operations. While the GCC does not currently have nuclear generation infrastructure dedicated to data centers, the trajectory of global adoption suggests it is a question of when, not whether, this technology reaches the region.
Water Independence: A Non-Negotiable Design Principle
In a forward-looking industry discussion dominated by power efficiency metrics, Duproz raises an issue he believes deserves equal attention: water consumption. Many current cooling technologies achieve power efficiency gains at the cost of significant water usage, a trade-off he considers unacceptable.
“We should be as prudent and efficient in the way we use water as in the way we use power. Water is likely to become a scarcer and scarcer resource.” — Stephane Duproz
This is not merely an ethical position; it is an operational design principle that Duproz has already implemented. At a newly built facility in Cape Town, his team deployed a device that extracted moisture from the air to fill the facility’s closed-loop cooling circuits, eliminating the need for an external water supply. The process took approximately one month, after which the facility operated with zero ongoing water consumption and actually produced surplus water for the surrounding community.
For the GCC, where water scarcity is a fundamental constraint, this principle has particular relevance. Data center designs that achieve cooling efficiency at the expense of water consumption may solve one problem while creating another. The future design standard, according to Duproz, will require solutions that optimize for both resources simultaneously.
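Duproz's dual-resource principle maps onto two standard industry metrics: PUE (power usage effectiveness, total facility energy over IT energy) and WUE (water usage effectiveness, site water consumed per kWh of IT energy). The sketch below compares two cooling strategies with illustrative numbers of my own; it is not data from the facilities discussed.

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (ideal = 1.0)."""
    return total_facility_kwh / it_kwh

def wue(site_water_liters: float, it_kwh: float) -> float:
    """WUE = site water used per kWh of IT energy (ideal = 0.0 L/kWh)."""
    return site_water_liters / it_kwh

it_kwh = 10_000_000  # assumed annual IT load

# Evaporative cooling: strong power efficiency, at a water cost.
evap = (pue(12_000_000, it_kwh), wue(18_000_000, it_kwh))

# Closed-loop cooling (as in the Cape Town example): slightly higher
# power draw, near-zero ongoing water consumption.
closed = (pue(13_000_000, it_kwh), wue(0, it_kwh))

print(f"evaporative: PUE={evap[0]:.2f}, WUE={evap[1]:.2f} L/kWh")
print(f"closed-loop: PUE={closed[0]:.2f}, WUE={closed[1]:.2f} L/kWh")
```

Optimizing PUE alone would favor the evaporative design; the standard Duproz describes scores both columns together, which is why the closed-loop design can win despite a higher power draw.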
The Campus Model: Specialized Facilities for Specialized Workloads
Both experts reject the one-size-fits-all approach to data center design. The future, as Duproz envisions it, will be organized around large campuses housing multiple specialized facility types, each optimized for distinct workloads.
“You don’t build the same data center if it is for cloud or if it is for AI or if it is for connectivity hubs. You build it completely differently. One size fits all is not going to work.” — Stephane Duproz
This campus model would include dedicated facilities for cloud computing, AI training, AI inference, and connectivity exchange, each with distinct designs optimized for their specific power, cooling, network, and physical requirements. These would operate as an integrated ecosystem within the campus, sharing common power generation and management infrastructure while maintaining specialized internal configurations.
At a broader geographic level, Duproz envisions a stellar organizational pattern: large connectivity hubs at the center, with content and edge data centers orbiting around them in proximity to user populations. This model would serve different latency requirements: connectivity exchange requires centralization, while content delivery and edge compute require geographic distribution.
AI’s Geographic Divide: Training Anywhere, Inference Everywhere
One of the most strategically significant distinctions both experts draw is between AI training and AI inference workloads, and their dramatically different geographic requirements.
“Only inference needs to be close to the user. That big component of AI, being the training, doesn’t need to be close to the user and can be anywhere. So it will go where there is power, where the power is cheap, and where it is green.” — Stephane Duproz
AI training, the computationally intensive process of developing models, requires massive capacity but is latency-insensitive: no end user sits in the loop while a model trains, so the work can run anywhere with abundant, affordable, green power. This opens opportunities for countries with energy advantages but limited proximity to major user populations, from Norway to Uganda.
AI inference, the interface between trained models and end users, is latency-sensitive and must be located closer to users. As AI applications become more personalized and interactive, inference workloads are expected to grow substantially, driving demand for distributed capacity across all markets, including emerging ones.
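The physics behind this divide can be sketched with a back-of-the-envelope latency estimate. Assuming light propagates through fiber at roughly 200,000 km/s and ignoring routing and queuing overhead (both assumptions of mine, not figures from the experts), round-trip time grows linearly with distance:

```python
# Light in fiber covers roughly 200,000 km/s, i.e. ~200 km per millisecond
# one way. Real networks add routing and queuing delay on top of this floor.
FIBER_KM_PER_MS = 200_000 / 1000

def round_trip_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Metro edge site, regional hub, and a remote "cheap power" location.
for site_km in (50, 1_000, 6_000):
    print(f"{site_km:>5} km -> ~{round_trip_ms(site_km):.1f} ms RTT minimum")
```

An interactive inference budget of a few tens of milliseconds rules out the distant site, while a training job measured in hours is indifferent to all three, which is exactly the asymmetry Duproz describes.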
Dandan reinforces this from the GCC perspective, noting that AI is already being deployed to optimize data center operations through sensor data analysis, predictive maintenance, and increasingly automated management systems. This dual role, AI as both workload and operational tool, creates a feedback loop that will accelerate both adoption and demand.
The Persistent Human Element
Despite the trajectory toward increased automation, both experts push back against the narrative of fully autonomous data centers. Duproz is particularly emphatic on this point.
“Contrary to what you might expect, I still believe that in 10 years there will be a lot of human technicians dealing with data centers as opposed to AI.” — Stephane Duproz
His reasoning is rooted in risk management. AI can assist in detecting issues earlier and improving diagnostic understanding, but the intervention itself, the physical action taken on critical infrastructure, demands the judgment, contextual understanding, and accountability that human operators provide. In an environment where a single operational error can affect hundreds of tenants and their downstream users, the cost of an AI misjudgment is too high to accept without human oversight.
Dandan’s perspective is more evolutionary: he sees AI taking over an increasing share of monitoring and decision support, with robotics potentially handling routine physical tasks, but acknowledges that human oversight will remain essential for the foreseeable future. The difference is one of degree rather than direction; both agree that the fully autonomous data center remains further away than popular narratives suggest.
The Cloud Catch-Up: A Coming Investment Wave
Duproz identifies a near-term dynamic that has significant implications for data center demand globally and in emerging markets specifically. Over the past two years, hyperscale cloud providers have redirected investment disproportionately toward AI infrastructure, creating an underinvestment in cloud deployment—particularly in emerging markets.
“The hyperscalers have directed their investment money towards AI quite a lot. They will have to catch up with those two years of less development. In emerging countries, the catch-up is going to be even stronger.” — Stephane Duproz
This catch-up dynamic could drive a significant wave of cloud infrastructure investment in the near term, creating opportunity for both data center operators and the broader ecosystem of contractors, equipment suppliers, and service providers that support deployments. For the GCC and other emerging markets, this represents a potential acceleration in the already strong growth trajectory.
Specialization Without Fragmentation
Both experts address the question of whether market specialization (companies focused exclusively on AI data centers or on cloud data centers) is the likely future. The consensus is nuanced: designs must specialize, but companies need not.
“Certainly, data center design will specialize based on its usage. Now, a single company can do everything. If you look at the dominant players—Equinix and Digital Realty—they have the capacity to do everything.” — Stephane Duproz
This means that the competitive differentiator will not be whether a company can build AI-ready, cloud-optimized, or connectivity-focused facilities, but how well it integrates specialized designs into coherent campus-level operations that serve the full spectrum of customer requirements.
Implications for Investors and Operators
The future data center, as described by two of the industry’s most experienced practitioners, is self-powered, water-independent, purpose-built by workload type, organized into integrated campuses, and still fundamentally dependent on human operational excellence. It leverages AI for optimization and serves AI workloads at unprecedented scale, but maintains human accountability at every critical decision point.
For investors, this vision demands a longer investment horizon and a more sophisticated evaluation framework than simple capacity metrics. The winners will be operators who master the integration of specialized designs, achieve energy self-sufficiency, and build teams capable of sustained operational excellence. For operators, the message is equally clear: plan for specialization, invest in people, and design for the workloads of the next decade rather than the last one.
About the Experts
Stephane Duproz has spent 25 years leading data center operators at the managing director and CEO level across Europe and Africa. He served as chairman of the European Data Center Association and is vice president of the Africa Data Center Association. He currently provides advisory services to investors and operators in data center infrastructure.
Ramez Dandan is a technology strategist with over 30 years of experience in IT and telecommunications across the Middle East. He served as Chief Technology Officer at Microsoft for the last seven years of his 17-year tenure, where he led the establishment of Microsoft’s first regional data center in the Middle East.
Access Expert Insights Through Infoquest
This article draws on expert consultations conducted through the Infoquest expert network. Organizations seeking deeper analysis of data center sustainability strategy, future infrastructure design, or AI workload planning can access direct consultations with industry experts through Infoquest.