Expert Insights from: Stephane Duproz, Data Center CEO/MD (25 years, Europe & Africa), and Ramez Dandan, Former CTO, Microsoft Middle East

The data center of 2035 will bear little resemblance to the facilities operating today. Two seasoned industry leaders, drawing from decades of experience across Europe, Africa, and the Middle East, describe a future defined by energy self-sufficiency, water independence, purpose-built specialization, and a recalibrated relationship between artificial intelligence and human operations.

These are not speculative projections. They are informed extrapolations from trends already observable in the market, grounded in the practical experience of professionals who have built, operated, and scaled data center facilities across radically different environments. In separate interviews with Infoquest, Stephane Duproz and Ramez Dandan offered converging visions that, taken together, paint a detailed picture of where the industry is heading.

Sustainability Reframed: Data Centers as Carbon Reduction Tools

The prevailing narrative that data centers are energy-hungry environmental liabilities is, according to Duproz, fundamentally misleading. His argument is structural: by consolidating digital infrastructure that would otherwise be distributed across hundreds of individual operations, data centers achieve efficiency through mutualization, dramatically reducing the collective carbon footprint of the digital economy.

“It is not the data centers that are using a lot of power. It is the digital world. Data centers actually reduce the amount of power used by the digital world because they gather and mutualize equipment.” — Stephane Duproz

Duproz illustrates with operational experience: when he served as CEO of Africa Data Centres, the company served 400 customers from shared facilities. Without that consolidation, each customer would have required their own infrastructure, including redundant generator capacity, multiplying equipment, fuel consumption, and emissions by orders of magnitude.
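The mutualization arithmetic can be sketched in a few lines. The numbers below (customers per shared generator, redundancy levels) are illustrative assumptions, not figures from the interview; only the 400-customer count comes from the article:

```python
import math

# Illustrative sketch of mutualization: a standalone operation needs its own
# backup generator even though it idles almost all the time, while a shared
# facility sizes redundancy once for all tenants. All ratios are assumptions.

def dedicated_generators(customers: int, redundancy: int = 1) -> int:
    """Each customer runs 1 primary plus `redundancy` backup generators."""
    return customers * (1 + redundancy)

def shared_generators(customers: int, customers_per_generator: int = 50,
                      redundancy: int = 2) -> int:
    """A colocation facility pools load onto large generators, plus spares."""
    primaries = math.ceil(customers / customers_per_generator)
    return primaries + redundancy

dedicated = dedicated_generators(400)  # 800 generators across 400 server rooms
shared = shared_generators(400)        # 10 generators for one shared campus
print(dedicated, shared, dedicated / shared)  # 800 10 80.0
```

Under these assumed ratios, consolidation cuts the generator fleet by a factor of 80; the exact multiple depends on sizing, but the direction of the effect is what Duproz's argument rests on.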

This reframing does not diminish the urgency of improving data center energy efficiency. Rather, it recontextualizes the industry’s role: data centers are tools of carbon reduction, not its primary drivers. The imperative is to make those tools as efficient as possible.

Energy Independence: The Off-Grid Data Center

Both experts converge on a striking prediction: the data center of the future will produce its own power and potentially return surplus energy to the grid. This represents a fundamental shift from the current model, where data centers are among the largest consumers on local electrical grids.

“In my view, the future data center is going to be off-grid. It will produce its own power and potentially overproduce and give some of it back to the grid. That can be with small nuclear plants.” — Stephane Duproz

Dandan reaches a similar conclusion from the GCC perspective. While acknowledging that solar energy is abundant in the region, he questions whether it can scale sufficiently to meet the demands of AI-era data center operations. Both experts point toward small modular nuclear reactors as a likely component of the long-term energy solution.

“Solar is great, but is it enough? I think eventually we will have to rely on things like nuclear energy—modular nuclear reactors attached to data centers.” — Ramez Dandan

The small modular reactor concept is gaining traction globally. Microsoft, Google, and Amazon have all announced agreements or investments related to nuclear power for data center operations. While the GCC does not currently have nuclear generation infrastructure dedicated to data centers, the trajectory of global adoption suggests it is a question of when, not whether, this technology reaches the region.

The Self-Sufficient Data Center: Energy & Water Independence

How future data centers will produce their own power, capture their own water, and return surplus to local communities.

Energy independence cycle:
- On-site renewable generation (available now): solar arrays, wind installations, and geothermal systems provide baseline clean power directly to the campus.
- Small modular nuclear reactors (emerging 2028–2032): compact nuclear plants attached directly to data center campuses provide reliable, carbon-free baseload power independent of grid constraints.
- Battery and storage buffer (scaling now): on-site energy storage smooths supply fluctuations, ensures uninterrupted power during source transitions, and replaces diesel backup generators.
- Surplus returned to grid (future vision): overproduction is fed back into the local grid, transforming the data center from a power consumer into a community energy contributor.

Water independence cycle:
- Atmospheric moisture capture (proven in practice): specialized devices extract water from humidity in the air, eliminating dependency on the municipal water supply entirely; proven at the Cape Town facility.
- Closed-loop cooling circuit (industry standard): water circulates in a sealed system, never consumed, only recirculated; once filled, the system operates indefinitely without external water input.
- Advanced cooling without water waste (evolving technologies): next-generation cooling that improves power efficiency without wasting water, rejecting the current trade-off of water consumption for energy savings.
- Surplus water to communities (demonstrated in Cape Town): excess captured water is donated to surrounding communities, making the data center a net positive water contributor.

Real-world case: Cape Town water independence. When Africa Data Centres built their Cape Town facility, they installed atmospheric moisture capture devices to fill their closed-loop cooling circuits without drawing from the municipal water supply. It took about a month to fill the system, but once filled, the facility needed zero external water. The devices continued producing water, and the surplus was donated to local communities.

“We didn’t need any water. And actually we continue producing water with that device that we give to the local communities.” — Stephane Duproz

Net impact by 2035:
- Net energy producer: surplus power returned to the grid
- Net water contributor: excess donated to communities
- Carbon-negative: clean power plus mutualized equipment
- Grid-independent: no dependency on local infrastructure
- Community asset: a contributor, not just a consumer

Water Independence: A Non-Negotiable Design Principle

In a forward-looking industry discussion dominated by power efficiency metrics, Duproz raises an issue he believes deserves equal attention: water consumption. Many current cooling technologies achieve power efficiency gains at the cost of significant water usage, a trade-off he considers unacceptable.

“We should be as prudent and efficient in the way we use water as in the way we use power. Water is likely to become a scarcer and scarcer resource.” — Stephane Duproz

This is not merely an ethical position; it is an operational design principle that Duproz has already implemented. At a newly built facility in Cape Town, his team deployed a device that extracted moisture from the air to fill the facility’s closed-loop cooling circuits, eliminating the need for an external water supply. The process took approximately one month, after which the facility operated with zero ongoing water consumption and actually produced surplus water for the surrounding community.

For the GCC, where water scarcity is a fundamental constraint, this principle has particular relevance. Data center designs that achieve cooling efficiency at the expense of water consumption may solve one problem while creating another. The future design standard, according to Duproz, will require solutions that optimize for both resources simultaneously.

The Campus Model: Specialized Facilities for Specialized Workloads

Both experts reject the one-size-fits-all approach to data center design. The future, as Duproz envisions it, will be organized around large campuses housing multiple specialized facility types, each optimized for distinct workloads.

“You don’t build the same data center if it is for cloud or if it is for AI or if it is for connectivity hubs. You build it completely differently. One size fits all is not going to work.” — Stephane Duproz

This campus model would include dedicated facilities for cloud computing, AI training, AI inference, and connectivity exchange, each with distinct designs optimized for their specific power, cooling, network, and physical requirements. These would operate as an integrated ecosystem within the campus, sharing common power generation and management infrastructure while maintaining specialized internal configurations.

At a broader geographic level, Duproz envisions a stellar organizational pattern: large connectivity hubs at the center, with content and edge data centers orbiting around them in proximity to user populations. This model serves differing latency requirements: connectivity exchange requires centralization, while content delivery and edge compute require geographic distribution.

The Future Data Center Campus Model

Specialized facility zones organized around a central connectivity hub, each built differently for its specific workload.

- Connectivity hub (center of the campus): internet exchange, network interconnection, the digital roundabout.
- AI training facility (high density, location-flexible): massive GPU clusters for model training; follows cheap, green power; liquid or immersion cooling; 80–100+ kW per rack.
- AI inference zone (low latency, user proximity): user-facing AI processing that must be close to end users; rapidly growing as AI becomes more personalized.
- Edge satellites (modular, distributed, content): smaller modular facilities orbiting the campus, positioned closer to users for content delivery, IoT, and real-time applications.
- Cloud and colocation (enterprise, hybrid, scalable): enterprise workloads, SaaS hosting, and hybrid cloud deployments; the core infrastructure layer that continues to grow as businesses go cloud-first.

Shared campus characteristics:
- Self-powered: on-site generation, small nuclear, excess returned to the grid
- Water-independent: closed-loop cooling, atmospheric moisture capture, zero external draw
- Purpose-built: each zone designed for its specific workload; one size does not fit all
- Human-led operations: AI assists with detection and diagnosis, but technicians handle interventions

“You don’t build the same data center if it is for cloud or if it is for AI or if it is for connectivity hubs — you build it completely differently. I see campuses with various components working together, with content data centers orbiting central connectivity hubs.” — Stephane Duproz, former CEO of Africa Data Centres

AI’s Geographic Divide: Training Anywhere, Inference Everywhere

One of the most strategically significant distinctions both experts draw is between AI training and AI inference workloads, and their dramatically different geographic requirements.

“Only inference needs to be close to the user. That big component of AI, being the training, doesn’t need to be close to the user and can be anywhere. So it will go where there is power, where the power is cheap, and where it is green.” — Stephane Duproz

AI training, the computationally intensive process of developing models, requires massive capacity but is latency-insensitive. Users querying AI systems are willing to wait seconds or minutes for responses, meaning training facilities can be located anywhere with abundant, affordable, green power. This opens opportunities for countries with energy advantages but limited proximity to major user populations, from Norway to Uganda.

AI inference, the interface between trained models and end users, is latency-sensitive and must be located closer to users. As AI applications become more personalized and interactive, inference workloads are expected to grow substantially, driving demand for distributed capacity across all markets, including emerging ones.
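The physics behind this split is simple to quantify. Light in optical fiber travels at roughly 200,000 km/s (about two-thirds of c), so every kilometer between user and facility adds an irreducible latency floor. The sketch below illustrates the reasoning; the specific distances chosen are hypothetical examples:

```python
# Why inference must sit near users while training can sit anywhere:
# signal speed in fiber is roughly 200,000 km/s, so each km of distance
# adds about 0.01 ms to the round trip, before any routing overhead.

FIBER_KM_PER_S = 200_000  # approximate signal speed in optical fiber

def round_trip_ms(distance_km: float) -> float:
    """Minimum physical round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

# A training job that runs for weeks is indifferent to this floor; an
# interactive inference request served from another continent is not.
for km in (50, 1_000, 7_000):
    print(f"{km:>6} km -> at least {round_trip_ms(km):.1f} ms round trip")
```

A facility 7,000 km away imposes a floor of about 70 ms per round trip that no hardware upgrade can remove, which is why interactive inference gravitates toward user proximity while training follows cheap power.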

Dandan reinforces this from the GCC perspective, noting that AI is already being deployed to optimize data center operations through sensor data analysis, predictive maintenance, and increasingly automated management systems. This dual role, AI as both workload and operational tool, creates a feedback loop that will accelerate both adoption and demand.

AI Training vs. Inference: Two Very Different Data Center Needs

Understanding why AI workloads split into two distinct infrastructure requirements, and what that means for where data centers get built.

AI training (where almost all the money is going right now):
- Location strategy: anywhere in the world. Can be Norway, Uganda, or anywhere with cheap green power; not location-specific.
- Power requirement: massive, 100+ MW clusters. Huge GPU farms consuming enormous power, following the cheapest, greenest supply.
- Latency tolerance: high; seconds to minutes are acceptable. Users are willing to wait for quality answers, so there is no low-latency requirement.
- Facility design: purpose-built AI clusters with specialized liquid or immersion cooling, high-density racks, and massive interconnect fabric.
- Current investment share: roughly 85–90% of AI capital expenditure, the dominant use case absorbing most hyperscaler AI investment right now.

AI inference (smaller today, but growing rapidly as AI personalizes):
- Location strategy: close to the user. Must be near end users for responsive interaction; location is critical.
- Power requirement: moderate, roughly 5–50 MW per site, but distributed across many locations globally.
- Latency tolerance: low; milliseconds matter. Real-time interaction demands proximity, and the user-AI link becomes increasingly important.
- Facility design: edge-optimized or hybrid; can be colocation, edge facilities, or dedicated inference pods within existing campuses.
- Growth trajectory: the fastest-growing segment; as AI becomes more personalized, inference demand multiplies everywhere, including emerging markets.

What this means for emerging markets: emerging markets are unlikely to attract large AI training facilities in the near term, since those follow cheap, abundant green power. But inference will come everywhere as AI personalizes. The real opportunity for GCC and African markets is positioning for the inference wave.

The personalization multiplier: as AI moves from generic responses to personalized interactions, the importance of the link between user and inference layer grows rapidly. This is what will drive inference deployment into every market, and with it a new wave of data center demand.

Investment snapshot:
- 85–90% of current AI capital expenditure goes to training infrastructure, concentrated in a few locations globally.
- 10–15% goes to inference, small today but distributed, and growing fast as AI reaches every user.
- 40–50% is the projected inference share by 2030, approaching parity as personalized AI becomes mainstream.

The Persistent Human Element

Despite the trajectory toward increased automation, both experts push back against the narrative of fully autonomous data centers. Duproz is particularly emphatic on this point.

“Contrary to what you might expect, I still believe that in 10 years there will be a lot of human technicians dealing with data centers as opposed to AI.” — Stephane Duproz

His reasoning is rooted in risk management. AI can assist in detecting issues earlier and improving diagnostic understanding, but the intervention itself, the physical action taken on critical infrastructure, demands the judgment, contextual understanding, and accountability that human operators provide. In an environment where a single operational error can affect hundreds of tenants and their downstream users, the cost of an AI misjudgment is too high to accept without human oversight.

Dandan’s perspective is more evolutionary: he sees AI taking over an increasing share of monitoring and decision support, with robotics potentially handling routine physical tasks, but acknowledges that human oversight will remain essential for the foreseeable future. The difference is one of degree rather than direction; both agree that the fully autonomous data center remains further away than popular narratives suggest.

The Cloud Catch-Up: A Coming Investment Wave

Duproz identifies a near-term dynamic that has significant implications for data center demand globally and in emerging markets specifically. Over the past two years, hyperscale cloud providers have redirected investment disproportionately toward AI infrastructure, creating an underinvestment in cloud deployment—particularly in emerging markets.

“The hyperscalers have directed their investment money towards AI quite a lot. They will have to catch up with those two years of less development. In emerging countries, the catch-up is going to be even stronger.” — Stephane Duproz

This catch-up dynamic could drive a significant wave of cloud infrastructure investment in the near term, creating opportunity for both data center operators and the broader ecosystem of contractors, equipment suppliers, and service providers that support deployments. For the GCC and other emerging markets, this represents a potential acceleration in the already strong growth trajectory.

Specialization Without Fragmentation

Both experts address the question of whether market specialization (companies focused exclusively on AI data centers or cloud data centers) is the likely future. The consensus is nuanced: design must specialize, but companies need not.

“Certainly, data center design will specialize based on its usage. Now, a single company can do everything. If you look at the dominant players—Equinix and Digital Realty—they have the capacity to do everything.” — Stephane Duproz

This means that the competitive differentiator will not be whether a company can build AI-ready, cloud-optimized, or connectivity-focused facilities, but how well it integrates specialized designs into coherent campus-level operations that serve the full spectrum of customer requirements.

Implications for Investors and Operators

The future data center, as described by two of the industry’s most experienced practitioners, is self-powered, water-independent, purpose-built by workload type, organized into integrated campuses, and still fundamentally dependent on human operational excellence. It leverages AI for optimization and serves AI workloads at unprecedented scale, but maintains human accountability at every critical decision point.

For investors, this vision demands a longer investment horizon and a more sophisticated evaluation framework than simple capacity metrics. The winners will be operators who master the integration of specialized designs, achieve energy self-sufficiency, and build teams capable of sustained operational excellence. For operators, the message is equally clear: plan for disproportionate demand growth, invest in people, and design for the workloads of the next decade rather than the last one.

The Evolution of Data Center Design: 2000 → 2035

From shared server rooms to self-powered, water-independent campuses with specialized zones.

2000–2008: The Server Room Era (2–6 kW/rack, air-cooled). Companies house their own servers in safe, connected, telco-owned or repurposed buildings; internet exchange creates the first digital hubs.

2009–2016: Cloud Revolution (6–12 kW/rack, hot/cold aisle). AWS, Azure, and GCP trigger massive demand; a snowball effect takes hold (more content → more connectivity → more growth); the first purpose-built facilities emerge.

2017–2023: Hyperscale Boom (12–25 kW/rack, liquid cooling pilots). The GCC gets its first hyperscale regions (Azure UAE, 2019); modularity and scalability become standard; edge computing reaches roughly 30% of global cloud.

2024–2028: AI Transformation (40–100+ kW/rack, immersion cooling). GPU clusters demand 40–100+ kW per rack; the training/inference split drives a new geography; investment shifts from cloud to AI, creating catch-up debt.

2029–2035: Self-Sufficient Campus (100+ kW/rack, zero external water). Off-grid, producing its own power via small nuclear and returning excess to the grid; water-independent through atmospheric moisture capture; specialized zones for AI, cloud, connectivity, and edge orbiting central hubs.

What changes across the period:
- Power source: from grid-dependent to self-generating and off-grid
- Water usage: from external water supply to closed-loop atmospheric capture
- Design approach: from one size fits all to purpose-built specialized zones
- Operations: from fully manual monitoring to AI-assisted, human-led intervention

About the Experts

Stephane Duproz has spent 25 years leading data center operators at the managing director and CEO level across Europe and Africa. He served as chairman of the European Data Center Association and is vice president of the Africa Data Center Association. He currently provides advisory services to investors and operators in data center infrastructure.

Ramez Dandan is a technology strategist with over 30 years of experience in IT and telecommunications across the Middle East. He served as Chief Technology Officer at Microsoft for the last seven years of his 17-year tenure, where he led the establishment of Microsoft’s first regional data center in the Middle East.

Access Expert Insights Through Infoquest

This article draws on expert consultations conducted through the Infoquest expert network. Organizations seeking deeper analysis of data center sustainability strategy, future infrastructure design, or AI workload planning can access direct consultations with industry experts through Infoquest.