Digital Realty CTO on AI tokenomics and datacentre infrastructure
Chris Sharp talks up the pace of AI silicon innovation, the growth of inferencing workloads, and why boasting about datacentre megawatts misses the point
The artificial intelligence (AI) boom is stretching global datacentre infrastructure to its physical limits, with next-generation silicon pushing rack densities from traditional single-digit kilowatts to one-megawatt configurations.
For datacentre operators, this poses a fundamental challenge: how do you build digital infrastructure today that can support the hardware requirements of tomorrow?
In a recent interview with Computer Weekly in Singapore, Chris Sharp, chief technology officer of Digital Realty, talks up developments in AI infrastructure, how the datacentre provider is keeping pace with chip development, and why the industry must abandon the megawatt metric in favour of tokenomics.
Editor’s note: This interview was edited for clarity and brevity.
Having just returned from Nvidia GTC 2026, how are you viewing the rapid pace of innovation in AI silicon, and how is physical datacentre infrastructure keeping up with these generational leaps?
Sharp: It’s astonishing to see the rapid pace of innovation – not just from a chipset perspective, but in the software stack. One of our biggest challenges is mapping the rapid innovation of silicon to the permanence of concrete in datacentres. It’s extremely tough. Every year, new chips dictate entirely different floor loading weights, power densities, and precision cooling requirements.
We’ve been aligning closely with Nvidia to handle this. For example, our Brickyard facility in Manassas, Virginia, serves as the blueprint for Nvidia’s R&D inside our datacentres. I recently told Jensen [Huang] that while physical bricks used to be the tokens of society, today AI tokens are the true basis of society. Brickyard is quite literally producing the tokens of tomorrow.
However, to produce these tokens, we must tackle severe constraints: not just power availability, but also entitlements and generation. Singapore is a critical market where retrofitting existing infrastructure is key because land and power are so scarce. We look multiple years out with Nvidia to ensure our modular designs can handle their upcoming chipsets.
You also have to separate the marketing playbook from real-world operations. We’ve heard about hardware that can run fan-less or be cooled with warm water. Having been at Digital Realty for nearly 11 years, I can tell you that almost nobody wants to run warm water into their infrastructure today because of the biological risks.
While architectures like Vera Rubin and Rubin Ultra have the theoretical capability for this, we caution customers about real-world scenarios. We’ve been doing liquid cooling for over 15 years; that operational expertise sets us apart from operators who brag about having a gigawatt of power without knowing how to actually bring it to market.
Another fascinating takeaway from GTC is that there isn’t just one piece of hardware winning it all. There’s a sophisticated backplane orchestrating multiple hardware types for specific algorithms. Seeing Groq’s LPUs [language processing units] integrated into this ecosystem is exciting. Groq’s densification story is wild; they took what was a 40kW deployment and pushed it to 180kW per rack for metro deployments. Samsung’s role as a top fabricator for these chips is also a huge part of the narrative.
At Digital Realty, we are heavily focused on inference. That’s where the true monetisation of AI happens. Enterprises are moving past training; they want private AI. They want to build the base and rent the spike. They are experimenting with agentic platforms and open-weight models like Kimi K2 and Qwen. I’ve been testing these on an Nvidia DGX Spark, as well as using the OpenClaw and Nvidia NemoClaw stacks. These models allow enterprises to run high-performance environments without burning through expensive token costs. Our Digital Realty Innovation Labs exist to demystify this for customers, so they aren’t treating AI as a science project, but instead getting guaranteed, interconnected outcomes.
Based on what you’re saying, the datacentre environment is getting more complex. LPUs were originally touted as air-cooled to save water. How do you make sense of this and maintain cooling and density performance?
Sharp: It’s interesting that Groq’s original infrastructure was air-cooled, but now that it’s packaged at 180kW per rack, air cooling simply isn’t an option. At that density, you have to use rear-door heat exchangers (RDHx) or direct-to-chip liquid cooling (DLC).
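The density-to-cooling relationship Sharp describes can be sketched as a simple selection rule. The kilowatt thresholds below are illustrative assumptions for the sake of the example, not Digital Realty or Groq specifications:

```python
def cooling_method(rack_kw: float) -> str:
    """Pick a cooling approach for a given rack density.

    Thresholds are illustrative assumptions: conventional air
    cooling tops out in the low tens of kilowatts per rack,
    rear-door heat exchangers (RDHx) extend that range, and
    direct-to-chip liquid cooling (DLC) is needed beyond it.
    """
    if rack_kw <= 20:
        return "air"   # traditional hot/cold aisle air cooling
    if rack_kw <= 80:
        return "RDHx"  # rear-door heat exchanger
    return "DLC"       # direct-to-chip liquid cooling

# A 180kW rack falls well outside the range of air cooling
print(cooling_method(180))  # → DLC
```

The exact cut-over points vary by facility design; the point of the sketch is that density, not preference, dictates the cooling technology.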
We want to guide customers towards the right infrastructure for their desired outcome. Not everyone is building an Nvidia environment; maybe they are gravitating towards Google’s TPUs [tensor processing units], like the new Ironwood chips. We can support all of it.
And because building a datacentre takes two to three years, we have to look ahead with leaders like Nvidia to ensure we can support upcoming one-megawatt racks, or next-generation hardware from AMD. Digital twins are great, but customers want to run multiple environments around the globe and connect them privately. That’s the frontier of agentic AI.
Agentic workflows – like what we see with OpenClaw – are bi-directional, unlike early one-shot prompts. They are built for our datacentres. That’s why we launched ServiceFabric to orchestrate these complex AI workflows. And just like the shift to hybrid cloud, companies want to touch their models privately, securely, and own their token production before deploying globally across markets like Singapore or London.
Many companies are still one or two generations behind the bleeding edge. How do you help them ramp up?
Sharp: Honestly, being two cycles behind is perfectly okay. Our solution architects focus on helping customers get full lifecycle value out of existing infrastructure. I’m in the business of guaranteeing an outcome, not just selling state-of-the-art gear.
What’s proving out quickly is tokenomics, the watt-to-token production ratio. Even older architectures like Ampere remain highly performant for specific tasks. We see enterprises sweating their Hopper infrastructure for longer while strategically placing LPUs right next to it to handle inference. This changes how their infrastructure performs when servicing tokens, making their capital bets much more effective.
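The watt-to-token ratio Sharp calls tokenomics can be made concrete with a back-of-the-envelope calculation. The throughput and power figures below are hypothetical placeholders, not benchmarks for any real chip:

```python
def tokens_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Token production efficiency: tokens generated per watt-second,
    i.e. tokens per joule of energy consumed."""
    return tokens_per_second / power_watts

# Hypothetical inference deployments (all numbers are illustrative only)
gpu_efficiency = tokens_per_watt(tokens_per_second=3_000, power_watts=700)
lpu_efficiency = tokens_per_watt(tokens_per_second=5_000, power_watts=500)

# Placing a dedicated inference accelerator next to existing GPU capacity
# can raise the fleet's overall watt-to-token efficiency
print(f"GPU: {gpu_efficiency:.2f} tokens/J, LPU: {lpu_efficiency:.2f} tokens/J")
```

On these assumed numbers, the dedicated inference part produces more than twice the tokens per joule, which is the arithmetic behind "sweating" existing GPUs while offloading inference to adjacent accelerators.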
Given this rapid pace of change, has your customer profile shifted? Are you seeing enterprises change the way they procure capacity?
Sharp: We service the hyperscalers, which are outliers, but the enterprise profile has changed in its distribution. It’s moving closer to city centres, and the sheer capacity blocks are growing. Enterprises that would never dream of a megawatt before are now doing future projections at a megawatt or above.
They are waking up and asking: “If I buy this new chip in three years, can your environment support it?” They want to see total capacity, but they also need to know we can handle the densification. In traditional colocation, many customers are oversubscribed and can’t even procure the power they have under contract, let alone support a 180kW hotspot. It’s not a matter of if densification happens, but when.
How are you helping customers hedge against energy price volatility and geopolitical risks?
Sharp: Distribution is key – looking at multiple markets where they can hedge in that way. But very rarely have I seen a conversation with an enterprise or hyperscaler where the tokenomics – the value of the token on the other side – doesn’t outweigh any energy price increase.
We also have a big vendor-managed inventory programme. We’ve established a foothold of DLC and RDHx capabilities across major global markets. Customers want to move fast, but they haven’t always anticipated that critical piece of the infrastructure requirement.
There’s also the ESG [environmental, social and governance] factor. We do a tremendous amount of work securing green power through bonds, solar and wind. I don’t know how you view small modular reactors [SMRs], but I view them as greener, and we’ve been talking with a lot of the major SMR manufacturers around the world trying to get a foothold in that space as well.
What are the most promising markets right now, given regional constraints?
Sharp: Ashburn, Virginia remains a bellwether, and Texas is very strong. But North America differs greatly from Europe or APAC. In Europe, Frankfurt and Paris are gaining massive traction. In APAC, availability zones are growing based on constraints; Malaysia may eventually service more of Singapore’s demand due to local power limits.
Across all regions, there is a hockey stick of demand. I have developers doing hundreds of millions of tokens a day. We’re also starting to see tokens have proximity value. A token can go through multiple environments and reasoning steps to emerge as a high-value token. That’s why the connectivity piece is so important.
Should we stop measuring datacentre capacity solely in megawatts?
Sharp: Yes, it’s not just total capacity. It’s the ability to densify, the connectivity, and resource efficiency like water usage. The watt-to-token production ratio is one element of it, but you have to dig into that longer-term view. We could sell through all of our capacity quickly if we just wanted to turn up all these neoclouds and other customers that may not have longer-term, durable financial backing. We watch that credit risk landscape carefully. There’s a lot more than the total capacity block that you must watch to be successful.
How are you working with the ecosystem to chart the blueprint for deploying next-generation AI infrastructure before it even hits the market?
Sharp: One of the biggest fears with Vladimir Troy [Nvidia’s vice-president of AI infrastructure] and his R&D team is building a bunch of chips that sit in a warehouse. That’s why we’re watching how our modularity aligns with what their next iteration is going to be. You’re going to see one-megawatt racks. They are going to be a little bit larger than traditional racks, and they are going to be extremely heavy. But that’s the frontier they’ve been putting us on for the last five years.
I also spend a lot of time with engineers, especially with the TSMC team, looking at where they’re headed in nanometre scale. The chipset sets the datacentre design, full stop. If they are getting into sub-two nanometre, that is a very dense environment that increasingly requires a different type of packaging and system. Even if no fans are required within the actual server, you still need air to flow through the datacentre facility because there’s still a lot of heat across the board.
There’s also a lot of noise around different fabrication methodologies coming to market, but I don’t know if it’s going to be as material. My comfort level is, if I can come in around 250kW a rack or less in a very repeatable fashion, our customers will have ample runway to be successful.
Read more about AI in APAC
DayOne and Cortical Labs are bringing ‘wetware’ computing to Singapore, using living neurons grown from stem cells to support the demand for AI while addressing sustainability concerns.
Singtel and Nvidia have teamed up on a multimillion-dollar facility to help organisations scale enterprise AI deployments, tackle extreme datacentre power densities, and prepare for the era of embodied AI.
Following the viral success of OpenClaw and product launches from Nvidia and Tencent, Alibaba has unveiled an agentic AI platform that integrates with DingTalk to orchestrate business workflows.