Neoclouds: Meeting demand for AI acceleration
We look at how neoclouds can deliver access to artificial intelligence acceleration faster and cheaper than public cloud providers
ChatGPT, launched in 2022, began making a significant impact on the market by late 2023, according to Synergy Research Group. Synergy's chief analyst, John Dinsdale, points out that the cloud market leaders have experienced accelerated revenue growth over time, and that the emergence of numerous neocloud companies (see box: What is a neocloud?) has further strengthened the already positive momentum in the market.
This sentiment is reflected in the Rethinking AI sovereignty whitepaper, published to coincide with the World Economic Forum, which notes that surging demand for compute is spawning new AI infrastructure development models, such as neocloud providers, national cloud providers and industry-specific artificial intelligence (AI) clouds. While hyperscalers offer global reach and full-service cloud ecosystems, neoclouds provide specialised, high-performance compute infrastructure tailored to AI training and deployment.
This surge in demand for AI acceleration has seen a surprising beneficiary. According to Tiger Research, cryptocurrency mining firms, seeking to reduce their exposure to bitcoin's volatile pricing, are redirecting their graphics processing unit (GPU) farms toward AI acceleration applications.
One example is the Australian bitcoin mining company Iris Energy. In 2021/2022, Neel Khokhani, a Dubai-based fund manager, acquired shares in the small Australian datacentre operator at $1 per share. As the company leveraged its substantial physical assets to transition into an AI infrastructure provider, the share price surged to $63 by 2026. This transformation led to a $60m increase in the valuation of the company, which now operates under the name Iren.
The role of neoclouds in digital sovereignty
Along with AI acceleration, neocloud providers are targeting demand for digital sovereignty. From an IT derisking perspective, this reflects a concern among organisations that they are relying too heavily on the platform and services of a single hyperscaler.
In a recent video interview, Gartner senior director analyst Rene Buest told Computer Weekly the analyst firm is having more client conversations where IT leaders are seeking ways to diversify their cloud strategy. As such, Gartner is receiving more enquiries about sovereign clouds – local infrastructure alternatives to the hyperscalers.
“Throughout 2025, I would say 90% of my Gartner enquiries with end-user customers were only about the topic of digital sovereignty. Their concerns have increased because they don’t know what they should do or how the world will look tomorrow. They just wanted to balance the risks,” he said.
Buest said IT buyers are evaluating other types of cloud providers that can offer a higher level of sovereignty, or a level of sovereignty that their preferred hyperscaler cannot provide. And this aligns with the need to build out local and sovereign AI capabilities.
More choice
Before the emergence of neoclouds a few years ago, an organisation wanting to work with AI had little choice but to go to a hyperscaler such as Amazon Web Services (AWS) or Google. While the hyperscalers offer AI infrastructure as part of their vast public cloud portfolios, Roy Illsley, chief analyst at Omdia, says they tend to be expensive, and he recalls that a few years ago there was very little choice beyond Google's AI offerings.
Analyst firm Gartner estimates that by 2030, neocloud providers will capture around 20% of the $267bn AI cloud market. Neoclouds are purpose-built cloud providers designed for GPU-intensive AI workloads. They are not a replacement for hyperscalers, but a structural correction to how AI infrastructure is built, bought and consumed. Their rise signals a deeper shift in the cloud market: AI workloads are forcing infrastructure to unbundle again.
In a recent Computer Weekly article, Mike Dorosh, a senior director analyst at Gartner, said IT buyers face three interrelated constraints that influence their AI infrastructure decisions. First is what Dorosh calls cost opacity, which is rising as GPU pricing becomes increasingly bundled and variable, often inflated by overprovisioning and long reservation commitments that assume steady-state usage. Second are supply bottlenecks, which constrain access to advanced AI accelerators, resulting in long lead times, regional shortages and limited visibility into future availability. Third is performance trade-offs, where virtualisation layers and shared tenancy reduce predictability for latency-sensitive training and inference workloads.
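The cost opacity Dorosh describes can be made concrete with a little arithmetic. The sketch below uses hypothetical prices (not real cloud quotes) to show how a reservation that looks cheaper per hour can cost more per GPU-hour actually consumed once overprovisioning drives utilisation down.

```python
# Sketch: why "cost opacity" matters. Prices are hypothetical, not real quotes.
# A reserved GPU instance looks cheaper per hour, but at low utilisation
# (the overprovisioning Dorosh describes), the effective cost per *used*
# GPU-hour can exceed on-demand pricing.

def effective_cost_per_used_hour(hourly_rate: float, utilisation: float) -> float:
    """Cost per GPU-hour actually consumed, given fractional utilisation."""
    if not 0 < utilisation <= 1:
        raise ValueError("utilisation must be in (0, 1]")
    return hourly_rate / utilisation

on_demand_rate = 4.00  # $/GPU-hour, hypothetical
reserved_rate = 2.50   # $/GPU-hour with a 1-year commitment, hypothetical

# At 50% utilisation, the "discounted" reservation costs more per used hour
# than simply paying on demand for the hours actually needed.
print(effective_cost_per_used_hour(reserved_rate, 0.5))   # 5.0
print(effective_cost_per_used_hour(on_demand_rate, 1.0))  # 4.0
```

The comparison only holds, of course, if the on-demand capacity is actually available when needed, which is exactly the supply bottleneck Dorosh lists as the second constraint.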
According to Dorosh, these pressures are no longer marginal. They create a market opening that neoclouds are designed to fill.
One example of a neocloud provider is CoreWeave, which the authors of the Rethinking AI sovereignty report say is undergoing a capacity expansion, having secured funding of $25bn since 2024. AI infrastructure buildout is also expanding through national cloud providers such as Humain (Saudi Arabia), G42 (United Arab Emirates), Outscale (France) and StackIT (Germany).
Another neocloud company that has been making headlines is Nscale, which has committed to delivering approximately 12,600 Nvidia GB300 GPUs at the Start Campus datacentre in Sines, Portugal, in the first quarter of 2026. This multi-year agreement sees Nscale offering Nvidia AI infrastructure services to Microsoft while providing European customers with sovereign AI within the European Union.
This deal builds on plans announced by Nscale and Microsoft in September 2025 to deliver the UK’s largest Nvidia AI supercomputer at Nscale’s Loughton AI Campus. The 50MW facility, scalable to 90MW, is expected to house approximately 23,000 Nvidia GB300 GPUs from the first quarter of 2027 to power Microsoft Azure services.
Gartner’s Neoclouds: The next offering arrow in the service provider quiver report notes that the consumption-based economics and transparent pricing offered by neocloud providers address the overprovisioning and hidden costs often associated with hyperscaler offerings. In fact, Gartner reports that, thanks to transparent, usage-based billing, IT buyers can expect cost savings of 60-70% on GPU instances compared with hyperscalers.
However, Dorosh says the more significant change is architectural rather than financial. Neoclouds encourage organisations to make explicit decisions about AI workload placement. Training, fine-tuning, inference, simulation and agent execution each have distinct performance, cost and locality requirements. Treating them as interchangeable cloud workloads is increasingly inefficient and often unnecessarily expensive.
As a result, AI infrastructure strategies are becoming inherently hybrid and multicloud by design – not as a by-product of supplier sprawl, but as a deliberate response to workload reality. The cloud market is fragmenting along functional lines, and neoclouds occupy a clear and growing role within that landscape.
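The explicit workload-placement decisions Dorosh describes can be sketched as a simple decision rule. The categories and thresholds below are illustrative, distilled from the article's framing rather than any formal Gartner taxonomy.

```python
# Sketch: explicit AI workload placement. The placement rules and thresholds
# are illustrative assumptions, not a published methodology.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    gpu_hours: int           # expected monthly GPU-hours
    latency_sensitive: bool  # e.g. real-time inference
    data_residency: str      # e.g. "EU", or "any" if unconstrained

def place(w: Workload) -> str:
    """Map a workload's requirements to a provider category."""
    if w.data_residency != "any":
        return "sovereign/neocloud (in-region infrastructure)"
    if w.latency_sensitive:
        return "neocloud (bare metal, NVLink/InfiniBand networking)"
    if w.gpu_hours < 100:
        return "hyperscaler (bursty usage, integrated services)"
    return "neocloud (sustained training, usage-based billing)"

jobs = [
    Workload("fine-tune-llm", 5_000, False, "any"),
    Workload("chat-inference", 800, True, "EU"),
    Workload("weekly-report-batch", 40, False, "any"),
]
for job in jobs:
    print(job.name, "->", place(job))
```

The point is not the specific rules but that training, fine-tuning, inference and agent execution get routed deliberately, rather than all landing on one provider by default.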
Read more about neoclouds
- Sovereign cloud and AI services tipped for take-off in 2026: Digital sovereignty is set to become a top investment priority in 2026, due to geopolitical and legislative changes.
- Synergy – revenue generated by neoclouds expected to exceed $23bn in 2025: IT market watcher Synergy Research Group is predicting big things for the neocloud market through 2025 and beyond.
“Neoclouds started as GPU as a service. If you needed GPUs, these companies bought or leased GPUs from Nvidia, and then they would slice them and sell them off to people in smaller groups and bundles,” says Omdia’s Illsley.
However, over time, neocloud providers have added software stacks and developed other services to meet the demand of IT buyers who need GPU power and the software stack required for AI training or AI inferencing.
Getting started on deploying AI workloads for inference or training is arguably not as simple as the one-click option offered on something like the AWS Marketplace. However, Illsley says the neocloud providers are maturing to a point where they have partnered with AI software providers and can therefore offer a full set of services to meet the requirements of IT buyers who need AI compute capacity. “They are saying that they have GPUs and now provide access through partnerships to the software to run AI workloads,” he says.
As an example, CoreWeave and Nvidia recently expanded their relationship to accelerate CoreWeave’s build-out of more than 5GW of AI factory capacity by 2030. Along with the hardware commitment, according to a market insight report from Macquarie Group, the agreement shows that CoreWeave is also working with Nvidia to incorporate its AI-native software within Nvidia’s reference architectures for Nvidia’s enterprise clients and cloud partners.
One neocloud benefit identified by Gartner is access for IT buyers to specialised hardware, since neoclouds tend to prioritise cutting-edge GPUs, often securing first-to-market access through strategic partnerships. They also cater to bare-metal performance and optimised networking, since neoclouds are able to eliminate the layers of server virtualisation needed in multi-tenanted hyperscaler installations. Instead, they are able to offer direct hardware access, which Gartner says reduces latency and makes it possible to deploy high-bandwidth connectivity such as NVLink and InfiniBand for optimal GPU-to-GPU communication.
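On bare-metal deployments, the GPU-to-GPU links Gartner mentions can be inspected with Nvidia's `nvidia-smi topo -m` command, which prints a connectivity matrix whose cells indicate NVLink (`NV#`), shared PCIe switch (`PIX`) or system-level (`SYS`) paths. The sketch below parses an illustrative sample of that matrix; the sample is fabricated for the example, not captured from real hardware.

```python
# Sketch: identifying NVLink-connected GPU pairs from `nvidia-smi topo -m`
# output. SAMPLE_TOPO is illustrative, not real hardware output.

SAMPLE_TOPO = """\
        GPU0    GPU1    GPU2    GPU3
GPU0     X      NV2     NV2     SYS
GPU1    NV2      X      SYS     NV2
GPU2    NV2     SYS      X      NV2
GPU3    SYS     NV2     NV2      X
"""

def nvlink_pairs(matrix: str) -> set:
    """Return unordered GPU pairs connected directly by NVLink."""
    lines = matrix.strip().splitlines()
    cols = lines[0].split()            # header row: GPU names
    pairs = set()
    for row in lines[1:]:
        cells = row.split()
        src, links = cells[0], cells[1:]
        for dst, cell in zip(cols, links):
            # "NV#" marks an NVLink connection; src < dst deduplicates pairs.
            if cell.startswith("NV") and src < dst:
                pairs.add((src, dst))
    return pairs

print(sorted(nvlink_pairs(SAMPLE_TOPO)))
```

In this sample topology, GPU0 and GPU3 communicate only via the system interconnect, so a latency-sensitive job would ideally be pinned to an NVLink-connected pair.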
Choosing between a neocloud and a hyperscaler
While they may have begun as GPU-as-a-service type offerings, the evolution of neoclouds means there is now less of a gap between their AI services and the full-blown AI platform offerings from the hyperscalers.
Hyperscalers will probably offer more attractive pricing eventually to compete with neoclouds, but as Gartner’s Buest points out, neocloud providers are also trying to deliver more predictable pricing.
“Hyperscalers are very transparent in terms of their pricing models, so pay as you go, but at the end of the month, you don’t really know what you will pay,” he says. In other words, when using hyperscaler IT infrastructure, the monthly cost of compute resources consumed cannot be determined in advance.
In Buest’s view, IT leaders can achieve cost savings of up to 70% by choosing a neocloud over a hyperscaler. “They also provide instant direct access to advanced GPUs, which tend to outpace the hyperscalers in speed and transparency,” he says.
Buest says neoclouds are very niche, “providing purpose-built infrastructure for AI workloads”. This not only meets customer demand today, but also suggests that neoclouds will be viable in the foreseeable future.
Khokhani’s successful investment in the former bitcoin miner Iris Energy, now known as Iren, suggests that the long-term AI capacity contracts secured by neocloud providers underpin a stable and robust business model.
He says: “People still think of Iren through a bitcoin-mining lens, but that misses what the business has become. What attracted me was the transition to long-dated, contracted datacentre infrastructure. When you have multi-year take-or-pay style contracts with an investment-grade counterparty like Microsoft, the economic risk starts to resemble infrastructure credit rather than crypto volatility.”
What is a neocloud?
A neocloud refers to a modern, next-generation cloud computing model that builds on traditional cloud infrastructure by incorporating advanced technologies, innovative architectures and enhanced capabilities.
The term is not a formal industry standard, but is often used to describe cloud-based IT infrastructure that goes beyond the conventional public, private or hybrid cloud models. Neoclouds are designed to address the evolving needs of businesses, particularly in areas like scalability, flexibility and performance. Key characteristics include:
- Advanced automation: Utilises AI and machine learning for process optimisation and resource management.
- Edge computing: Processes data closer to the source for reduced latency and faster responses.
- Multicloud and hybrid support: Integrates with multiple cloud providers and on-premise systems.
- AI and data-driven: Optimised for AI workloads, big data analytics and machine learning applications.
- Serverless computing: Enables developers to focus on applications without managing infrastructure.
- Sustainability: Emphasises energy efficiency and green datacentres.
- Enhanced security: Incorporates zero-trust architectures and real-time threat detection.
