
Nvidia scores Meta and Oracle for networking fabric

Nvidia lands hyperscalers Meta and Oracle for its Spectrum-X networking fabric and is leading an industry coalition to adopt a new 800-volt DC power standard for datacentres

Nvidia has used the 2025 Open Compute Project (OCP) Global Summit in San Jose this week to announce major customer wins and outline its vision for the future of the datacentre, securing both Meta and Oracle Cloud for its Spectrum-X networking fabric.

The announcements are part of Nvidia’s broader moves to architect the entire datacentre stack through a combination of proprietary innovation and open collaborative standards to drive the next wave of artificial intelligence (AI) developments.

“AI demand is exploding,” said Joe DeLaere, Nvidia’s senior manager for accelerated computing solutions, during a media briefing ahead of the event. “Datacentres are evolving toward gigawatt-scale AI factories that manufacture intelligence and generate revenue, and to maximise that revenue, networking, compute, mechanicals, power and cooling must be designed as one.”

In networking, Nvidia touted Spectrum-X’s ability to support trillion-parameter AI models while enabling hyperscalers to meet the demands of distributed computing. Gilad Shainer, senior vice-president for networking at Nvidia, said the technology can support the needs of both large and small AI datacentres, enabling them to “achieve the highest levels of performance for AI workloads”.

Notably, Meta will integrate Spectrum-X Ethernet switches into its networking infrastructure for the Facebook Open Switching System (FBOSS), its software platform for managing network switches at scale, to create a unified network for both AI and non-AI workloads.

“Meta’s next-generation AI infrastructure requires open and efficient networking at a scale the industry has never seen before,” said Gaya Nagarajan, vice-president of networking engineering at Meta. “By integrating Nvidia Spectrum-X Ethernet into the Minipack3N switch and FBOSS, we can extend our open networking approach while unlocking the efficiency and predictability needed to train ever-larger models.”

Oracle Cloud is also deploying Spectrum-X Ethernet for its gigawatt-scale AI factories, including the Stargate datacentre to be built as part of a high-profile $500bn initiative to expand AI infrastructure in the US. According to Nvidia, Spectrum-X is the only Ethernet technology proven to sustain 95% throughput with no latency degradation in massive graphics processing unit (GPU) clusters.

Nvidia also provided new performance metrics for its current-generation Blackwell architecture. Citing a new open-source benchmark, InferenceMax, DeLaere said Blackwell delivered a “remarkable 15x improvement over Hopper”. He put this into a return-on-investment context, stating: “For a $5m investment in capital expenditure and operating expenditure, Blackwell’s performance can generate $75m in token revenue, a 15x return on investment.”

Looking ahead, Nvidia gave further details on its next-generation Vera Rubin superchip, the successor to Blackwell, which is slated for commercial availability in the second half of 2026. Built on the same open MGX rack footprint as Blackwell, it is projected to deliver eight exaflops of performance in NVFP4, a new AI-optimised data format designed to speed up work on AI models.

Perhaps the most fundamental change is Nvidia’s push to shift the industry from legacy 415-volt AC (alternating current) power to an 800-volt DC (direct current) architecture, a standard already being adopted by the electric vehicle and solar industries. “By moving power conversion upstream and delivering DC power directly to the racks, we simplify the design and increase efficiency,” DeLaere said. “This efficiency increase means we can have more GPUs per AI factory.”

Nvidia is collaborating with more than 20 industry leaders on the transition, including power component suppliers such as onsemi, rack power providers such as Delta and LiteOn, and datacentre infrastructure firms such as Schneider Electric and Siemens. Foxconn’s new 40-megawatt facility in Taiwan is already being built to this 800-volt DC standard, specifically to support Nvidia’s future rack architectures.

As part of an expanded partnership, Intel will also build x86 CPUs that connect to Nvidia GPUs via the NVLink interconnect. Samsung Foundry, meanwhile, has teamed up with Nvidia to offer its design and manufacturing expertise for custom silicon, addressing the growing demand for custom CPUs and extended processing units (XPUs).
