ITER builds global, high-speed data backbone for remote scientific participation
Based in southern France, ITER is building one of the most advanced scientific data networks in Europe to support high-speed, resilient connections with fusion researchers worldwide
ITER, the international mega-project constructing the world’s largest experimental fusion reactor, expects thousands of scientists from partner countries to participate in experiments once the machine begins operating in the 2030s. But the facility is regulated like a nuclear fission plant, which means remote control of plant systems is strictly prohibited.
While scientists abroad cannot log in to plant networks or manipulate equipment from afar, what they can do is nearly as powerful. They can receive data from ITER in near-real time, run analysis codes locally, and feed their results back to operators during an experiment.
According to Denis Stepanov, computing coordinating engineer at ITER, the goal is to let remote researchers “quickly get this data and work on it, produce an algorithmic result within minutes or seconds, and steer what happens through the operators onsite”.
Achieving this requires far more than upgrading a few network switches. ITER will produce extraordinary volumes of data – from fast diagnostics, power-grid systems, cryogenics, vacuum instrumentation and hundreds of high-speed sensors. Pulses reaching 400 seconds can generate tens of terabytes of data per shot – and the long-term plan is to run pulses of up to an hour on a regular basis. A recent simulated dataset used for testing the network reached 176TB.
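As a rough illustration of the scale involved, the short calculation below estimates raw transfer times for these volumes at the link speeds discussed later in the article (10Gbps, 100Gbps and two bonded 100Gbps paths). The 30TB per-shot size and the 90% usable-link efficiency are illustrative assumptions, not ITER specifications; only the 176TB figure and the link speeds come from the project itself.

```python
# Back-of-envelope transfer times for ITER-scale datasets.
# The 30TB "per shot" size and 90% link efficiency are illustrative assumptions;
# the 176TB dataset and the link speeds are the figures quoted in the article.

def transfer_time_s(size_tb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Seconds needed to move size_tb terabytes over a link_gbps link."""
    bits = size_tb * 1e12 * 8                  # terabytes -> bits
    usable_bps = link_gbps * 1e9 * efficiency  # allow for protocol overhead
    return bits / usable_bps

for label, size_tb in (("~30TB shot", 30), ("176TB test dataset", 176)):
    for gbps in (10, 100, 200):
        hours = transfer_time_s(size_tb, gbps) / 3600
        print(f"{label} over {gbps}Gbps: {hours:.1f} hours")
```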
To serve this data to researchers thousands of kilometres away – in under a minute, reliably and across multiple continents – ITER built a global data backbone stretching from southern France to California and northern Japan.
ITER’s data strategy revolves around two highly connected sites: the Scientific Data and Computing Centre (SDCC) located at the ITER site in Cadarache, and a secondary archive and distribution hub in Marseille, about 50km away. Marseille is one of Europe’s most important connectivity hubs, with dense links to international carriers and pan-European research network Géant.
According to Peter Kroul, computing centre officer at ITER, the Marseille facility will ultimately support both data protection and distribution. It’s also where ITER connects to international research networks including Renater (France), Géant (Europe), ESnet (US), and Sinet (Japan).
ITER installed two redundant 400Gbps optical links between Cadarache and Marseille in 2022. Japan’s Sinet has upgraded most of its domestic and trans-Pacific capacity to 100–400Gbps, and Europe’s Géant already delivers 100Gbps bandwidth across the continent. With these upgrades in place, ITER could – for the first time – test 100Gbps capability all the way from France to Japan.
Another significant engineering contribution is ITER.sync, a high-performance data replication framework based on open source software and developed at the SDCC. It parallelises data streams, auto-tunes network parameters and maintains high throughput even across paths with 200–300 milliseconds of latency.
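To see why parallel streams and careful tuning matter on such links, the sketch below works through the bandwidth-delay-product arithmetic for a 100Gbps path with 200–300ms of round-trip latency. The 64MB per-stream buffer is an assumed figure for illustration, not an ITER.sync parameter.

```python
import math

# Bandwidth-delay-product sketch for a long-distance, high-throughput path.
# The 64MB per-stream buffer is an illustrative assumption, not an ITER.sync
# setting; the 100Gbps capacity and 200-300ms latencies are quoted in the article.

def bdp_bytes(link_gbps: float, rtt_ms: float) -> float:
    """Bytes that must be in flight at once to keep the link full."""
    return (link_gbps * 1e9 / 8) * (rtt_ms / 1e3)

def streams_needed(link_gbps: float, rtt_ms: float, per_stream_buffer_bytes: float) -> int:
    """Parallel streams required if each stream is capped by its socket buffer."""
    return math.ceil(bdp_bytes(link_gbps, rtt_ms) / per_stream_buffer_bytes)

for rtt_ms in (200, 300):
    bdp_gb = bdp_bytes(100, rtt_ms) / 1e9
    n = streams_needed(100, rtt_ms, 64 * 1e6)   # assume a 64MB buffer per stream
    print(f"RTT {rtt_ms}ms: ~{bdp_gb:.1f}GB in flight, ~{n} parallel streams")
```

A single stream capped at a typical buffer size would leave most of such a pipe idle, which is the gap the parallelisation and auto-tuning in ITER.sync are meant to close.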
Kroul explains that ITER.sync was designed because ITER cannot impose a specific supplier or technology stack on partner organisations. “We needed something that could connect to whatever systems our partners use, without sacrificing performance on very high-latency links,” he says.
Kroul recounts that parts of ITER.sync originated unexpectedly. While tuning internal systems, engineers discovered a set of techniques that “almost by chance” proved capable of saturating a long-distance link. Those methods eventually became central to ITER.sync’s design.
But Stepanov stresses that the hardest part is not laying fibre or developing protocols – it’s aligning all the organisations involved. ITER’s transcontinental links rely on Layer-2 VPNs, which require coordinated configuration changes across every carrier along the route. “There are many actors involved,” he says. “You need technical setup and mutual trust. The administrative part takes as much time as constructing the technical part.”
Countries will ultimately choose how to participate. Many plan to analyse data remotely at ITER’s computing centre. But some, including Japan and the US, want their own local copies.
Japan’s Remote Experimentation Centre (REC) in Rokkasho sits next to one of the country’s supercomputers. The aim is to receive data from ITER, run compute-intensive algorithms locally, and distribute subsets to domestic research groups without repeatedly pulling data from France.
The US has similar ambitions. The DIII-D tokamak, operated by General Atomics, already transfers data across the country for near-real-time analysis, returning results within minutes. ITER’s architecture is designed to support such use cases on a global scale.
Two successful tests completed this summer
In summer 2025, ITER, Japan’s REC, and DIII-D conducted two independent large-scale data transfer campaigns. Both relied on simulated ITER data representing early and full-power operations.
The France-Japan tests upgraded a 10Gbps demonstration from 2016 to two simultaneous 100Gbps connections – a 20-fold improvement. Engineers evaluated multi-path routing, continuous throughput and adaptability in case of submarine cable failures.
The “primary” link previously transited Siberia, but geopolitical constraints forced Japan to reroute traffic across the Pacific or around Southeast Asia. “The first variant of the link went through Siberia, which had the best performance,” says Stepanov. “They had to cancel it and go the long way around – through Singapore or through the US – which increased latency by about 50%.”
To bolster resilience, Géant provisioned a secondary Mediterranean-Red Sea-Indian Ocean path. The Japan campaign tested both failover behaviour and the possibility of distributing traffic across the two routes simultaneously.
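The failover and traffic-splitting behaviour tested in the Japan campaign can be pictured with a simple proportional-split policy, sketched below. The route names and rates are placeholders rather than the actual Géant or Sinet circuits.

```python
# Toy traffic-split planner: assign data across live routes in proportion to
# their throughput, falling back to whichever route survives a failure.
# Route names and capacities are placeholders, not the real circuits.

def plan_split(total_tb: float, routes: dict[str, float]) -> dict[str, float]:
    """Return terabytes assigned to each route with non-zero capacity (Gbps)."""
    live = {name: gbps for name, gbps in routes.items() if gbps > 0}
    if not live:
        raise RuntimeError("no usable route")
    total_gbps = sum(live.values())
    return {name: round(total_tb * gbps / total_gbps, 1) for name, gbps in live.items()}

routes = {"trans-pacific": 100.0, "mediterranean-red-sea": 100.0}
print(plan_split(176, routes))      # both routes up: the load is shared 88TB / 88TB
routes["trans-pacific"] = 0.0       # simulate a submarine cable failure
print(plan_split(176, routes))      # all 176TB fails over to the surviving route
```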
ITER’s campaign with DIII-D in California focused on high-throughput integration between different storage architectures. Over 10 runs, the teams achieved throughput close to the theoretical 100Gbps limit of the trans-Atlantic link. Storage systems on both ends – IBM Spectrum Scale at ITER and BeeGFS in the US – were shown to interoperate effectively.
As ITER ramps towards commissioning, the networking demands will intensify. Early pulses will generate modest data volumes, but long pulses will push the full multi-terabyte dataset for every shot. National labs will need to scale up their own infrastructure accordingly.
Kroul says the 2025 data-transfer campaigns demonstrate that ITER’s architecture is ready. With current technology, the global network can already sustain the required throughput for both online and offline analysis workflows during ITER’s first decade of operation. Whether partners request full data copies or rely on centralised remote analysis, the backbone is in place.
The next steps include refining the data-distribution strategy, coordinating backup policies, and continuing to expand the international network agreements that make long-distance Layer-2 VPNs possible. “Taken together – hardware, software and network performance – the results are satisfactory,” says Stepanov. “The links are sufficient for ITER commissioning and for the first five to 10 years of experiments.”
