
Case study

Arctic University warms classrooms with waste datacentre heat

Archana Venkatraman


Aiming to run the greenest datacentre in the world in one of the greenest countries on the planet (98% of Norwegian power comes from renewable sources), the IT team at the Arctic University of Norway in Tromso is set to tap even more of the excess heat generated by its datacentre to warm the university’s campus, 350km inside the Arctic Circle.

“Tromso is 70° north and this means we need heating almost all year round, including in July,” said Roy Dragseth, team leader for high-performance computing (HPC) at the university.


According to the IT team, water-cooled IT systems are much more efficient than air-cooled technologies in capturing datacentre heat. “Liquid cooling is much more efficient than air-cooling as water is 1,000 times better at transporting heat,” Dragseth said.
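
The “1,000 times” figure is a rule of thumb. As a rough back-of-envelope illustration (the property values below are generic textbook figures, not numbers from the article), comparing the volumetric heat capacity of water and air shows why a water loop can carry away far more heat per unit of coolant moved:

# Rough comparison of how much heat 1 m^3 of coolant absorbs per degree
# of temperature rise (Q = rho * cp * dT). Values are approximate
# textbook figures, not numbers quoted by the university or HP.
RHO_WATER = 998.0    # kg/m^3
CP_WATER = 4186.0    # J/(kg*K)
RHO_AIR = 1.2        # kg/m^3 at room temperature
CP_AIR = 1005.0      # J/(kg*K)

def heat_per_m3_per_kelvin(rho, cp):
    """Volumetric heat capacity: joules stored by 1 m^3 per 1 K rise."""
    return rho * cp

water = heat_per_m3_per_kelvin(RHO_WATER, CP_WATER)
air = heat_per_m3_per_kelvin(RHO_AIR, CP_AIR)
print(f"water: {water / 1e6:.2f} MJ per m^3 per K")
print(f"air:   {air / 1e3:.2f} kJ per m^3 per K")
print(f"ratio: roughly {water / air:.0f}x per unit volume")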

The university currently has datacentre systems that are only partially water-cooled or fully air-cooled. But as it is moving to a newer datacentre in a few months, Dragseth aims to bring more water-cooling into the datacentre’s cooling mix.

“Eventually, we want to make our datacentre 100% water-cooled because it is so much more efficient,” Dragseth said.

As well as aiming to create possibly the greenest datacentre in the world, the IT team has to provide a robust HPC environment for its researchers.

Tromso’s location in the far north of Norway makes it very suitable for receiving data from polar-orbiting satellites. “Almost every pass of the satellites is visible from Tromso and we have a large researcher network downloading data from them,” said Dragseth.

This data and research work is very important as it deals with potential climate and environmental impact, he added. “We wanted to make sure that technology is not an inhibitor for research. There should always be enough compute resources so people don’t have to wait for their jobs to start and their research paper to be published.

“Over the years, we have been increasing the compute density and we are doing incredibly power-intensive computing at the moment.” 

The university’s IT team has implemented HP’s Apollo 8000 HPC system, which uses “warm-liquid cooling technology”, in its existing datacentre.

Aimed at the high-end market, the system offers performance peaks of over 250 teraflops per rack and each rack can hold 144 servers. The company launched the product at its annual Discover event in Las Vegas in June. 

“It offers up to four times the teraflops per square foot and up to 40% more FLOPs per watt than comparable air-cooled servers,” said John Gromala, HP director for modular systems.

HP’s entirely warm water-cooled server system uses a patented technology called “dry disconnect” where cool water is run through sealed copper pipes around the processor, with vacuum pumps creating and maintaining pressure in the tubes and connecting each server’s cooling system to the rest. Datacentre managers can add or remove individual server trays from the unit without water leaking or shutting off the coolant to the rest of the equipment.

Scott Misage, director for HPC at HP, said: “Water cooling may be a common approach, but we are the only ones currently using a warm water cooling approach.”

But why the HP Apollo 8000? “It gives us everything we want – liquid cooling equipment and HPC environment,” Dragseth said. “Many other systems we looked at had partial liquid-cooling and depended on ambient air-cooling.”

Ambient air cooling uses cool air from the atmosphere to reduce heat inside a facility. Although naturally cool air is free as well as green, datacentre managers have to install filters to stop airborne dust entering the systems, and the ambient air must stay cold at all times.

“With Apollo, we don’t have to make provision for any air-cooling,” Dragseth said.

The university’s IT team has installed two separate pipes – one to transport cold water to cool the systems and another to carry the heated water away. “Both the loops are separate and there is a heat exchanger that helps us bring back hot water,” Dragseth said. “Most times, this water is between 40 and 50°C and it is very useful to heat the buildings.

“We see this as an opportunity to move to the next level of an even greener compute set-up where we will be able to re-use the excess heat from the systems to keep our students warm.” 
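
A minimal sketch of the heat-recovery arithmetic behind that re-use (heat given up = mass flow × specific heat × temperature drop). The 40-50°C supply temperature comes from the article; the flow rate and the temperature at which the campus loop returns the water are illustrative assumptions:

# Heat handed over to the campus heating loop: Q = m_dot * cp * dT.
# The ~45C supply temperature reflects the article; the 10 L/s flow and
# the 30C return temperature are illustrative assumptions only.
CP_WATER = 4186.0  # J/(kg*K)

def recovered_heat_kw(flow_l_per_s, supply_c, return_c):
    """Heat (kW) given up as the loop cools from supply_c to return_c."""
    mass_flow_kg_s = flow_l_per_s * 1.0   # roughly 1 kg per litre of water
    return mass_flow_kg_s * CP_WATER * (supply_c - return_c) / 1000.0

print(f"{recovered_heat_kw(10.0, 45.0, 30.0):.0f} kW")  # ~630 kW from 10 L/s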

He added that energy efficiency and HPC are not the only benefits of the technology. Having a system where all the elements – compute, cooling and networking, monitoring and maintenance systems – are integrated into a dense package means the IT team spends less time managing the system. “And that is really important to us.”

The university is readying its new 2MW datacentre. “We will migrate the existing HP system and add more systems in our new datacentre,” Dragseth said. “In the new 2MW datacentre, we expect to re-use 1.6MW of heat energy produced by the systems and that is €1.7m of energy saved every year.

“In addition, water cooling itself will save us between 10% and 17% on datacentre input power. This translates into direct budget savings.”
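
Those figures can be sanity-checked with some simple arithmetic, assuming the reclaimed 1.6MW is available around the clock (a simplifying assumption; the implied energy price below is derived from the quoted numbers, not stated in the article):

# Rough check of the quoted savings. Assumes the 1.6MW of reclaimed heat
# is reused all year round; the implied price per kWh is derived, not quoted.
REUSED_MW = 1.6
HOURS_PER_YEAR = 8760
QUOTED_SAVINGS_EUR = 1.7e6

mwh_per_year = REUSED_MW * HOURS_PER_YEAR                         # ~14,000 MWh
implied_eur_per_kwh = QUOTED_SAVINGS_EUR / (mwh_per_year * 1000)
print(f"{mwh_per_year:,.0f} MWh of heat reused per year")
print(f"implied energy price: ~{implied_eur_per_kwh:.2f} EUR/kWh")

# The 10-17% input-power saving on a 2MW facility
for fraction in (0.10, 0.17):
    print(f"{fraction:.0%} of 2,000 kW = {2000 * fraction:.0f} kW saved")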

But why did the university’s IT team not consider the cloud for HPC? 

“Yes, there are huge benefits to using the cloud for HPC, but cloud providers charge on a per-hour usage basis,” Dragseth explained.

Currently, each of the university’s Apollo racks contains 144 servers and 15,000 compute cores and produces 300+ teraflops of performance output.

“But we don’t have peaks and valleys in our compute use,” said Dragseth. “We have continuous high load on the system. Our computing environment is as busy on Christmas Eve as it is on a summer Monday, so cloud isn’t very attractive for our use case.”
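
A minimal sketch of the economics behind that point: with pay-per-hour pricing, the bill scales with utilisation, so a cluster that is busy nearly every hour of the year pays for nearly every hour of the year. All prices and core counts below are hypothetical, purely to show the scaling:

# Hypothetical illustration of why per-hour cloud pricing suits bursty
# workloads better than a constantly loaded HPC system. The core count
# and price are invented for the example, not taken from the article.
HOURS_PER_YEAR = 8760

def yearly_cloud_cost(cores, eur_per_core_hour, avg_utilisation):
    """Annual spend when you pay only for the core-hours actually used."""
    return cores * eur_per_core_hour * HOURS_PER_YEAR * avg_utilisation

bursty = yearly_cloud_cost(1000, 0.05, 0.20)      # busy 20% of the time
constant = yearly_cloud_cost(1000, 0.05, 0.95)    # busy nearly all the time
print(f"bursty workload:   ~EUR {bursty:,.0f} per year")
print(f"constant HPC load: ~EUR {constant:,.0f} per year")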

