Schneider Electric CTO tips liquid cooling for take-off as machine learning takes over datacentre

With machine learning fuelling demand for graphics processing units in the datacentre, Schneider Electric's CTO claims the trend will require a drastic rethink of how operators manage their sites, paving the way for liquid cooling to hit mainstream adoption

The growing demand for machine learning technologies could pave the way for the use of liquid cooling to become more prevalent in datacentres, says Schneider Electric CTO Kevin Brown.

Speaking at the Datacloud Europe Congress in Monaco, Brown said datacentre operators have been slow to adopt liquid cooling to keep the servers in the facilities at optimum temperature, despite it being repeatedly hyped as “the next big thing” for years.

Hardware manufacturers have proposed and pursued various approaches to liquid cooling over the years. Some favour fully immersing servers in a non-toxic, non-conductive liquid coolant, for example, with the heat the equipment generates pumped away via water pipes and pumps.

“Throughout history, the industry has been predicting that liquid cooling is coming. So far, it’s fair to say it’s not hit mainstream adoption. Five years ago, if I was standing on this stage, I would be saying: liquid cooling is the technology of the future, and it always will be,” he said.

However, with machine learning practitioners increasingly favouring graphics processing units (GPUs) over central processing units (CPUs) to train neural networks, industry attitudes to liquid cooling could be set to change.

“That [trend] is clearly starting to find its way into the datacentre. And the reality is a GPU has a much bigger power profile than a CPU,” he added.

In a post-keynote interview with Computer Weekly, Brown said that if the use of GPUs becomes more commonplace in the datacentre, the much higher amounts of power they consume will require operators to rethink how they keep the servers in their facilities cool.

“Research is showing Intel’s chips are going beyond that 150 watt barrier we’ve traditionally seen, and if that’s really happening, then you might begin to see this movement where all the chips start consuming more power and you run into some very difficult engineering challenges trying to get the airflow through the server itself when you have those kinds of chip densities,” he said.
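
To give a rough sense of why higher chip power makes airflow so difficult, the standard sensible heat relation links the volume of air a server must move to the heat it dissipates. The sketch below is a back-of-the-envelope illustration using assumed figures (the 300 W GPU and the 10 K inlet-to-outlet temperature rise are hypothetical values chosen for the calculation, not numbers from Brown or Schneider): at a fixed temperature rise, doubling chip power doubles the airflow the chassis has to push through.

```python
# Back-of-the-envelope airflow estimate. All figures are illustrative
# assumptions for the sake of the calculation, not Schneider's data.

AIR_DENSITY = 1.2   # kg/m^3, air at roughly sea level and 20 C
AIR_CP = 1005.0     # J/(kg*K), specific heat capacity of air

def airflow_m3_per_s(power_watts: float, delta_t_kelvin: float) -> float:
    """Volume of air per second needed to carry away power_watts of heat
    with an inlet-to-outlet temperature rise of delta_t_kelvin, from the
    sensible heat relation Q = rho * V * cp * dT."""
    return power_watts / (AIR_DENSITY * AIR_CP * delta_t_kelvin)

# Compare a 150 W CPU with a hypothetical 300 W GPU, assuming a 10 K
# temperature rise across the server (1 m^3/s is roughly 2,119 CFM):
for watts in (150, 300):
    flow = airflow_m3_per_s(watts, 10)
    print(f"{watts} W chip needs ~{flow * 2119:.0f} CFM of airflow")
```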

This, in turn, may prompt operators to reconsider whether liquid cooling could be the answer to this cooling conundrum, given the technology has previously proven to be more efficient than traditional air-cooling techniques.

“The liquid cooling argument in the past has always been that it is more efficient, if you get rid of the fans and everything else, and when we [Schneider] have done a total cost of ownership analysis, the models have always come out better, but the free market decided no,” he said.

One of the big reasons for that is that adopting liquid cooling is likely to require a massive overhaul of how operators and hardware suppliers approach datacentre design.

“In the past, people have over-simplified what it really takes to adopt liquid cooling. You have to rethink your server design, your datacentre architecture and almost everything you’re doing, and that will require a level of coordination [between the groups responsible] to really make this work,” he said.

Motherboard design is one area IT hardware suppliers will need to revisit if liquid cooling is to take off, for example, which may mean adjusting their existing designs to accommodate liquid cooling or creating entirely new ones.

“In our opinion, those stars are starting to align, and it is making it more likely than it was in the past that liquid cooling will take off,” he added.

Aside from needing to accommodate the use of more power-hungry chips, the datacentre industry’s power consumption habits are coming under increasingly close scrutiny overall, said Brown, putting pressure on operators to find new ways to increase the energy efficiency of their sites.

Over the course of the past decade, this has prompted operators to upgrade their uninterruptible power supplies (UPS) to more efficient models, introduce hot and cold aisle containment in their sites and adopt air cooling, which has resulted in huge energy efficiency improvements.

“You can see there is about an 80% reduction in losses that this industry delivered in 10 years, following those approaches,” said Brown. “As we continue to look to the future, the question is where is the next 80% going to come from, because the tools and the approaches we’ve been using for the last 10 years will not work.”

The amount of power datacentres draw from the national grid has recently emerged as a sticking point in Apple’s abortive bid to build a datacentre on the west coast of Ireland, and is regularly cited as a concern by environmentalists who fear the industry’s growth could contribute to power outages.

Another issue to bear in mind, added Brown, is that operators are now being encouraged to look for new growth opportunities in emerging markets, where power supply issues may be even more acute than they are in the UK and other more developed markets.

“Why would a government put in a medium voltage substation to power a datacentre, when there [might] be people there that haven’t been electrified yet? It’s kind of a moral dilemma we’re going to start facing,” he added.

“The bigger Schneider story is how do you go and solve that problem, because if the big datacentre providers can come in and put in a datacentre and get the utility grid in an area where some of the population does not have access to electricity, that is a conflict.”
