AI: Killer app or planet killer?

The latest quarterly earnings filings from the leading server makers show they are all experiencing a significant boost in sales thanks to artificial intelligence (AI). HPE CEO Antonio Neri describes AI as one of the company’s growth engines. Lenovo’s CEO, Yuanqing Yang, talks about “clear signs of recovery across the tech sector” driven by hybrid AI applications. And Dell’s chief operating officer, Jeff Clarke, says: “AI continues to dominate the technology and business conversation.”

What these companies are seeing is the result of AI moving beyond the hyperscalers to become something every organisation is considering. The fact that AI-optimised server sales are ramping up shows that organisations are prepared to invest in hardware to support AI workloads, such as company-specific large language models, that cannot be run on public cloud IT infrastructure.

AI has become the killer application for server providers. It is something they desperately need, given that existing x86 servers are powerful enough to run the majority of enterprise workloads, which removes much of the pressure to refresh hardware. If the hyperscalers are scaling back their x86 server refreshes, why shouldn’t corporate IT do the same?

GPUs in the datacentre

While most applications can easily run on existing x86 server hardware, AI workloads require graphics processing units (GPUs). In the past, GPUs were only used in specialist accelerators for high-performance computing (HPC), but AI has made HPC mainstream. Datacentre admins would never have seen the need to fit GPU cards in rack-mounted servers: PCs need them for games and graphics-intensive processing, but servers do not even have a display attached, so there was little need for graphics cards or GPU acceleration.

That has now changed. AI has put GPUs in the datacentre, and hundreds, if not thousands, will need to be configured to run machine learning and AI inference workloads. This changes the make-up of the datacentre environment. Power and cooling used to be an issue, but as x86 servers became more efficient, the power problem became more manageable. GPUs, however, are power-hungry. Just look at a high-end gaming PC from Alienware: the Aurora R15 is configured with a 1,350W power supply.

Some spec sheets for current AI-optimised servers show power supplies of up to 2,500W per server. Compare this to the 750W power supply in a single rack-mounted x86 server. Will existing datacentres be able to meet the greater power demands?
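A rough, back-of-envelope comparison illustrates the gap. The sketch below assumes 20 servers per rack and treats the power supply ratings quoted above as a proxy for draw; both are illustrative assumptions rather than measured figures:

# Illustrative rack-level power comparison (assumed figures, not measurements)
servers_per_rack = 20     # assumed rack density
x86_watts = 750           # rack x86 server PSU rating quoted above
ai_watts = 2500           # AI-optimised server PSU rating quoted above

x86_rack_kw = servers_per_rack * x86_watts / 1000   # 15 kW per rack
ai_rack_kw = servers_per_rack * ai_watts / 1000     # 50 kW per rack

print(f"Conventional x86 rack: {x86_rack_kw:.0f} kW")
print(f"AI-optimised rack:     {ai_rack_kw:.0f} kW")

On those assumptions, a rack of AI-optimised servers has to be fed, and cooled, at more than three times the power of a conventional x86 rack.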

While GPUs may indeed offer better performance per watt than x86 servers doing the same job, the bigger issue is that AI is a power-hungry workload. If AI-optimised server sales continue to accelerate, and running AI workloads in datacentres becomes mainstream, this will have an environmental impact. Now is the time for everyone to consider AI’s carbon footprint.
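To put a hedged number on that footprint, the sketch below assumes a fleet of 1,000 AI-optimised servers drawing their full 2.5kW around the clock, and a grid carbon intensity of 0.4kg CO2e per kWh; all three figures are assumptions chosen for illustration only:

# Illustrative annual energy and emissions estimate (all inputs assumed)
servers = 1_000                 # assumed fleet size
draw_kw = 2.5                   # assumed continuous draw per server
hours_per_year = 24 * 365
grid_kg_co2e_per_kwh = 0.4      # assumed grid carbon intensity

energy_kwh = servers * draw_kw * hours_per_year
energy_gwh = energy_kwh / 1_000_000
emissions_tonnes = energy_kwh * grid_kg_co2e_per_kwh / 1_000

print(f"Energy: {energy_gwh:.1f} GWh/year")          # ~21.9 GWh
print(f"Emissions: {emissions_tonnes:,.0f} tCO2e/yr") # ~8,760 tonnes

Even under these simplified assumptions, the result lands in the thousands of tonnes of CO2e per year, before cooling overhead is counted.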
