
The impact of generative AI on the datacentre

While artificial intelligence will not live up to its name any time soon, mass adoption of large language models, whether by customers or in-house, demands the attention of enterprise IT leaders


Questions remain about the potential impact of generative artificial intelligence (AI) adoption on datacentres, even when it comes to the need for more processing, storage and power. One thing is certain: there will be an impact.

Slawomir Dziedziula, application engineering director at Vertiv, warns that no one has fully calculated the power consumption of individual applications. So how such volumes of requests will specifically affect software and hardware requirements remains uncertain.

“It’s still early days to say precisely,” he agrees, pointing out that countries that banned crypto mining had similar concerns about infrastructure impacts and sustainability.

“One side is how much you can trust generative AI, although you can definitely use it to enhance your knowledge and also your skills,” Dziedziula says.

“The other thing is you need many servers, GPUs, data storage devices and so on, and then your engineers. If they’re using generated scripts in applications, they’ll need customisation.”

It can already be difficult to pinpoint use of a large language model (LLM). Experienced programmers use generative AI to come up with fresh ideas and perspectives – yet some may not spot objectively poor results, he notes.

“Everyone can believe they’re really good at something by using generative AI,” Dziedziula points out.

Working with generative AI entails a “tremendous” amount of verification. New skillsets and applications may be required, and cyber security pressures may intensify too: ChatGPT can produce vast volumes of believable phishing emails, for example.

“There will be increased dependency on skilled workers,” Dziedziula warns. “Yet instead of 10 people, I need just two people and smart software to do the rest.”

Chris Anley, chief scientist at IT security, assurance and software escrow provider NCC Group, says the datacentre may need a fresh look at resource consumption, infrastructure management and security.

Emerging network infrastructures, architectures, and data storage and retrieval models will need to be secured, so the impacts are not simply about scale and capacity. Provisioning in new ways will entail internet-scale distributed storage mechanisms, going beyond relational databases to achieve the throughput needed to train AI and machine learning (ML) systems.

“You can’t just have a single cluster doing it; you’ve got to spread the load between lots of GPUs,” Anley says. “New requirements will change datacentres, from cooling and power to the physical and logical structure of networks. A datacentre optimised for AI can look very different to one optimised for typical corporate operations.”
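To picture what spreading the load between lots of GPUs looks like, the sketch below shows minimal data-parallel training, assuming PyTorch’s DistributedDataParallel launched with torchrun – tooling chosen for illustration, not anything Anley specifies; the model and data are stand-ins.

```python
# Minimal sketch of data-parallel training across several GPUs, assuming
# PyTorch and a launch such as: torchrun --nproc_per_node=<num_gpus> train.py
# The model and data below are placeholders for illustration only.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")             # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 512).cuda(local_rank)  # stand-in model
    model = DDP(model, device_ids=[local_rank])         # sync gradients across GPUs
    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(100):
        # In real training, each rank would read its own shard of the dataset.
        x = torch.randn(32, 512, device=local_rank)
        loss = model(x).square().mean()
        optimiser.zero_grad()
        loss.backward()                          # gradients all-reduced here
        optimiser.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Each process drives one GPU on its own shard of the batch, and the gradient all-reduce during the backward pass is exactly the kind of east-west network traffic that makes an AI-optimised datacentre look different from a corporate one.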

Datacentres already in adaptation mode 

Yet ML tools have been gradually penetrating the market for years despite “alarmist media hype about generative AI eating the world”, notes Anley.

He confirms using ChatGPT for security code review. However, while it can help pinpoint or triage issues, he feels the results aren’t entirely trustworthy. “It can invent facts, either missing bugs completely, focusing on something else, or ‘hallucinating’ fictional bugs. Both are bad for security.”

He hastens to add that mostly there is little threat from this. Programmers who need generative AI to code aren’t typically going to be working on critical corporate applications. Also, although “subtle bugs” do happen, bad code is usually immediately apparent because it simply does not do what you want.

“Code isn’t one of those things where it can be ‘mostly right’ like a song or a theatrical production or a piece of prose or whatever,” Anley says.

Generative AI is likely to remain mainly about making skilled staff more efficient and productive. Even a 10% productivity improvement can slash costs at an organisational level, he says.

Generative AI is already “good at the small stuff”, such as library code where a programmer might not be familiar with the library or does not know the name of a specific function, or technical tasks such as converting data from one format to another.

“It’ll autocomplete something, saving you a trip to the web browser or the documentation,” Anley continues. “I think most of our customers are now using AI in one form or another, whether for customer support, chatbots, or just optimising internal processes.”
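To give a flavour of that “small stuff”, the snippet below is the sort of routine conversion task an assistant autocompletes well – CSV to JSON using only the Python standard library. The function and file names are illustrative, not drawn from any specific customer.

```python
# The kind of routine conversion a code assistant handles well:
# CSV to JSON using only the standard library. File names are illustrative.
import csv
import json

def csv_to_json(csv_path: str, json_path: str) -> None:
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))  # one dict per CSV row, keyed by header
    with open(json_path, "w") as f:
        json.dump(rows, f, indent=2)

csv_to_json("servers.csv", "servers.json")
```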

However, with complex AI or ML development and hosting technologies pushed into corporate networks, caution is required. For instance, aggregating lots of training data across security boundaries can remove important controls on what can be “seen”.

Training data can be retrieved from trained models simply by querying them, using attacks such as membership inference and model inversion. “The result is a situation similar to the familiar SQL injection data breach attacks,” Anley says.
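To make that risk concrete, here is a minimal sketch of the simplest published form of membership inference: flagging records on which a model’s loss is suspiciously low as likely training-set members. The `predict_proba` callable and the threshold are hypothetical placeholders, not anything Anley describes.

```python
# Minimal sketch of a loss-threshold membership inference test: records on
# which the model's loss is unusually low are flagged as likely training-set
# members. `predict_proba` and `threshold` are hypothetical placeholders.
import numpy as np

def record_loss(predict_proba, x, true_label):
    """Cross-entropy loss of the model on a single labelled record."""
    p = predict_proba(x)[true_label]  # model's probability for the true label
    return -np.log(p + 1e-12)

def infer_membership(predict_proba, records, labels, threshold):
    # Low loss suggests the model memorised the record during training.
    return [record_loss(predict_proba, x, y) < threshold
            for x, y in zip(records, labels)]
```

In practice the threshold would be calibrated against records known to be outside the training set; the point is that nothing more exotic than ordinary queries is required.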

He notes that at least one supplier recently banned generative AI because developers were pasting sensitive corporate code into a third-party engine just to help them write it. Yet not doing this should be common sense, and many firms already have policies forbidding code-sharing with third parties.

Matt Hervey, partner and head of AI law at Gowling WLG, says that while it’s still tough to train these models to generate and categorise data perfectly, the quality “looks to have jumped up dramatically” in the past six to 12 months. With ML techniques being baked into standard tools, “profound impacts” can be expected, but these may mostly represent business opportunity.

“I suspect this is good news for the datacentre business...and there are movements to achieve similar results with smaller training sets,” Hervey says.

However, certain “bad activity” may end up in the private space, he adds, and questions remain as to whether datacentres will be entirely shielded when it comes to legal risk.

With a massive rise in ML use entailing ramp-ups in processing and power beyond what has previously been seen, some will also be moving cloud applications or services to the edge. On-board processing on mobile phones, for example, presents potential privacy or other regulatory compliance issues.

Views on “the economic value” of certain activities or roles are set to change, with some areas or activities becoming more or less cost-effective, rippling across various industries and sectors, including datacentres, Hervey says.

Jocelyn Paulley, partner and co-head of UK retail, data protection and cyber security sectors at Gowling WLG, adds that datacentre expansion and connectivity in places that already have capacity issues, such as London, could pose a challenge, but one that is perhaps soluble with infrastructure and cooling rethinks and increased server densities.

Datacentres can avoid content-related risk

Careless or non-compliant customer use of ChatGPT, for example, will not affect colocation providers that have zero access to customer software and environments and do not host applications or other people’s content – and where that can be an issue, legislation is already evolving, Paulley says.

Jaco Vermeulen, chief technology officer at consultancy BML Digital, points out that generative AI does not really do anything more advanced than search, which in cyber attack terms means brute force. While LLMs might require greater human intervention to interpret results or join up certain factors in analysis, for example, the latest AI iteration is “not really a threat in itself”.

“It needs to be directed first and then validated,” he says.

Datacentre access already requires physical, biometric or “possibly double biometric” identification, plus a second party. Two people are typically needed to access a building, each with three elements of identification and then verification.

For AI to extract all of that, it needs a lot of access to personal information, which is just not available on the internet – and if it’s drawing data it’s not meant to access, that’s down to the organisations and individuals using it, says Vermeulen.

Using more complex prompts to achieve greater sophistication will only result in responses “failing more miserably...because it’s going to try to give you actual intelligence without real context on how to apply it. It’s only got a narrowband focus,” Vermeulen says.

“You’re going to have bad or lazy actors any place. This machine does not go beyond the box. And if in future it does turn into Skynet, let’s unplug it.”

Further, Vermeulen says most agents will be deployed where an organisation has full control over them. He also pours cold water on the need for any unique datacentre-related proposition.

“Generative AI is mostly more of the same, unless there’s a real business case in actual product,” Vermeulen says. “It’s just pattern recognition with output that picks up variations. The commercial model will remain about consumption, support and capacity.”

Rob Farrow, head of engineering at Profusion, adds that most AI systems simply retrain on the same inputs to produce their models. Although developments – such as an ability to self-architect – could make AI enough of a threat to require some failsafe or kill-switch option, this seems unlikely within about 10 years.

“There’s no real valid level of complexity or anything even like human intelligence,” Farrow points out. “There’s a whole bunch of technical problems. When it does happen, we need to think about it.”

That brings us back to the computational expense of running ML. Further uncertainties remain, stemming from increased software complexity, for instance, meaning more things can go wrong. That suggests value in developing transparency around the software and how it operates or makes decisions.

Writing less code and simplifying where possible can help, but platforms for this often do not supply enough nuance, Farrow says.

While warning against organisations leaping into generative AI or ML projects without sufficiently strong data foundations, he suggests that the impacts on power, processing and storage might be countered by using AI or ML to develop greater predictability, achieving savings across systems.

“Some Amazon datacentres have solar panels with thousands of batteries, generating huge amounts of heat, but they’re actually using ML to harness solar energy based on circadian rhythms,” he says.
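The mechanics behind that kind of saving can be surprisingly simple. Below is a minimal sketch, with synthetic data, of forecasting hourly solar output from its daily cycle and scheduling deferrable batch work into the sunniest hours – an illustration of the predictability Farrow describes, not Amazon’s actual system.

```python
# Minimal sketch of ML-driven predictability: forecast hourly solar output
# from historical readings, then schedule deferrable batch work into the
# sunniest hours. Data here is synthetic; a real deployment would use
# actual telemetry and a proper forecasting model.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 30) % 24                  # 30 days of hourly samples
solar = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None) \
        + rng.normal(0, 0.05, hours.size)        # noisy daytime generation

# Average historical output per hour of day: a crude daily-cycle forecast.
forecast = np.array([solar[hours == h].mean() for h in range(24)])

# Run deferrable jobs in the six hours forecast to be sunniest.
best_hours = np.argsort(forecast)[-6:]
print("schedule batch jobs at hours:", sorted(best_hours.tolist()))
```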

But a lot of businesses jump the gun, chasing the AI or ML model they want. If you cannot retrain it, cannot go and get new data, have no visibility and cannot audit it, you are building a house on sand: it might work for a short time and then fail, Farrow warns.
