
Nutanix GPT-in-a-Box aims hyper-converged at AI/ML use cases

Nutanix offers a pre-configured bundle of AI/ML and GPT software with hyper-converged infrastructure to help organisations safely take advantage of large language models

Nutanix has launched GPT-in-a-Box, a bundled service that adds artificial intelligence (AI) software stack elements – such as foundation models and AI frameworks – to scale-out hyper-converged infrastructure (HCI).

GPT-in-a-Box also offers consulting so that customers can specify the right infrastructure configuration, in terms of hardware – for example, GPU spec – and software, such as AI components.

Nutanix will aim the initial launch squarely at customer on-premise use cases, including edge workloads, with expansion to the cloud coming later.

Essentially, Nutanix believes customers need help to specify an infrastructure for AI because it can involve a complex mix of software elements plus hardware add-ons, and that concerns are commonplace over privacy and governance in AI applications.

“It’s activity that consumes, creates and generates a lot of data,” said Nutanix senior vice-president for product management Thomas Cornely. “And discussion about what you can do on-premise often revolves around privacy and governance.”

Nutanix will offer what it calls a “full-stack AI-ready platform”, in which it expects customers to deploy hardware and software to train and retrain models and be able to expose results to application developers.

GPT-in-a-Box bundles will comprise Nutanix HCI, Nvidia GPU hardware or recommendations, the Nutanix AHV hypervisor, a Kubernetes container layer, AI foundation models, open-source AI frameworks that could include Kubeflow, Jupyter and PyTorch, and a curated set of large language models including Llama 2, Falcon GPT and MosaicML, all of which will provide outputs exposed for application development.
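To illustrate how the Kubernetes container layer and Nvidia GPU hardware in such a stack fit together, a pod specification for LLM inference might look like the following minimal sketch. The pod name, container image and model path here are hypothetical placeholders, not part of Nutanix's actual packaging:

```yaml
# Hypothetical example: a Kubernetes pod requesting one Nvidia GPU
# to serve an open-source LLM. The image and model path are
# illustrative only, not the GPT-in-a-Box product's own components.
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference                       # hypothetical pod name
spec:
  containers:
  - name: llama2-server
    image: example.org/llm-server:latest    # placeholder image
    env:
    - name: MODEL_PATH
      value: /models/llama-2-7b             # illustrative model location
    resources:
      limits:
        nvidia.com/gpu: 1   # one GPU, scheduled via the Nvidia device plugin
```

The `nvidia.com/gpu` resource limit is how Kubernetes clusters with Nvidia's device plugin typically expose GPUs to containers, which is one reason a container layer features in AI infrastructure bundles like this one.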


Nutanix’s offering is the latest effort from storage array makers to target AI/ML use cases, and clearly aims to hook into the surge in interest in chat-format AI. All of the big storage makers have addressed the rise in prominence of unstructured data as a source of analytics processing, but not all have been so explicit in targeting product bundles. An exception is Vast Data, which wants to build its recently launched Vast Data Platform into a global, brain-like network of AI learning nodes.

Meanwhile, Nutanix GPT-in-a-Box is not just a self-service deploy-and-run offer. “It’s a bundled offer and it can scale down and out,” said Cornely. “But there’s a consulting phase, on GPUs, for example, and the software elements needed to support customer requirements.”

It’s an offer primarily aimed at greenfield deployments in core datacentre or edge locations. Existing Nutanix customers can, in theory, build AI-ready infrastructures, but would still need to consult over, for example, GPU sizing. “They do need different components,” said Cornely.

“They could upgrade their own infrastructure, but many customers lack the time to get started,” he said. “And there are different components for different parts of the [machine learning] process. There’s quite a lot of consulting up-front, but Nutanix has people that are chairs and vice-chairs of organisations that are putting this stuff out so they can say, ‘This is what’s needed for this deployment’.”

According to Cornely, many customers lack policies for the data that’s going into models and where it goes after it comes out, so for the time being, this offer is aimed at deployments on-premise to simplify matters of privacy, copyright and governance.

“It’s clearly targeted at on-premise and edge, and allowing customers to be fully in control of what they’re paying for and what data is going into it,” he said. “The cloud element is limited to getting foundation models, registering for LLMs, etc.”
