Nutanix VP: Next steps in cloud-native & AI infrastructure
During KubeCon + CloudNativeCon Europe 2026, the Computer Weekly Developer Network (CWDN) sat down with Dan Ciruli, vice president and general manager for cloud-native technologies at Nutanix.
Always effortlessly effusive on all matters cloud and everything that happens between the user interface and the base infrastructure substrate that serves it, Ciruli is mindful of the fact that, today, every vendor is positioning itself as an “AI-ready platform” or “unified substrate” – so we asked him why Nutanix should be taken seriously in this space.
Ciruli: A lot of vendors are looking at the same problem from different angles. What’s genuinely different about Nutanix is that we bring together three things that are usually fragmented – enterprise-grade distributed storage, a proven virtualisation layer and a full Kubernetes platform.
Our distributed storage has been in production for over a decade and was originally proven at scale through virtual desktop workloads, where performance really matters. That same architecture turns out to be extremely well-suited to modern AI and containerised applications. When you combine that with enterprise virtualisation and Kubernetes, you get a platform that can consistently run applications across environments – and that’s still quite unique in the market.
CWDN: What’s the real value of distributed storage in this context – is it about tiering data or something else?
Ciruli: The key principle is actually about co-locating compute with data. That’s been true for a long time in distributed systems, but it becomes even more critical with AI and GPUs.
Intelligently placed compute
In a distributed environment, we can place workloads exactly where the data lives, reducing latency and improving performance. That was important in the VDI days, but it’s even more important now when you’re dealing with large datasets and high-performance compute.
So it’s less about moving data around… and more about intelligently placing compute where it needs to be.
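Ciruli’s point about placing compute where the data lives has a familiar expression in Kubernetes itself: node affinity rules can pin a workload to the nodes that already hold a given dataset. The sketch below is illustrative only – the label key and values are hypothetical, not a Nutanix API.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference-worker
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              # Hypothetical label marking nodes whose local
              # storage already holds the dataset in question
              - key: example.com/dataset
                operator: In
                values: ["images-v3"]
  containers:
    - name: worker
      image: registry.example.com/inference:latest
```

In practice a platform can apply such placement automatically rather than asking developers to write affinity rules by hand, which is the operational simplification Ciruli is describing.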
CWDN: Does that (intelligent placing) extend to edge use cases – for example, sensors that run as a part of real-time systems?
Ciruli: Absolutely. At the edge, the requirement is even more immediate. You often can’t send data back to a central location, process it and return a result — it needs to happen locally.
Whether it’s a manufacturing line inspecting parts in real time, or retail systems using computer vision at checkout, you need to bring the application to where the data is generated. That means being able to run containers or VMs directly at the edge with the same operational model you use in the data centre or cloud.
CWDN: How does Nutanix approach AI infrastructure specifically?
Ciruli: We think about it as providing the substrate for AI applications. That means not just Kubernetes, but the broader set of tools those applications need – things like model serving frameworks, AI gateways and supporting services.
The goal is to make sure developers can build and run AI applications consistently, whether that’s in the cloud, on-premises, or at the edge, without having to rethink the underlying platform each time.
Open source, of course
CWDN: Where does open source and the cloud-native ecosystem fit into this?
Ciruli: The scale of innovation in the cloud-native ecosystem is unprecedented. With hundreds of thousands of contributors, it’s arguably the largest collaborative software effort we’ve ever seen.
What’s made it successful is trust – organisations know they’re not locked into a single vendor and they can adopt and evolve technologies safely. That’s a big part of why Kubernetes has become the standard way to deploy modern applications.
CWDN: How are AI and cloud-native technologies converging?
Ciruli: It’s actually quite straightforward – all new applications are being built as cloud-native applications and that means containers and Kubernetes. Agentic AI applications are no different. They’re new applications, so they’re being built to run in containers from the start. Kubernetes becomes the default deployment model.
CWDN: Kubernetes is powerful, but also complex – so does AI make that better or worse?
Ciruli: Kubernetes is complex, particularly when it comes to debugging distributed systems. But AI has the potential to significantly simplify operations. We’re moving toward a model where the system can understand intent – expressed in natural language – and translate that into the configurations needed to run an application. More importantly, it can observe and diagnose itself by correlating signals across the environment.
Over time, that intelligence will reduce the operational burden and make these systems much easier to manage.
CWDN: Does that mean developers and operators become less important?
Ciruli: No, not at all – their roles evolve. AI will take on more of the pattern recognition and correlation work, but humans remain critical.
People, productivity
We’ll see increased productivity, not replacement. The system becomes better at monitoring and analysing, but people are still needed to guide, validate and make decisions.
CWDN: Finally, with all this focus on containers, are virtual machines becoming obsolete?
Ciruli: Not at all. If anything, we expect a hybrid reality for a long time.
Refactoring existing applications to run as cloud-native isn’t always worth the effort if they’re already doing their job. Many of those workloads will stay in VMs for years.
So the real challenge is running both worlds together – existing VM-based applications and new container-based ones – on a single platform. That’s exactly the problem we’re focused on solving.
