CNCF: Kubernetes now de facto ‘operating system’ for AI
The Cloud Native Computing Foundation 2025 Annual Survey suggests that Kubernetes has solidified its role as the ‘operating system’ for AI, with 82% of container users now running Kubernetes in production.
To be clear, this uses the term operating system in a slightly skewed sense i.e. as a system for operations, rather than an OS in the Linux, Windows, macOS or Android sense (Infor did something similar with part of its ERP platform, which was arguably more confusing; it makes sense here), as readers will likely have already worked out.
The organisation says the findings illustrate how Kubernetes has become the common denominator for cloud-native scale and stability, especially as organisations bring AI workloads into production environments.
Kubernetes has evolved beyond orchestration to become the backbone of enterprise infrastructure. Its role in scaling AI workloads demonstrates how integral it has become to modern production environments.
As we know, cloud-native technologies are core to production environments, especially as companies look to operationalise AI.
“Over the past decade, Kubernetes has become the foundation of modern infrastructure,” said Jonathan Bryce, executive director of CNCF. “Now, as AI and cloud native converge, we’re entering a new chapter. Kubernetes isn’t just scaling applications; it’s becoming the platform for intelligent systems. This community has the expertise to shape how AI runs at scale and we have a massive opportunity to build something open, powerful and impactful for the next ten years.”
Infrastructure maturity, near-universal
Some 98% of surveyed organisations reported that they have adopted cloud-native techniques, demonstrating how the technology has clearly moved beyond the “early adopter” phase and is establishing itself as the enterprise standard for deploying and managing modern applications at scale.
This shift reflects increased confidence in Kubernetes and related tools, with most organisations now treating cloud native approaches as foundational rather than experimental.
Some key facts are worth highlighting here:
- Production Kubernetes usage has surged: 82% of container users now run Kubernetes in production, up from 66% in 2023.
- Cloud native practices are the norm: 59% of organisations report that “much” or “nearly all” of their development and deployment is now cloud native.
- New adoption is slowing: only 10% of organisations are in the early stages or not using cloud native at all, suggesting the pool of new adopters is shrinking.
Kubernetes as the AI platform
The survey highlights a major convergence between AI and cloud native infrastructure, positioning Kubernetes as the preferred platform for running inference workloads at scale.
- Kubernetes adoption for AI inference: 66% of organisations hosting generative AI models use Kubernetes to manage some or all of their inference workloads.
- AI deployment frequency remains cautious: While infrastructure is ready, only 7% of organisations deploy models daily; 47% deploy occasionally.
- Most organisations are still AI consumers: 44% report they do not yet run AI/ML workloads on Kubernetes, underscoring the early stage of AI production maturity.
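As a rough illustration of what "managing inference workloads" on Kubernetes means in practice, the sketch below is a minimal, hypothetical Deployment manifest for a model-serving container. The names, container image and GPU request are illustrative assumptions, not details from the survey.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-server            # hypothetical service name
spec:
  replicas: 2                       # Kubernetes keeps two serving pods running
  selector:
    matchLabels:
      app: inference-server
  template:
    metadata:
      labels:
        app: inference-server
    spec:
      containers:
        - name: model-server
          image: registry.example.com/llm-server:latest   # placeholder image
          ports:
            - containerPort: 8080   # inference HTTP endpoint
          resources:
            limits:
              nvidia.com/gpu: 1     # GPU scheduled via the NVIDIA device plugin
```

Scaling inference then reduces to adjusting `replicas` (or attaching a HorizontalPodAutoscaler), which is a large part of why Kubernetes is attractive for this class of workload.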
The survey also identifies a clear link between operational maturity and the use of standardised platforms, as teams increasingly adopt GitOps workflows and internal developer platforms to manage scale and complexity.
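For readers unfamiliar with the GitOps workflows the survey refers to, the idea is that the desired cluster state lives in a Git repository and a controller continuously reconciles the cluster against it. A minimal sketch using Argo CD (one common GitOps tool; our choice of example, not one named by the survey) might look like the following, with the repository URL, paths and names all hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: inference-platform          # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/platform-config   # hypothetical repo
    targetRevision: main
    path: environments/prod         # manifests for the production environment
  destination:
    server: https://kubernetes.default.svc
    namespace: inference
  syncPolicy:
    automated:
      prune: true                   # remove resources deleted from Git
      selfHeal: true                # revert manual drift back to the Git state
```

With `automated` sync enabled, a merged pull request becomes the deployment mechanism, which is the operational-maturity pattern the survey links to standardised platforms.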
OpenTelemetry has emerged as a dominant force in the ecosystem, reflecting how observability is evolving from a siloed tooling decision into a strategic pillar of cloud native operations.
It’s not technical, it’s organisational
For the first time, the primary challenge to cloud-native adoption is not technical – it’s organisational. As more teams standardise on cloud-native tools, the biggest obstacles have shifted from tool complexity and training to internal communication, team dynamics and leadership alignment.
As Kubernetes becomes the platform of choice for AI workloads and organisations scale their deployments, the next wave of innovation will hinge on resolving cultural adoption barriers, investing in platform engineering and evolving security and observability standards.
“Enterprises are aligning around Kubernetes because it has proven to be the most effective and reliable platform for deploying modern, production-grade systems at scale – including AI – and because of the ecosystem and community that support it,” said Hilary Carter, senior vice president of research at Linux Foundation Research. “This year’s data shows that the next phase of cloud native evolution will be as much about people and platforms as it is about the tech itself. Organisations that invest in both will have a clear advantage.”
Production usage of Kubernetes now stands at 82% among container users, and 66% of AI adopters use it to scale inference workloads. Kubernetes is no longer a niche tool; it's a core infrastructure layer supporting scale, reliability and, increasingly, AI systems.
Observability also continues to be critical as workloads become more dynamic. OpenTelemetry’s position as the second-highest-velocity CNCF project reflects strong momentum for vendor-neutral, standardised instrumentation. Teams now depend on real-time visibility to keep systems reliable and performant in production.
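The vendor-neutral, standardised instrumentation described above can be made concrete with a minimal, hypothetical OpenTelemetry Collector configuration: telemetry arrives over OTLP, is batched, and is exported to any OTLP-compatible backend. The endpoint below is a placeholder, not a real service.

```yaml
receivers:
  otlp:                             # applications send traces/metrics over OTLP
    protocols:
      grpc:
      http:
processors:
  batch:                            # batch telemetry before export
exporters:
  otlphttp:                         # any OTLP-compatible backend works here
    endpoint: https://observability.example.com:4318   # placeholder endpoint
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

Because the pipeline is backend-agnostic, teams can switch observability vendors without re-instrumenting their services, which is much of the appeal driving OpenTelemetry's momentum.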

