CNCF: AI is the new workload, cloud-native is the new OS

The great and the good of the (largely open source) cloud-native community gathered this week for KubeCon + CloudNativeCon 2025 (Americas) in Atlanta, Georgia.

The message from the Cloud Native Computing Foundation (CNCF) was abundantly clear this year.

AI is the new workload and cloud-native is the new operating system… and, together, they define the open future of computing.

You can put that on a t-shirt, etch it in stone or get it as a tattoo (temporary or real)… and you can use it as a mission statement and guiding edict for the way cloud computing will continue to evolve between now and the end of the decade (if not longer), because the weight of the CNCF member community appears to be fully behind this ethos and approach.

The CNCF underlined its position on this ‘state of the nation’ as it continues to work towards building sustainable ecosystems for cloud-native software. It announced the launch of the Certified Kubernetes AI Conformance Program at the event. The new programme introduces a community-led effort to define and validate standards for running AI workloads reliably and consistently on Kubernetes.

Common standards

The team says that the growing use of Kubernetes for AI workloads highlights the importance of common standards. 

According to Linux Foundation Research on Sovereign AI, 82% of organisations are already building custom AI solutions and 58% use Kubernetes to support those workloads. With 90% of enterprises identifying open source software as critical to their AI strategies, the risk of fragmentation, inefficiencies and inconsistent performance is rising. This certified programme is said to respond directly to this need by providing shared standards for AI on Kubernetes.

“As AI moves into production, teams need consistent infrastructure they can rely on,” said Chris Aniszczyk, CTO of CNCF. “This initiative will create shared guardrails to ensure AI workloads behave predictably across environments. It builds on the same community-driven standards process we’ve used with Kubernetes to help bring consistency as AI adoption scales.”

Cloud-native and AI are not separately developing technologies; they are part of the same movement to develop systems more intelligently. This was the opening statement made by CNCF executive director for cloud & infrastructure, Jonathan Bryce.

Inference is the difference

“When we look at the AI landscape… training (model development), inference (model serving) and applications & agents (when we connect models to humans, tools, apps, data and other agents) are all growing… and it is incredible to see how many open source technologies are impacting all three of these levels. We anticipate an increasing amount of work focused on inference in the months ahead and we see this reality playing out across many organisations,” said Bryce, during his Atlanta keynote.

Getting straight down to the command line, CNCF speakers followed Bryce to showcase working, hands-on Kubernetes demos.

OSTIF

Amir Montazery, managing director of OSTIF (Open Source Technology Improvement Foundation), also spoke during the keynote to explain why his team says that open source security “takes a village” today.

With open source (bug and CVE) fixes a key priority for his team, Montazery explained that OSTIF’s goal is to build internal and external confidence in any project’s security posture. Because “CNCF has always been so open on transparency” throughout its history, he noted, his team will release full audit results in time for next year’s Europe event in Amsterdam in the spring.

“Organisations can extend the cloud-native benefits of agility, flexibility and scalability to AI workloads. For example, the control, consistency and resource scheduling provided by container orchestration can be applied to the most challenging phase of AI: delivering fast, cost-effective and efficient inference at scale,” explained Rosa Guntrip, director within the AI business unit, Red Hat. “Through technologies like vLLM and llm-d’s integration with Kubernetes, enterprises can use a distributed inference framework that allows them to meet the inference demands of enterprise production environments. This enables enterprises to be as adaptable as possible for the future, with the ability to run any model, on any accelerator across any cloud.”
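vLLM serves models over an OpenAI-compatible HTTP API, so the Kubernetes-hosted inference Guntrip describes comes down to POSTing a standard chat-completion payload at an in-cluster Service. As a minimal sketch (the Service URL and model name below are illustrative assumptions, not details from the article):

```python
import json

# Hypothetical in-cluster endpoint for a vLLM server exposed through a
# Kubernetes Service; the hostname and model name are illustrative only.
VLLM_URL = "http://llm-service.default.svc.cluster.local:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> str:
    """Build an OpenAI-compatible chat-completion payload, the request
    format that vLLM's HTTP server accepts."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

body = build_chat_request("meta-llama/Llama-3.1-8B-Instruct", "Hello!")
# POST `body` to VLLM_URL with any HTTP client to run inference.
```

Because the request format is a shared standard rather than a vendor API, the same client code works whether the Service fronts one replica or a distributed llm-d deployment.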

Guntrip underlined her comments, saying that this open foundation helps organisations stay at the forefront of innovation while providing the transparency and interoperability that enterprises need as they move their AI projects into production.

“Cloud-native isn’t about replacing what came before; it’s about extending it. As AI becomes the defining workload of our time, enterprises need platforms that can bridge traditional VMs and modern containerised applications. That balance is what allows innovation to happen without creating new silos,” said James Sturrock, director of systems engineering at Nutanix.

Peter Farkas, CEO at Percona, is always voluble on this topic. How much does he agree with the statement, “AI is the new workload and cloud-native is the new operating system”… and that, together, they define the open future of computing?

Kubernetes, containers, operators, policies

“AI is unquestionably the demand driver for reshaping compute, storage and networking… and ‘cloud-native’ (containers, Kubernetes, operators, policies) does function like a distributed operating system: scheduling, packaging and governing how modern applications run at scale,” said Farkas. “Two caveats matter. First, AI is a dominant workload of the many, not the only one. Transactional and analytical systems, streaming and integration remain critical. They feed AI rather than get replaced by it. Second, some AI and other database workloads still run on bare metal, specialised accelerators, or managed services that bypass parts of the CNCF stack. What you run the database on should be a case‑by‑case engineering decision.”
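Farkas’s “distributed operating system” framing, with Kubernetes scheduling and governing access to accelerators, can be made concrete with a small sketch. Assuming the standard `nvidia.com/gpu` extended-resource key (the job name and container image below are hypothetical), a manifest asking the scheduler to place an AI job on a GPU node might look like:

```python
def gpu_job_manifest(name: str, image: str, gpus: int = 1) -> dict:
    """Build a Kubernetes Job manifest (as a plain dict, ready to serialise
    to YAML or JSON) that requests GPU accelerators. The resource key
    nvidia.com/gpu is the standard extended resource advertised by the
    NVIDIA device plugin; name and image here are hypothetical."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": name,
                        "image": image,
                        "resources": {
                            # The scheduler will only place this pod on a
                            # node advertising this many unallocated GPUs.
                            "limits": {"nvidia.com/gpu": str(gpus)},
                        },
                    }],
                }
            }
        },
    }

manifest = gpu_job_manifest("batch-inference", "ghcr.io/example/infer:latest", gpus=2)
```

The point of the sketch is the declarative contract: the workload states what it needs and the cluster’s scheduler, not the application, decides where (and whether) it runs, which is exactly the operating-system role Farkas describes.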

On the “open future” question, he agrees in principle, but says it’s not automatic. 

“Openness is a choice we each make in licenses, interfaces, data portability and customer freedoms. Our stance is simple: keep the data layer open and portable across clouds, meet customers where they are and reduce lock‑in while delivering outcomes. That’s how we align AI, cloud‑native and openness in practice,” added Percona’s Farkas.

Self-service structure, not stack

Benjamin Brial, founder of Cycloid, says that AI and cloud-native are driving both the conversations and the noise around the next era.

“But I would argue that calling cloud-native the ‘new operating system’ misses the point. What matters isn’t the stack, it’s how organisations are structuring self-service, governance, sovereignty and sustainability around it. Cloud-native has matured, it’s no longer about chasing new Kubernetes distributions, it’s about building developer experiences that actually scale safely and efficiently,” asserted Brial.

He says that “AI is the latest workload” to test how ready we are for that maturity. 

“What we’re seeing now is every organisation, from startups to public institutions, trying to figure out how to run AI efficiently, securely, and sustainably. Cloud Native gives us the flexibility to do that, but the real shift isn’t technical, it’s cultural. It’s not about spinning up bigger clusters or more GPUs. It’s about building platforms that give developers the freedom to experiment while keeping control and minimising waste. That’s actual innovation,” added Brial.

CNCF project & partner news

CNCF executive director for cloud & infrastructure, Jonathan Bryce (left) and Chris Aniszczyk, CTO of the Linux Foundation & CNCF (right).

Other related news commentary at this year’s KubeCon + CloudNativeCon 2025 Americas included updates on Falco, a CNCF graduated project, which now integrates with Stratoshark to connect real-time security alerts with forensic-level capture and analysis tools. The Fluent community released Fluent Bit v4.2, an update to its open-source telemetry processor.

Apptio, an IBM company, announced new FinOps solutions from IBM Cloudability and IBM Kubecost designed to enhance visibility and optimise cloud costs. Dash0 launched Agent0, its agentic AI platform for observability, pointing out the limitations of existing observability tools in lowering mean time to repair (MTTR) and stressing the need for tooling that can handle large volumes of telemetry data and complex queries.

Hyperscaler heights

Google is celebrating the 10th anniversary of GKE, highlighting its evolution as a managed container platform for all workloads (including AI) and its continuous innovation in Kubernetes to address the demands of large-scale AI deployment and efficient inference. AWS announced integrated backups for Amazon EKS, providing a fully managed service to simplify data protection for Kubernetes workloads by centralising backup operations.

Hyperscaler wannabe company Oracle announced Oracle AI Database 26ai and multi-cloud universal credits to give developers a more unified way to build, train and deploy intelligent applications. 

Honourable mentions also go to Valkey 9.0, which brings long-requested features and updates classic capabilities for today’s workloads, including atomic slot migrations, hash field expirations and numbered databases in cluster mode. The PyTorch Foundation has welcomed Ray as its newest foundation-hosted project, aiming to deliver a unified open-source AI compute stack and minimise AI computing complexity.

Other highlights of the show included news that Helm 4, the first major update to the Kubernetes package manager in six years, has been released, marking Helm’s 10th anniversary. Also, the CNCF Technical Oversight Committee (TOC) has voted to accept KServe as a CNCF incubating project; KServe joins a growing ecosystem of technologies tackling real-world challenges at the edge of cloud-native infrastructure.

Overall impression & takeaways

Every KubeCon + CloudNativeCon is different, obviously: with all the projects moving forward with so much fervour and momentum, each gathering feels like a fresh round of innovation (yes, sorry, we had to use that word) and there’s always loads to engage with at the KubeCrawl and CloudNativeFest drinks at the end of the day.

Then again, as the late, great Anthony Bourdain said, everything changes, everything stays the same: a lot of what you get at KubeCon + CloudNativeCon is the same sandwich bags and the same no-frills mess-hall setups to eat in, which is actually comforting, easy to work with and certainly allows everyone more time to think about the code.

Onward to Amsterdam in 2026!