KubeCon 2018: The rise and rise of Kubernetes

In this guest post, Jon Topper, CTO of DevOps and cloud infrastructure consultancy The Scale Factory, explains how the growing maturity of Kubernetes dominated this year’s KubeCon.

KubeCon + CloudNativeCon Europe, a three-day, multi-track conference run by the Cloud Native Computing Foundation (CNCF), took place in Copenhagen earlier this month, welcoming over 4,300 adopters and technologists from leading open source and cloud native communities.

The growing popularity of the container orchestrator Kubernetes was a defining theme of this year’s show, as it moves beyond being an early-adopter technology to one that end-user organisations are putting into production in anger.

What is Kubernetes?

Kubernetes provides automation tooling for deploying, scaling, and managing containerised applications. It’s based on technology used internally by Google, solving a number of operational challenges that may have previously required manual intervention or home-grown tools.
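
By way of illustration, here’s a minimal sketch using the official Kubernetes Python client – the namespace and the Deployment name (“web”) are illustrative assumptions, and you’d need a working kubeconfig for it to run. The point is the declarative model: you state the desired number of replicas and Kubernetes does the reconciliation an operator would otherwise do by hand.

    # Minimal sketch using the official Kubernetes Python client (pip install kubernetes).
    # Assumes a reachable cluster, credentials in ~/.kube/config, and an existing
    # Deployment called "web" in the "default" namespace (illustrative names).
    from kubernetes import client, config

    config.load_kube_config()

    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    # See what's currently running in the namespace.
    for pod in core.list_namespaced_pod(namespace="default").items:
        print(pod.metadata.name, pod.status.phase)

    # Declare a new desired state; the control plane reconciles the rest.
    apps.patch_namespaced_deployment(
        name="web",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )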

This year the Kubernetes project graduated from the CNCF incubator, demonstrating that it is now ready for adoption beyond the early adopter communities where it has seen most use thus far.

Many of the conference sessions at the show reinforced the maturity message, with content relating to grown-up considerations such as security and compliance, as well as keynotes covering some interesting real-life use cases.

We heard from engineers at CERN, who run 210 Kubernetes clusters on 320,000 cores so that 3,300 users can process particle data from the Large Hadron Collider and other experiments.

Through the use of cluster federation, they can scale their processing out to multiple public clouds to deal with periods of high demand. Using Kubernetes to solve these problems means they can spend more time on physics and data processing than on worrying about distributed systems.

This kind of benefit was reiterated in a demonstration by David Aronchick and Vishnu Kannan from Google, who showed off Kubeflow.

This project provides primitives to make it easy to build machine learning (ML) workflows on top of Kubernetes. Their goal is to make it possible for people to train and interact with ML models without having to understand how to build and deploy the code themselves.
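
To give a flavour of those primitives, the sketch below submits a training job as a Kubeflow TFJob custom resource through the ordinary Kubernetes API. Treat it as a hedged example rather than Kubeflow’s documented quick-start: the group/version (“kubeflow.org/v1alpha2”), the spec field names and the container image are assumptions that depend on which Kubeflow release is installed.

    # Hedged sketch: submitting a Kubeflow TFJob as a Kubernetes custom resource.
    # The apiVersion and spec layout below are assumptions tied to the Kubeflow
    # release of the time; check the CRDs installed in your cluster first.
    from kubernetes import client, config

    config.load_kube_config()
    crd_api = client.CustomObjectsApi()

    tfjob = {
        "apiVersion": "kubeflow.org/v1alpha2",
        "kind": "TFJob",
        "metadata": {"name": "mnist-train"},  # illustrative name
        "spec": {
            "tfReplicaSpecs": {
                "Worker": {
                    "replicas": 2,
                    "template": {
                        "spec": {
                            "containers": [{
                                "name": "tensorflow",
                                "image": "example.org/mnist-train:latest",  # hypothetical image
                            }],
                            "restartPolicy": "OnFailure",
                        }
                    },
                }
            }
        },
    }

    crd_api.create_namespaced_custom_object(
        group="kubeflow.org",
        version="v1alpha2",
        namespace="default",
        plural="tfjobs",
        body=tfjob,
    )

Once the object is accepted, the Kubeflow operator creates and supervises the worker pods – exactly the “don’t worry about deployment” experience the demonstration was making a case for.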

In a hallway conversation at the show with a member of the Kubernetes Apps Special Interest Group (sig-apps), I learned there are teams across the ecosystem working on providing higher-order tooling on top of Kubernetes to make application deployment of all kinds much easier.

It will eventually be the case that many platform users won’t interact directly with APIs or command line tools at all.

Commodity computing

This ongoing commodification of underlying infrastructure is a trend that Simon Wardley spoke about in his Friday keynote. He showed how – over time – things that we’ve historically had to build ourselves (such as datacentres) have become commoditised.

But spending time and energy on building a datacentre doesn’t give us a strategic advantage, so it makes sense to buy that service as a product from someone who specialises in such things.

Of course, this is the Amazon Web Services (AWS) model. These days we create managed databases with their RDS product instead of building and operating our own MySQL clusters.

At an “Introducing Amazon EKS” session, AWS employees described how their upcoming Kubernetes-as-a-Service product will work.

Amazon is fully bought into the Kubernetes ecosystem and will provide an entirely upstream-compatible deployment of Kubernetes, taking care of operating the control plane for you. The release date for this product was described, with some hand-waving, as “soon”.

In working group discussions and on the hallway track, it sounded as though “soon” might be further away than some of us might like – there are still a number of known issues with running upstream Kubernetes on AWS that will need to be solved.

When the product was announced at AWS re:Invent last year, Amazon boasted (based on the results of a CNCF survey) that 63% of Kubernetes workloads were deployed on AWS.

At this conference, it presented new figures showing that the number had dropped to 57% – could that be because both Google Cloud and Microsoft’s Azure already offer such a service?

Wardley concluded his keynote by suggesting that containerisation is just part of a much bigger picture where serverless wins out. Maybe AWS is just spending more time on their serverless platform Lambda than on their Kubernetes play?

Regardless of where we end up, it’s certainly an exciting time to be working in the cloud native landscape. I left the conference having increased my knowledge about a lot of new things, and with a sense that there’s still more to learn. It’s clear the cloud native approach to application delivery is here to stay – and that we’ll be helping many businesses on this journey in the years to come.
