Why next-generation IaaS is likely to be open source

In this guest post, Chip Childers, CTO of open source platform-as-a-service Cloud Foundry, makes the case for why the future of public cloud and IaaS won’t be proprietary.

Once upon a time, ‘Infrastructure-as-a-Service’ basically meant services provided by the likes of Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform (GCP), but things are changing as more open source offerings enter the fray.

This is partly down to Kubernetes which, alongside Docker and other container technologies, has done much to popularise containers and has ushered in a period of explosive innovation in the ‘container platform’ space. Kubernetes stands out in that space, and today it could hold the key to the future of IaaS.

A history of cloud computing and IaaS

History in technology, as in everything else, matters. And so does context. A year in tech can seem like a lifetime, and it is remarkable to look back at how much has changed in just a few short years since the dawn of IaaS.

Back then, the technology industry was trying to deal with the inexorable rise of AWS, and the growing risk of a monopoly emerging in the world of infrastructure provision.

In a bid to counteract Amazon’s head start, hosting providers started to make the move from traditional hosting services to cloud (or cloud-like) services. We also began to see the emergence of cloud-like automation platforms that could potentially be used by both enterprises and service providers.

Open source projects such as OpenStack touted the potential of a ‘free and open cloud’, and standards bodies began to develop common cloud provider APIs.

Following on from this, API abstraction libraries started to emerge, with the aim of making things easier for developers who wanted to work across several cloud providers rather than rely on just a few key players.
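To make the idea concrete, here is a minimal sketch using Apache Libcloud, one of the abstraction libraries of that era. The credentials and region are placeholders; the point is that swapping the provider constant is essentially the only change needed to target a different cloud.

```python
# Provider-neutral compute access via Apache Libcloud.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Swapping Provider.EC2 for, say, Provider.GCE (with that provider's
# credentials) leaves the rest of the code unchanged.
cls = get_driver(Provider.EC2)
driver = cls('ACCESS_KEY_ID', 'SECRET_KEY', region='us-east-1')  # placeholders

# List running instances through the same interface, whatever the provider.
for node in driver.list_nodes():
    print(node.name, node.state, node.public_ips)
```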

It was around this time that many of the cloud’s blue-sky thinkers first began to envisage the age of commoditised computing. Theories were posited claiming that providers trading capacity with one another, and regular price arbitrage, were just around the corner.

Those predictions proved premature. At that time, and in the years since, computing capacity simply wasn’t ready to become a commodity that providers could pass between each other: the implementation differences were too great.

That was then, but things certainly look different today. Although there are still major implementation differences between cloud service providers, including in the types of capabilities they offer, we can now see a way forward to the eventual commoditisation of computing infrastructure.

While even the basic compute services remain different enough to avoid categorisation as a commodity, this no longer seems to matter in the way that it once did.

That change has largely come about because most public cloud providers now offer managed Kubernetes clusters.

The shift has also been happening in private cloud, with many private cloud software vendors adopting either a ‘Kubernetes-first’ architecture or a ‘with Kubernetes’ product offering.

As Kubernetes continues its seemingly unstoppable move towards ubiquity, Linux containers now look likely to become the currency of commoditised compute.

There are still implementation differences, of course, with cloud providers differing in the details of how they offer Kubernetes services, but the path towards consistency now seems a lot clearer than it did a decade ago.
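As a rough illustration of that consistency, the sketch below uses the official Kubernetes Python client and assumes a kubeconfig is already in place. That kubeconfig could point at any provider’s managed offering or at a self-hosted cluster; the code itself is identical in every case.

```python
# The same client code works against any conformant Kubernetes cluster.
from kubernetes import client, config

# Load credentials from the local kubeconfig (~/.kube/config by default),
# which may point at a managed or self-hosted cluster.
config.load_kube_config()

v1 = client.CoreV1Api()

# An identical API call lists the nodes, whoever operates the cluster.
for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```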

This more consistent approach to compute, made possible by the open source nature of Kubernetes, now looks like the inevitable future of IaaS.
