Beyond Docker and Kubernetes: The container ecosystem continues to evolve

Enterprise interest in container technologies is on the rise, and organisations need to get clued up on who does what and how it can benefit them

Containers have come a long way over the past several years, evolving from a niche technology into a key platform for implementing modern cloud-native applications and services, and the ecosystem continues to evolve as adoption grows.

As a concept, containers have been around for many years as a method of partitioning the resources of a computer, much as virtual machines do.

While virtualisation operates at the bare-metal level, containers are provided by the operating system kernel, which gives each individual application or code module its own isolated execution environment.
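On Linux, that isolation is built from kernel primitives such as namespaces and cgroups. As a bare, Linux-only sketch – not a real runtime such as runc, which also sets up mount, network and user namespaces, cgroups and a root filesystem – the following Go program starts a shell in its own UTS and PID namespaces:

```go
// Minimal, Linux-only illustration of the kernel namespaces that
// containers are built on. Creating namespaces requires privileges.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch a shell in fresh UTS and PID namespaces: it gets its own
	// hostname and sees itself as PID 1, isolated from the host's view.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```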

Enterprises largely focused on using virtual machines until Docker gave containers a new lease of life by combining the technology with tooling that made its platform the perfect vehicle for agile development.

Because containers are more lightweight and faster to deploy than virtual machines, they have also gained favour as a way for organisations to adopt a microservices-based architecture and implement DevOps initiatives.

Since the Docker platform launched five years ago, the container ecosystem has expanded rapidly. That expansion matters because the technology initially lacked many of the supporting tools and functions – such as orchestration and load balancing – that had grown up around virtual machines, and developers rushed to fill the gaps.

Building out the ecosystem

On the orchestration side, there is now broad acceptance that Kubernetes has largely won that race. Not only is it used in a growing number of container platforms for on-site deployment, but all the major cloud providers also offer container services that incorporate Kubernetes as the orchestration layer.
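Everything in Kubernetes is driven through its API, so tooling can be written against a cluster directly. As a brief sketch using the official client-go library – the kubeconfig path and the default namespace here are illustrative assumptions:

```go
// Sketch: listing pods in a Kubernetes cluster with the official
// client-go library. Assumes a kubeconfig at the conventional path.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Println(pod.Name)
	}
}
```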

Meanwhile, moves have been made to establish greater standardisation in the basic technology that underpins containers, such as the container runtime (the engine that actually runs the containers), and file formats for storing and distributing container images.

On the runtime side, the Open Container Initiative (OCI) was founded under the aegis of the Linux Foundation to oversee this, and Docker contributed runc, a reference implementation based on its own technology that provides the basic functionality.
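In practice, an OCI runtime such as runc consumes a bundle: a root filesystem plus a config.json describing the process to run. A sketch that emits a pared-down config using the OCI runtime-spec reference types – the paths and command are placeholders:

```go
// Sketch: generating a minimal OCI runtime config.json using the
// runtime-spec reference types. Values here are illustrative only.
package main

import (
	"encoding/json"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	spec := specs.Spec{
		Version: specs.Version,
		Process: &specs.Process{
			Args: []string{"/bin/sh"}, // command the runtime will exec
			Cwd:  "/",                 // working directory inside the container
		},
		Root: &specs.Root{Path: "rootfs"}, // bundle-relative root filesystem
	}
	// An OCI runtime such as runc reads this as the bundle's config.json.
	json.NewEncoder(os.Stdout).Encode(&spec)
}
```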

Docker then incorporated runc into a more feature-rich runtime called containerd, and subsequently donated that project to the Cloud Native Computing Foundation (CNCF), the same body that oversees Kubernetes.

Docker continues to use containerd in its own products, and because containerd is built on runc, it remains compatible with the OCI specifications.
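containerd also exposes a Go client of its own for other platforms to build on. A rough sketch along the lines of containerd's getting-started example – the socket path, namespace and image reference are assumptions:

```go
// Sketch: pulling an image and creating a container via containerd's
// Go client, following its documented getting-started pattern.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to containerd's gRPC socket (default system path assumed).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull an image, then create a container from it; execution is
	// ultimately delegated to an OCI runtime such as runc.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	log.Println("created container", container.ID())
}
```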

Later, a new working group was formed at the OCI to create a specification for a standard container image format. Docker had a hand in this too: the specification drew heavily on its own V2 image manifest format, and Docker supports the resulting OCI image format in its platform.
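The image format itself is simple at heart: a manifest that points, by content digest, to a config blob and an ordered list of layer blobs. A sketch using the OCI image-spec reference types – the digests and sizes are placeholders, not real values:

```go
// Sketch: a skeletal OCI image manifest built from the image-spec
// reference types. Digests and sizes below are placeholders.
package main

import (
	"encoding/json"
	"os"

	digest "github.com/opencontainers/go-digest"
	specs "github.com/opencontainers/image-spec/specs-go"
	v1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	manifest := v1.Manifest{
		Versioned: specs.Versioned{SchemaVersion: 2},
		MediaType: v1.MediaTypeImageManifest,
		Config: v1.Descriptor{
			MediaType: v1.MediaTypeImageConfig,
			Digest:    digest.Digest("sha256:<config-digest>"), // placeholder
			Size:      1024,
		},
		Layers: []v1.Descriptor{{
			MediaType: v1.MediaTypeImageLayerGzip,
			Digest:    digest.Digest("sha256:<layer-digest>"), // placeholder
			Size:      4096,
		}},
	}
	json.NewEncoder(os.Stdout).Encode(&manifest)
}
```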

As a result, there are emerging standards for both container runtimes and container images. All container runtimes are expected to eventually comply with the OCI standards, meaning that if other parts of the infrastructure are also OCI compatible, it should be relatively easy to mix and match software components from different sources as part of a container deployment.

Solving the container security conundrum

A sticking point for enterprises wanting to use containers is how secure they are, because they do not offer the same level of isolation between instances that is enforced by the hypervisor in a virtual machine deployment.

This is because all containers running on a host machine access resources via calls to the same shared kernel, which leaves open the possibility that a vulnerability may allow code in one container to gain access to others.

New developments are seeking to address this in a couple of different ways. The OpenStack-backed Kata Containers project, which hit version 1.0 recently, follows the tactic of creating a lightweight virtual machine that acts like a container.

It accomplishes this with a runtime that complies with the OCI specifications, and thus looks to the outside world like any other container runtime. Under the covers, a hypervisor creates a lightweight virtual machine that encapsulates a minimal operating system kernel and the actual container.

This is similar to the way that some existing platforms integrate container support. The Pivotal Container Service (PKS) runs containers inside virtual machines on VMware’s vSphere or the Google Cloud Platform, while Amazon’s AWS runs several container services, all of which put containers inside EC2 instances.

While these all use standard virtual machines as container hosts, Kata Containers wraps each container in its own lightweight virtual machine that masquerades as an ordinary container.

Google has developed another approach to making containers more secure, through an open-source project called gVisor. This does not use a hypervisor, but instead acts like an extra kernel that sits between the host kernel and the container application.

The gVisor kernel runs with normal user-level privileges and intercepts system calls from the application, performing the work to service them instead. In other words, gVisor acts like a proxy or buffer layer, preventing the application from directly accessing the host kernel or other resources.
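On Linux, one way to intercept system calls from unmodified binaries is ptrace, which is also the mechanism behind gVisor's default platform. The sketch below (Linux/amd64 only, and nothing like gVisor's actual code) merely observes each call rather than servicing it:

```go
// Sketch: intercepting a child process's system calls with ptrace on
// Linux/amd64. A real interposer like gVisor services the calls itself;
// this toy tracer only prints their numbers.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"runtime"
	"syscall"
)

func main() {
	// ptrace requires every call to come from the same OS thread.
	runtime.LockOSThread()

	cmd := exec.Command("/bin/true")
	cmd.Stdout = os.Stdout
	cmd.SysProcAttr = &syscall.SysProcAttr{Ptrace: true}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = cmd.Wait() // returns once the traced child stops at exec

	pid := cmd.Process.Pid
	var regs syscall.PtraceRegs
	for {
		// Resume the child until it next enters or leaves a syscall.
		if err := syscall.PtraceSyscall(pid, 0); err != nil {
			break
		}
		var status syscall.WaitStatus
		if _, err := syscall.Wait4(pid, &status, 0, nil); err != nil || status.Exited() {
			break
		}
		if err := syscall.PtraceGetRegs(pid, &regs); err != nil {
			break
		}
		fmt.Println("intercepted syscall", regs.Orig_rax)
	}
}
```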

Both gVisor and Kata Containers carry the drawback of adding performance overhead and potential application compatibility issues. Compatibility is a particular concern for gVisor, with Google warning that it does not support every Linux system call.

Elsewhere, the broader containers ecosystem continues to expand, with third-party tools and platforms emerging on a regular basis to fill some of the missing pieces required to build an operational container infrastructure.

Some of these have been developed to provide persistent storage for stateful workloads, with examples such as StorageOS and Portworx. Other tools provide monitoring or advanced networking capabilities, and some projects have centred on building container image registries.

Other vendors and projects have focused on building a platform around containers to create a turnkey delivery pipeline supporting the entire build, test and deployment cycle of modern cloud-native applications, such as CircleCI and GoCD.

Arguably, many cloud providers, such as Amazon, already deliver such functionality, while traditional platform-as-a-service (PaaS) products such as Red Hat's OpenShift have morphed into developer platforms based around containers.

Containers may not be as mature a technology as virtual machines, especially in the area of management and orchestration, but the market is evolving rapidly as containers become the tool of choice for application development in the cloud era.
