Containerisation in the enterprise - Open Infrastructure Foundation: The flavours of containers

As businesses continue to modernise their server estate and move towards a cloud-native architecture, the elephant in the room is the monolithic core business application that cannot easily be rehosted without significant risk and disruption. 

These days, it is often more efficient to deploy an application in a container than in a virtual machine. Computer Weekly examines the modern trends, dynamics and challenges faced by organisations migrating to the micro-engineered world of software containerisation.

As all good software architects know, a container is a ‘logical’ computing environment in which a guest application runs abstracted away from the underlying host system’s hardware and software infrastructure resources. 
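Assuming a Linux host, that abstraction can be made concrete with a rough shell heuristic: a process inside a container typically sees a `/.dockerenv` marker file, or container-specific entries in `/proc/1/cgroup`, while a process running directly on the host does not. This is an illustrative sketch, not a guaranteed detection method:

```shell
#!/bin/sh
# Heuristic sketch only: guess whether this shell is running inside a
# container. Neither check is authoritative (for example, hosts using
# cgroup v2 may show nothing useful in /proc/1/cgroup), hence the
# hedged fallback message.
if [ -f /.dockerenv ] || grep -qE 'docker|containerd|kubepods' /proc/1/cgroup 2>/dev/null; then
  echo "inside a container"
else
  echo "on the host (or detection inconclusive)"
fi
```

The point of the sketch is the abstraction itself: the containerised process shares the host kernel, yet sees a filesystem and process tree curated for it by the runtime.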

So, what do enterprises need to think about when it comes to architecting, developing, deploying and maintaining software containers?

This post is written in full by Mark Collier in his capacity as the chief operating officer at the Open Infrastructure Foundation.

Collier writes as follows…

It’s universal: container management means Kubernetes. 

With nearly 100 certified Kubernetes distributions available to choose from, enterprises have lots of flavours of Kubernetes to evaluate. The concept that unites them all, however, is open infrastructure. 

Open infrastructure, defined

Open infrastructure is a general term for integrated, open source software stacks that developers and infrastructure operators can use to run infrastructure — in this case, Kubernetes — their way. Thanks to open infrastructure projects like OpenStack, Kata Containers, Ironic, StarlingX and Airship, users have the ability to run Kubernetes in the environment that suits their applications and business needs. 

To illustrate the concept, here are five examples of open infrastructure deployed in production by real organisations to support specific container workloads, in some cases at massive scale. 

CERN: OpenStack for VMs & Containers 

[As many people will know] CERN is the home of the Large Hadron Collider (LHC), a 27km circular proton accelerator that generates petabytes of physics data every year. To process all this data, CERN runs an OpenStack cloud (>300K cores) that helps scientists all around the world to unveil the mysteries of the universe. 

Open infrastructure is used to run all the IT services of the organisation, with OpenStack and Kubernetes playing central roles. Delivering these services with high performance and reliable service levels has been one of the major challenges for the CERN cloud engineering team, which constantly iterates the architecture and deployment model of its cloud control plane. Recently, the team moved its control plane from VMs into a Kubernetes cluster and has begun running on the container architecture at scale.

Baidu: Integrating Kubernetes, Kata Containers & OpenStack

Baidu, one of the largest Internet companies in [China and] the world, began a journey several years ago to offer its AI, cloud and edge computing services at massive scale by taking advantage of open infrastructure technologies such as Kubernetes, Kata Containers and OpenStack. 

Kata Containers is an open infrastructure project that provides lightweight VMs for nesting containers in production. The Baidu ABC Cloud Group and Edge Security Team integrated Kata Containers into the platform for all of Baidu’s internal and external cloud services, including edge applications. Their cloud products, including both VMs and bare metal servers, cover 11 regions in China with over 5,000 physical machines. 
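In a Kubernetes cluster, Kata is typically exposed through the standard RuntimeClass resource. A minimal sketch, assuming the node’s container runtime (containerd or CRI-O) already has a Kata handler registered under the name `kata`, might look like:

```yaml
# RuntimeClass telling the kubelet to launch matching Pods with the Kata
# handler; the handler name must match what is configured in the node's
# container runtime.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
```

A Pod then opts in by setting `runtimeClassName: kata` in its spec, and its containers run inside a lightweight VM rather than a plain namespaced process.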

Note: As explained on Webopedia, nested virtualisation refers to virtualisation that runs inside an already virtualised environment. In other words, it is the ability to run a hypervisor inside a virtual machine (VM), which itself runs on a hypervisor; you are effectively nesting a hypervisor within a hypervisor.
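On a Linux KVM host, whether nested virtualisation is allowed can be checked through the kernel module parameters. A sketch, assuming a machine where the `kvm_intel` or `kvm_amd` module may be loaded (on machines without KVM, neither file exists and nothing is printed):

```shell
#!/bin/sh
# Print the nested-virtualisation setting for whichever KVM module is
# loaded. "Y" or "1" means the host can run a hypervisor inside a VM;
# "N" or "0" means nesting is disabled.
for f in /sys/module/kvm_intel/parameters/nested /sys/module/kvm_amd/parameters/nested; do
  if [ -r "$f" ]; then
    printf '%s: %s\n' "$f" "$(cat "$f")"
  fi
done
```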

Verizon Media et al: Kubernetes on Bare Metal with Ironic

An open infrastructure jewel is the Ironic Bare Metal service. It allows users to manage bare metal infrastructure as they would virtual machines, and it provides an ideal infrastructure to host high-performance cloud applications and frameworks, including Kubernetes.


Ironic software is now managing millions of cores of compute worldwide, turning bare metal into automated infrastructure ready for today’s mix of virtualised and containerised workloads. Deployments range from the massive (Verizon Media running one of the world’s largest OpenStack clouds) to more modest scale. The strong commercial ecosystem supporting Ironic-based solutions also includes large users and commercial providers like Red Hat, Mirantis, China Mobile and SUSE.

StarlingX & Airship

Two more open infrastructure projects worth mentioning in the ‘many flavours of containers’ conversation are StarlingX and Airship, both of which are themselves containerised.

StarlingX is a complete cloud infrastructure software stack for the edge used by the most demanding applications in industrial IoT, telecom, video delivery and other ultra-low latency use cases. The project integrates together well-known open source projects such as Ceph, Kubernetes, the Linux kernel, OpenStack and more. In the 4.0 release (August 2020) the software was certified as a conformant Kubernetes distribution of the v1.18 version. 

T-Systems and Verizon are now deploying StarlingX as part of their infrastructure, using the platform’s distributed architecture to power use cases in the telecom industry.

Airship is a robust delivery mechanism for organisations that want to embrace containers as the new unit of infrastructure delivery at scale. Starting from raw bare metal infrastructure, Airship manages the full lifecycle of data center infrastructure to deliver a production-grade Kubernetes cluster. While Airship 2 remains in beta, development is quickly progressing towards a first-quarter general availability release. Airship 2 has been designated as a Certified Kubernetes Distribution through the Cloud Native Computing Foundation’s Software Conformance Program, guaranteeing that Airship provides a consistent installation of Kubernetes, supports the latest Kubernetes versions, and provides a portable cloud-native environment alongside other Certified Platforms. 

The Cloud Infrastructure Telco Taskforce (CNTT) — whose members include representatives from AT&T, Verizon Wireless, China Mobile and Deutsche Telekom — chose Airship for its OpenStack reference implementation, providing the global telecom community with a model for deploying NFV workloads using Airship.

Infrastructure technology trends come and go, but containerisation clearly has staying power. Fortunately, open infrastructure makes it possible for organisations to adapt the many flavours of container technologies to suit their unique needs, all backed by innovations from developer communities throughout the world.
