Containerisation in the enterprise - Red Hat: precision-picking practice for pot platform prowess

As businesses continue to modernise their server estate and move towards cloud-native architectures, the elephant in the room is the monolithic core business application that cannot easily be rehosted without significant risk and disruption. 

These days, it is often more efficient to deploy an application in a container than in a virtual machine. Computer Weekly now examines the trends, dynamics and challenges faced by organisations migrating to the micro-engineered world of software containerisation.

As all good software architects know, a container is a ‘logical’ computing environment in which a guest application runs abstracted away from the underlying host system’s hardware and software infrastructure resources. 

So, what do enterprises need to think about when it comes to architecting, developing, deploying and maintaining software containers?

This post is written in full by Erica Langhi in her capacity as senior solution architect EMEA at Red Hat. 

Langhi writes as follows…

The biggest upside of containerisation for the enterprise is the promise of more modular, scalable and resilient applications. However, running containers requires orchestration, monitoring and maintenance, which in turn represents a marked increase in complexity. 

To fully capitalise on the benefits offered by containerisation and to adequately manage the complexity that comes with it, it’s essential to ensure your underlying infrastructure is fit for your needs. So what should you be considering when establishing your container platform?

Choosing the right OS 

Red Hat’s Langhi: pick your (container) pot with precision.

A surprisingly common pitfall that occurs when running containers is choosing the wrong operating system to run them on. While you can run containers on any OS, there are very few use cases where it’s advisable to run an enterprise-grade container platform on anything other than Linux.

There are good reasons for this. Containers leverage key Linux kernel capabilities, such as SELinux, namespaces and control groups, to isolate the applications running inside them. In addition, virtually every tool used in container orchestration – including Kubernetes – was built using Linux concepts and uses Linux tooling and APIs to manage containers.
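The namespace mechanism is easy to see from user space. As a minimal sketch (assuming a Linux host with /proc mounted; the helper function name is ours, for illustration only), each symlink under /proc/&lt;pid&gt;/ns identifies one namespace the process belongs to – two processes sharing a namespace show the same inode value:

```python
import os

def linux_namespaces(pid="self"):
    """List the Linux namespaces a process belongs to.

    Each entry in /proc/<pid>/ns is a symlink such as 'pid:[4026531836]';
    the bracketed inode number identifies the namespace. Illustrative
    helper: requires a Linux kernel with /proc mounted.
    """
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):  # non-Linux hosts have no /proc/<pid>/ns
        return {}
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in os.listdir(ns_dir)}

if __name__ == "__main__":
    # Typically prints entries like 'mnt', 'net', 'pid', 'uts', 'cgroup'.
    for name, ident in sorted(linux_namespaces().items()):
        print(f"{name:10s} {ident}")
```

A container runtime creates fresh namespaces of these kinds for each container, which is why the same inspection run inside a container reports different inode numbers from the host.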

All the above means that, at the enterprise level, Linux is a no-brainer for running a container platform, since it minimises wasted system resources and developer time.

Looking beyond Kubernetes

Something that’s often ignored in the conversation around containers is what exactly Kubernetes does. Colloquially, we often over-simplify Kubernetes and describe it as an application that runs containers, but this isn’t quite right.

Rather, it’s more accurate to say that Kubernetes is a bundle of APIs, utilities and tools that handle computing resource management and container orchestration. This bundle does not provide everything that’s needed for a container platform. Along with the tools provided by Kubernetes, a complete and functional container platform needs networking, storage, a registry, logging and monitoring. All of these, along with an orchestration tool like Kubernetes, must sit atop the underlying OS.
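To make the ‘bundle of APIs’ point concrete, consider a minimal Deployment manifest (illustrative only – the names and image reference are placeholders). Kubernetes accepts this object through its API and keeps the requested replicas running, but notice everything the manifest does not specify: how traffic reaches the pods, which registry serves the image, where logs and metrics go. Those pieces come from the platform layers around Kubernetes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: registry.example.com/example-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
```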

Depending on the resources and needs of your organisation, you can choose to either invest in a commercial framework or build your own container platform. When it comes to the economic case, a commercial framework is often the most compelling choice, principally for the time it saves that would otherwise be spent developing, maintaining, installing and configuring a comprehensive container platform. 

In addition, a commercial framework will often already have done the legwork (and trial-and-error testing) of developing and configuring useful ‘out-of-the-box’ features for an enterprise environment. Chief among these is cloud-agnosticism, which can allow a container platform to operate seamlessly across different cloud providers.

Remembering the 4 C’s

When it comes to choosing your container platform, you should always make sure that it ties into your own specific organisational needs. So if you decide on a commercial framework, there’s a good rule-of-thumb test to assess how well it suits you – the 4 C’s:

  • Code – what kind and level of code contributions is the vendor making?
  • Customers – are there already real customers using the platform? To what extent do their operational needs resemble your own?
  • Cloud – where does the container platform run? Which cloud providers can you use with it?
  • Comprehensive – how complete is the portfolio of products and solutions – do they fit the needs of your entire team? Does the platform provide the scalability you’re looking for?
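One way to apply the 4 C’s in practice is as a simple weighted score card when comparing candidate platforms. A minimal sketch follows; the criteria come from the list above, while the weights and example scores are placeholders for each organisation to fill in:

```python
# The 4 C's as a weighted score card for comparing container platforms.
# Criteria are from the article; weights and scores are illustrative.

CRITERIA = {
    "code":          "Level of code contribution the vendor is making",
    "customers":     "Real customers whose operational needs resemble ours",
    "cloud":         "Runs with the cloud providers we use",
    "comprehensive": "Completeness of the portfolio and its scalability",
}

def score_platform(scores, weights=None):
    """Weighted average of 1-5 scores across the 4 C's."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    total_weight = sum(weights[c] for c in CRITERIA)
    return sum(scores[c] * weights[c] for c in CRITERIA) / total_weight

# Hypothetical scores for one candidate platform:
print(score_platform({"code": 4, "customers": 3, "cloud": 5, "comprehensive": 4}))
```

Raising the weight on, say, "cloud" for a multi-cloud strategy changes the ranking – which is exactly the point: the test only works when tied to your own priorities.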

There’s a final point to consider: a container platform might work well by itself, but it should never be implemented in isolation from the rest of your organisation’s operations and strategy. If a container platform hinders the ability to achieve other goals, then you shouldn’t feel afraid to go back to the drawing board with your container platform choice.

Picking a container platform is not a one-off choice.

Instead, you should think of your container platform as a piece of living infrastructure that will need regular iteration and change – just as the containers sitting atop it do.


