This is a guest post for Computer Weekly Open Source Insider written by Markus Eisele, developer adoption lead for EMEA at Red Hat & Chris Jenkins, EMEA chief technologist for infrastructure and platform security at Red Hat.
Much has been made across the IT industry of containers and of the various mechanics of the more composable, increasingly orchestrated elements of technology that make up this section of the total IT fabric.
As software application developers will know, containers are inherently part of the base foundation of Linux and always have been. A Linux container is simply a process that runs on Linux and shares a host kernel with other containerised processes.
Linux has also always been inherently part of the base foundation of Red Hat, naturally of course. So as we now work with containers in the era of cloud and deeper levels of application service virtualisation, what do we need to think about to keep containers running effectively… and above all, running securely?
This question forms a good part of the backdrop to Red Hat's acquisition of StackRox in February 2021. We know that security, governance and compliance can act as a bottleneck in application modernisation. StackRox's software provides visibility and consistency across all Kubernetes clusters, helping to reduce the time and effort needed to implement security while streamlining security analysis, investigation and remediation.
Red Hat’s Eisele and Jenkins write as follows…
While we can run containers on Linux easily enough, it is Kubernetes that allows us to address the orchestration of containers more directly. The facts are simple:
- Our contemporary approach to container-based software development features a myriad of working components that raise new types of security challenges.
- No development shop can really rely on one single vendor to fix all of its interwoven service security issues in highly distributed computing environments.
So where do we go next if we want to reap the advantages of container implementation in live production environments?
The open source choice trade-off
Perhaps the first thing to remember is the trade-off facing us here. The great thing about open source is that there are so many choices. The not-so-great (sometimes) thing about open source is that there are so many choices. So when it comes to security, where do you go to lock down the various container-based projects and provide the glue that makes them work together securely?
From our view with Red Hat OpenShift, we have worked to deliver the 'quality engineering' needed to lock in security and provide that glue across the whole software development lifecycle.
But in real-world working practice, users need to be able to apply a variety of security 'flavours' at different tiers of any given instance.
That means providing container security controls that secure the operating system the container runs on.

It also means looking after the security of the containers themselves, addressing network security across clusters and then adding further levels of container security to look after auditing, logging, identity management and so on.
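The layered controls described above can be sketched as a minimal admission-style check. This is a hypothetical illustration only: the field names mirror a Kubernetes pod spec, but the policy itself (non-root, no privileged mode, drop all capabilities) is a common hardening baseline, not OpenShift's actual implementation.

```python
# A hypothetical admission-style check in the spirit of the layered controls
# described above. Field names mirror a Kubernetes pod spec; the policy is
# illustrative, not a vendor's actual ruleset.

def check_pod_security(pod: dict) -> list[str]:
    """Return a list of policy violations found in a pod manifest dict."""
    violations = []
    for container in pod.get("spec", {}).get("containers", []):
        ctx = container.get("securityContext", {})
        name = container.get("name", "<unnamed>")
        # Containers share the host kernel, so they should not run as root.
        if not ctx.get("runAsNonRoot", False):
            violations.append(f"{name}: runAsNonRoot is not set")
        # Privileged containers bypass most kernel-level isolation.
        if ctx.get("privileged", False):
            violations.append(f"{name}: privileged mode is enabled")
        # Dropping all Linux capabilities is a common hardening baseline.
        if "ALL" not in ctx.get("capabilities", {}).get("drop", []):
            violations.append(f"{name}: capabilities are not dropped")
    return violations


pod = {
    "spec": {
        "containers": [
            {"name": "web", "securityContext": {"runAsNonRoot": True,
                                                "privileged": False,
                                                "capabilities": {"drop": ["ALL"]}}},
            {"name": "sidecar", "securityContext": {"privileged": True}},
        ]
    }
}

for violation in check_pod_security(pod):
    print(violation)
```

Run against the sample manifest, the hardened `web` container passes while the `sidecar` container is flagged on all three counts. Real clusters enforce checks like these at admission time, before a workload ever reaches a node.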
Automation, across the containerboard
Using Red Hat OpenShift as a case in point, we have worked to build a technology enabler capable of running containers with the control to apply patches, upgrades and other maintenance requirements and control mechanisms in the most automated way possible.
Crucially, we have looked at the need to apply this intelligence throughout the entire container lifecycle: from development to deployment, through updates, augmentations and maintenance, and onwards to secure container retirement, when and if the needs of the wider IT system demand it.
Nobody wants to get that call on a Sunday afternoon to perform tasks that could have been engineered for automatically, and this is key to how container implementation can now escalate further if it is engineered prudently. That prudent care through container automation extends naturally to scalability control: being able to bring a new node into operation when needed. This is fully automated Infrastructure-as-Code, so we can replicate the previous process whenever needed, especially if someone has left the company.
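The Infrastructure-as-Code idea above can be sketched as a simple reconciliation loop: the desired state lives in code, and a reconciler works out which nodes to provision or retire to match it. Everything here, including the function and node names, is an illustrative assumption rather than any real platform's API.

```python
# A hypothetical sketch of the declarative, Infrastructure-as-Code loop
# described above: desired state is declared in code, and a reconciler
# plans the actions needed to reach it. All names are illustrative.

def reconcile_nodes(desired: int, current: set[str]) -> list[tuple[str, str]]:
    """Compare the desired node count with current nodes and plan actions."""
    actions = []
    # Scale up: provision new nodes until the desired count is reached.
    for i in range(len(current), desired):
        actions.append(("provision", f"node-{i}"))
    # Scale down: retire surplus nodes, highest-numbered first by convention.
    for node in sorted(current, reverse=True)[: max(0, len(current) - desired)]:
        actions.append(("retire", node))
    return actions


# Scaling out from two nodes to four plans two provisioning actions.
print(reconcile_nodes(4, {"node-0", "node-1"}))
```

Because the plan is derived from declared state rather than from someone's memory, the same process replays identically on demand, which is exactly why it survives the departure of the person who first set it up.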
Containers now leaking into infrastructure
Scaling the backbone manually is a massively complex task on its own. The container is now leaking into the infrastructure layer, but from a DevSecOps perspective this is a good thing, because all the workings of the software system have coalesced into a more central location. Alongside that coalescence we also get cross-fertilisation, so that we have oversight of all working parts of the system. This is the point at which we can set a baseline to manage security threats and container controls.
Coming full circle then, as we make more use of containers and do so with key security controls in place, we must remind developers that a Fort Knox-secure infrastructure is worth nothing if they don't adhere to secure programming principles.
We can give people (coders in this instance, but yes, users too) a secure environment to operate in, but if people do reckless things inside it, that security counts for little. For container development and implementation, this means we can provide guidelines, provide trusted registries and create an operator hub that we encourage developers to stay inside. At the end of the day, anyone can download anything from the internet and create shadow IT in various forms, so we should always keep that reality in mind too.
Goldilocks and the distributed monolith
This story is all about avoiding the creation of a container-based application infrastructure and set of instances that really just represents an anachronism: a distributed, unsecured monolith.
If you give your developers an empty Kubernetes environment to play around in, there are risks involved. If you give developers an overly controlled, extremely prescriptive environment, they will get frustrated, because they want enough freedom to do what they are good at (developing), and developers are notorious for finding workarounds, so that's not a good idea either.
What we’re searching for is the distributed, containerised, functional and secure middle way. This is the Goldilocks moment i.e. porridge (or in this case agile application development and automated management) that is not too hot and not too cold, but just right.
Used correctly, containers represent a huge and secure developer advantage which then provides quantifiable business benefits, whatever kind of bear you happen to be.