Black Duck's open (source) truth: 'when good containers go bad'

This is a guest post for the Computer Weekly Open Source Insider column written by Tim Mackey in his capacity as technology evangelist at Black Duck Software, a firm specialising in open source application security and container management.

As detailed on Computer Weekly here, containers encapsulate discrete components of application logic provisioned only with the minimal resources needed to do their job.

Containers are easily packaged, lightweight and designed to run anywhere — multiple containers can be deployed in a single Virtual Machine (VM).

Mackey writes as follows…

Managing container infrastructure in a production environment is challenging because of the sheer scale of deployment, and one of the biggest problems is trust – specifically, trust in the application.

Quite simply, can you trust that all containers in your Kubernetes or OpenShift cluster are performing the tasks you expect of them?

Container assertions, you gotta make ’em

To answer that question, you need to make some assertions:

  • That all containerised applications were pen-tested and subjected to static code analysis;
  • That you know the provenance of the container through signatures and from trusted repositories;
  • That appropriate perimeter defences are in place and authorisation controls are gating deployment changes.

These assertions define a trust model but omit a key perspective – the attacker profile.

When defending against attackers at scale, you need to understand what information they use to design their attacks.

Shifting deployment responsibilities

When you’re using commercial software, the vendor is responsible for deployment guidance, security vulnerability notification and solutions for disclosed vulnerabilities. If you’re using open source software, those responsibilities shift.

When Black Duck analysed audits of over 1,000 commercial applications, we found the average application included 147 unique open source components. Tracking down the fork, version and project stability for each component is a monumental task for development teams.

Potential attackers know how difficult it can be to put together this information, and they exploit the lack of visibility into open source components. There are two primary ways for a hacker to mount an effective attack on a given open source component.

Component attack #1

First, they contribute code in a highly active area of the component to plant a back door, hoping that their code won’t be noticed in a rapidly evolving component.

Component attack #2

Second, hackers look for old, stable code. Why? Older code may have been written by someone who has since left the project, or who no longer recalls exactly why it was written that way – which makes subtle flaws in it less likely to be spotted and fixed.

The goal in both cases is to create an attack against the component, so hackers test, fail and iterate until they succeed or move on to another target.

Hacking the hackers

However, even against a well-prepared hacker, you can make a viable attack much harder to mount. Consider an attacker who recognises they're inside a container and assumes there are multiple containers with the same profile. As an administrator, you can randomise the memory load location (address space layout randomisation), set up kernel security profiles such as seccomp, and enable role-based access control. These are a few changes that make it harder for hackers to know whether they have created a viable attack or not.
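In a Kubernetes or OpenShift cluster, hardening of this kind is typically declared in the pod specification. The following is a minimal sketch, assuming a cluster whose container runtime supports seccomp; the pod and image names are hypothetical:

```yaml
# Minimal pod-level hardening sketch (names are hypothetical).
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true          # refuse to start if the image runs as root
    seccompProfile:
      type: RuntimeDefault      # apply the runtime's default seccomp filter
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false     # block setuid-style escalation
        readOnlyRootFilesystem: true        # attacker cannot modify the image
        capabilities:
          drop: ["ALL"]                     # drop all Linux capabilities
```

Address space layout randomisation itself is a kernel default on modern Linux hosts, and role-based access control is enabled cluster-wide by starting the Kubernetes API server with `--authorization-mode=RBAC`.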

We know that containerisation has increased the pace of deployment, creating questions of trust for many administrators.

Component visibility

The key to protecting your applications in production is maintaining visibility into your open source components and proactively patching vulnerabilities as they are disclosed.

If you assume from the outset your containers will be compromised, you can prepare by making changes that make it much harder to mount an attack from a compromised container.

This article is based on Tim Mackey's presentation from 20th October at DevSecCon, London: "When Good Containers Go Bad."