How has container security changed since 2020, and have we taken it too far?
While containers are now one of the most popular ways to deploy applications, it is fair to say that the adoption and implementation of security best practice to govern their use has not kept up
With the ongoing growth in the adoption of containers, we looked at what CISOs needed to do to ensure they were secure in a Security Think Tank column for Computer Weekly in June 2020.
Over the past two years, containerisation has continued to spread significantly – Gartner predicted in 2020 that it would be adopted by 75% of global organisations by the end of 2022, up from 30% at the time. Its popularity is greatly helped by the efficiency it enables compared to a more traditional operating system-based virtualisation.
The latter results in a lot of memory, disk space and other resources being consumed by applications that are never used. By contrast, a container-based approach breaks an application into smaller components and deploys only what is needed, thereby reducing the associated computing resources and time required.
But although containers are now one of the most popular ways to deploy applications, the adoption and implementation of security best practices and principles around their use have not kept up. The modular nature of containers can mean they fall outside the “wraparound” security that would be applied to a combined, monolithic deployment. In other words, in focusing on such small functions, security can get left out of the equation.
This points to the requirement for container security to be considered at a granular level, specific to each component, but also in a broader context. It therefore needs to be an integral element of an organisation’s overall security culture. This includes securing the container deployment environment and ensuring that trusted images – the layers of files used to create a container – are used, backed by adherence to company policies, best practices and the growing adoption of cloud security solutions.
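One practical way to enforce the trusted-images principle is to accept only images that come from an approved registry and are pinned by an immutable digest, so the content cannot silently change under a mutable tag. The sketch below is illustrative only: the registry names and the `is_trusted_image` helper are hypothetical, not part of any specific tool.

```python
import re

# Hypothetical allow-list of registries an organisation trusts.
TRUSTED_REGISTRIES = {"registry.example.com", "mirror.example.com"}

# An image pinned by digest ("...@sha256:<64 hex chars>") identifies exact
# content, unlike a mutable tag such as ":latest".
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def is_trusted_image(image_ref: str) -> bool:
    """Accept only digest-pinned images from an approved registry."""
    registry = image_ref.split("/", 1)[0]
    return registry in TRUSTED_REGISTRIES and bool(DIGEST_RE.search(image_ref))
```

A check like this would typically run as an admission control step at deployment time, rejecting any workload whose image reference fails the test.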
Indeed, containerisation turns the spotlight on security – and continues to highlight the challenges in protecting newer, highly dynamic infrastructure and IT services, while also blending this with protection strategies built around traditional, fixed assets.
Ensuring that container services operate as intended, with the least possible leeway for misuse or unanticipated outcomes, requires cyber security teams to have visibility over the threats and vulnerabilities facing IT applications and services. This enables them to monitor for possible attacks and mitigate them, as well as identify new and emerging threats.
Secure by design, least privilege and zero trust
Best practices such as secure by design, least privilege and a zero-trust approach are also called for.
DevSecOps is a key part of the secure-by-design approach as it brings the importance of security into everyone’s realm. DevOps, in combining the development and operations elements of building applications into a continuous lifecycle, starts to dissolve the line between the two sides to view them as one.
DevSecOps embeds security into this lifecycle, including (among others) threat modelling, static and dynamic code review, container security configuration, vulnerability assessment, security monitoring and access control. This results in security being built into every stage of the development and operations lifecycle. In this agile approach, everyone has a role to play in making that cycle secure, everyone has a responsibility, and everyone has visibility.
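The gating idea behind this lifecycle can be sketched very simply: each stage runs its security checks, and a failure at any stage blocks promotion to the next. The stage names and check results below are hypothetical placeholders, not a real pipeline definition.

```python
from typing import Callable

def run_pipeline(stages: dict[str, Callable[[], bool]]) -> str:
    """Run security checks in order; fail fast if any stage's check fails."""
    for name, check in stages.items():
        if not check():
            return f"blocked at {name}"  # do not promote the build further
    return "released"

# Example: static analysis passes, but the container configuration check fails,
# so the build never reaches the vulnerability scan or release.
result = run_pipeline({
    "threat-model-review": lambda: True,
    "static-code-analysis": lambda: True,
    "container-config-check": lambda: False,
    "vulnerability-scan": lambda: True,
})
```

In a real pipeline each check would invoke a tool (scanner, linter, policy engine) rather than return a hard-coded result, but the fail-fast structure is the point.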
Organisations that take a zero-trust path to security ensure that the identity of any user or device accessing the network is authenticated, authorised and continuously verified. This approach lends itself well to smaller modules, or containers, as security checks can be undertaken each time they are put into use to ensure the access is authorised, and that the application in question is being deployed for the purpose for which it was intended.
Controlling access to dynamic services (applications deployed and scaled when they are needed) at a container level is only possible with dynamic, software-defined supporting services. Microsegmentation, for example, builds networks between points that need connecting via software. These network segments exist only for the time that the connection is required.
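The idea of segments that exist only while a connection is required can be sketched as software-defined, time-limited allow rules: a connection between two endpoints is permitted only while an explicit, expiring rule is in place. All class and endpoint names below are illustrative assumptions, not a real microsegmentation API.

```python
import time

class SegmentPolicy:
    """Minimal sketch of ephemeral, software-defined network segments."""

    def __init__(self):
        self._rules = {}  # (src, dst, port) -> expiry timestamp

    def open_segment(self, src: str, dst: str, port: int, ttl_seconds: float):
        """Create a segment that exists only for the time it is required."""
        self._rules[(src, dst, port)] = time.monotonic() + ttl_seconds

    def is_allowed(self, src: str, dst: str, port: int) -> bool:
        """Deny by default; allow only an unexpired, explicitly opened rule."""
        expiry = self._rules.get((src, dst, port))
        return expiry is not None and time.monotonic() < expiry
```

The deny-by-default lookup reflects the zero-trust stance described above: absence of a rule means no connectivity, and expiry removes access without any explicit teardown step.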
In short, it is critical to continuously verify the identity of the requesting user or device, adopt the principle of least privilege so that users only have access to what they require to do their jobs, and establish a foundation for policy-based (software-defined) access controls.
Maintaining a real-time view on container vulnerability is important because new types of threat are continually emerging – the speed with which they are remediated could be critical.
Traditional vulnerability scanners can sometimes fall short in areas of specialisation such as the security of specific applications – dedicated ERPs, for example, can often be a “blind spot” in terms of their visibility to core SOC operations – and container vulnerability scanners are no different.
The highly specialised nature of containerisation requires a similar focus on purpose-built tools. The use of vulnerability management tools that are specific to container technology is recommended by NIST, along with integrating processes that look to resolve the vulnerabilities once they have been identified.
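The "integrating processes" step can be sketched as a triage routine: scanner findings above a severity threshold fail the check and are queued for remediation. The findings structure below is a hypothetical simplification, loosely modelled on the JSON output typical of container image scanners.

```python
# Rank severities so a threshold comparison is possible.
SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def triage(findings: list[dict], threshold: str = "HIGH") -> tuple[bool, list[str]]:
    """Return (passed, ids_to_remediate) for a set of scanner findings."""
    floor = SEVERITY_RANK[threshold]
    to_fix = [f["id"] for f in findings
              if SEVERITY_RANK.get(f["severity"], 0) >= floor]
    return (len(to_fix) == 0, to_fix)

# Illustrative findings with placeholder identifiers.
passed, queue = triage([
    {"id": "VULN-1", "severity": "CRITICAL"},
    {"id": "VULN-2", "severity": "LOW"},
])
```

Wired into the deployment pipeline, a failed triage would block the image and feed the queued identifiers into the remediation workflow, closing the loop NIST describes between identification and resolution.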
In conclusion, the efficiency that containerisation enables is driving its uptake. Like most technology developments, it introduces new security challenges. Addressing these reinforces the need for visibility and the basic principles of secure by design and least privilege.
As with all solutions where the focus is on agility and flexibility, careful attention must be paid to the design principles, standards and integrations needed to secure them, in order to take full advantage of the benefits they can offer.