
How APAC enterprises can keep pace with container security

For all the promises of containers, changes in architecture and practices associated with the technology bring new challenges and opportunities

From Thailand’s Ascend Money to Singapore’s PropertyGuru, containers and microservices are increasingly being adopted by companies in the Asia-Pacific region that are looking to speed up application development and become more agile.

In the case of Ascend Money, a Bangkok-based payment technology firm with operations across Southeast Asia, the use of containers and microservices has streamlined software development across six countries and boosted the morale of its developers.

Yet, for all the promises of containerisation, changes in architecture and practices associated with the technology bring a new set of challenges and opportunities.

“The added consistency can significantly improve security, while, on the other hand, the increased release speed and complexity can be a challenge for security teams who do not reduce their manual toil and rethink the traditional models,” said Jerome Walter, field chief information security officer at Pivotal Asia-Pacific and Japan.

Traditional security models have layered security products on top of IT systems to compensate for their underlying weaknesses, resulting in tensions and slower processes that are anathema to agile development. The alternative approach focuses on security outcomes and leverages technology tools and practices to achieve them.

Automation is key to maintaining security in a world of continuous delivery. Even as container platforms let developers focus on customer needs and produce code faster, security teams are now contributing to the same platform, building and automating security features and controls inside the tools used by developers rather than around them.

This is crucial as the automation and scale of container infrastructures can magnify the impact of supply chain attacks against widely used container images and public container repositories – if poor vulnerability management practices continue to prevail.

Docker Hub hack

In April 2019, Docker Hub, the world’s largest library and community for container images, was hacked, exposing the usernames, hashed passwords, and GitHub and Bitbucket access tokens of some 190,000 users – or 5% of Docker’s customer base.

Nilesh Jain, vice-president for Southeast Asia and India at Trend Micro, said that while the scale of the hack was not massive, the implications are worrisome, as Docker is used by some of the biggest companies in the world, including PayPal, Visa and GlaxoSmithKline.

In addition, research published by Kenna Security in 2019 revealed that many of the 1,000 most popular container images on Docker Hub contain at least one known vulnerability.

“Over 20% of those containers have at least one vulnerability considered high risk,” said Jain, adding that the oldest container on the list had 1.5 million pulls and is home to more than 431 open vulnerabilities. The highest number of vulnerabilities – totalling in excess of 2,000 – were found in the Keyvanfatehi/sinopia container, which was pulled some 1.7 million times.

Underscoring the importance of securing Docker hosts and containers, Jain advised enterprises to apply virtual patching to newly discovered vulnerabilities. In addition, real-time malware detection for the file systems used on Docker hosts – and within containers – is also indispensable and can ensure long-term security of these containers.
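To illustrate one building block of such file-system scanning, the sketch below flags files whose content digest appears on a list of known-bad hashes. This is a minimal, hedged example: the hash set, function names and threat feed are all illustrative, not any vendor's API, and a production scanner would do far more (signatures, heuristics, real-time hooks).

```python
import hashlib
from pathlib import Path

# Illustrative set of SHA-256 digests of known-malicious files; a real
# deployment would pull these from a continuously updated threat feed.
KNOWN_BAD_HASHES = {
    # Digest of the empty file, used here only as a deterministic stand-in.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large container layers don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_tree(root: Path) -> list[Path]:
    """Return files under `root` whose digest matches a known-bad hash."""
    return [p for p in root.rglob("*") if p.is_file() and sha256_of(p) in KNOWN_BAD_HASHES]
```

The same walk could be pointed at a Docker host's layer storage or a mounted container file system; only the root path changes.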

But threat detection and vulnerability patching are just two of the many aspects of container security. In a technical brief on the topic, Red Hat details what it calls the “10 layers of container security”.

Securing containers holistically

Vishal Ghariwala, regional product management director for application platform products at Red Hat Asia-Pacific, said these layers – from host operating systems to network, storage and the application programming interfaces (APIs) that grant access to containerised applications – are necessary to secure containers holistically.

Among the layers in Red Hat’s model, the security of container hosts, which are typically Linux servers, is of utmost importance, according to Ghariwala, “because if there’s a vulnerability, the entire container management platform will be compromised”.

The next two aspects of container security are about using trusted sources and registries, a point that was also stressed by Pivotal’s Walter. “A container platform stores the built images into a registry before pushing them into production. If the registry is compromised, it would allow an attacker to modify the images deployed in production. Thus, strict control of the registry is critical,” said Walter.
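One common control against the registry-tampering scenario Walter describes is digest pinning: record an image's content digest at build time and refuse to deploy anything that no longer matches. The sketch below shows the idea in miniature; the image tag, digest store and function names are assumptions for illustration, not a real registry's API.

```python
import hashlib

# Illustrative pinned digest recorded at build time; in practice this would
# come from a signed manifest or the CI system's build record.
PINNED = {
    "payments-api:1.4.2": "sha256:" + hashlib.sha256(b"layer-bytes").hexdigest(),
}

def verify_image(tag: str, blob: bytes) -> bool:
    """Reject an image whose content digest no longer matches the pinned value,
    which is what a tampered registry entry would look like."""
    expected = PINNED.get(tag)
    if expected is None:
        return False  # unknown image: fail closed
    actual = "sha256:" + hashlib.sha256(blob).hexdigest()
    return actual == expected
```

Because digests are content-addressed, an attacker who swaps an image in a compromised registry cannot keep the digest the same, so the check fails even if the tag looks identical.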

Red Hat currently operates a container catalogue that provides certified container images for various middleware, databases and language runtimes. “If there’s a major vulnerability, say in Node.js, all Node.js containers from Red Hat will be automatically rebuilt with all the new security fixes,” said Ghariwala.

On the registry side, the open source juggernaut has built strict controls that govern access to the container registry on its OpenShift container platform. When developers build images, there’s also a set of policies and processes, including vulnerability scanning, that must be applied before their images are certified to be available on the registry.

Managing the build process that spans the continuous integration, continuous delivery (CI/CD) pipeline is also key to sussing out security vulnerabilities, according to Ghariwala. Using scanners such as Black Duck Hub and JFrog Xray to check against known vulnerabilities in real time, developers can identify security loopholes and rebuild a container image at the infrastructure, middleware and application levels if need be.
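A scan gate of this kind typically consumes the scanner's machine-readable report and fails the pipeline on severe findings. The sketch below assumes a simplified report shape (a list of findings with `id` and `severity` fields, named here for illustration only) rather than the actual output format of Black Duck Hub or JFrog Xray.

```python
import json

def gate(report_json: str, fail_on: tuple[str, ...] = ("HIGH", "CRITICAL")) -> list[str]:
    """Return the vulnerability IDs that should block the build.

    `report_json` mimics a scanner report: a JSON list of findings, each
    with an `id` and a `severity`. An empty return value means the build
    may proceed; a non-empty one means the image must be rebuilt first.
    """
    findings = json.loads(report_json)
    return [f["id"] for f in findings if f["severity"].upper() in fail_on]
```

In a real pipeline this check would run after the image build step, and a non-empty result would fail the CI job before the image ever reaches the registry.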

And when it comes to container deployment, it is necessary to control what can be deployed within a cluster. “An example could be preventing containers that require root access, arising from code that is not well-written, from being deployed,” said Ghariwala.
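The root-access check Ghariwala mentions can be expressed as a small admission rule over the pod specification. The sketch below operates on a dictionary mirroring the relevant slice of a Kubernetes pod spec (`securityContext`, `runAsNonRoot`, `runAsUser` are real Kubernetes field names; the policy itself and its fail-closed behaviour are illustrative assumptions).

```python
def violates_root_policy(pod_spec: dict) -> list[str]:
    """Return names of containers that could run as root and should be rejected."""
    bad = []
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("runAsNonRoot"):
            continue  # the runtime itself will refuse to start this container as root
        # Fail closed: UID 0 is root, and an unspecified user is treated as
        # potentially root unless runAsNonRoot is set.
        if sc.get("runAsUser", 0) == 0:
            bad.append(c["name"])
    return bad
```

In a cluster, the equivalent enforcement point would be an admission controller or pod security policy rather than application code, but the decision logic is the same.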

Improper separation

Pivotal’s Walter noted that while an adequately managed container orchestration platform will reduce security risks to hosts or other containers, it is still common to see the improper separation of namespaces and file systems.

Privileged containers are another concern, he noted, as administrators of the container can effectively gain privileges on the host. “Similarly, hosting all containerised applications on a shared flat network increases the risk of lateral movement from one compromised application to another.

“Security teams should ensure the networking layer in use limits traffic and only allows applications to communicate with one another based on declarative business logic,” said Walter.
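The declarative, default-deny model Walter describes can be boiled down to an allowlist: each service names the services it may call, and anything undeclared is blocked. The service names and policy table below are hypothetical; in practice this logic lives in the platform's network layer (for example as network policies or service-mesh rules), not in application code.

```python
# Hypothetical declarative policy: each source service lists the
# destination services it is allowed to reach.
POLICY: dict[str, set[str]] = {
    "web": {"orders", "auth"},
    "orders": {"payments"},
}

def allowed(src: str, dst: str) -> bool:
    """Default-deny: traffic is permitted only if the source service
    explicitly declares the destination in the policy."""
    return dst in POLICY.get(src, set())
```

Under this model a compromised `orders` service cannot reach `auth`, even though both sit on the same cluster, which is exactly the lateral movement the flat shared network would otherwise permit.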

Finally, as enterprises deploy more containerised applications, API management will be central to managing and authenticating API calls to keep traffic flowing smoothly, as well as to support web single sign-on (SSO).

In a federated deployment model, Red Hat’s Ghariwala said API management, along with authorisation and authentication capabilities, will ensure the same level of security and access controls across multiple public clouds and on-premises datacentres.
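At its simplest, authenticating API calls consistently across environments means every gateway can verify the same credential. The sketch below shows one such mechanism, an HMAC-signed request check, using only Python's standard library; the shared secret and function names are illustrative, and a real gateway would use rotated keys and a standard token format such as JWT.

```python
import hashlib
import hmac

SECRET = b"shared-gateway-secret"  # illustrative; real deployments rotate and vault keys

def sign(payload: bytes) -> str:
    """Produce an HMAC-SHA256 signature a client attaches to its API call."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Check a request's signature; compare_digest keeps the comparison
    constant-time to avoid leaking information through timing."""
    return hmac.compare_digest(sign(payload), signature)
```

Because every gateway holds the same verification logic, a request signed in one cloud validates identically in another, which is the property federated API management is after.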

Containers and monolithic applications, though, will continue to co-exist for the foreseeable future, and most companies today have an application modernisation strategy to re-factor, re-platform or re-host their application portfolio.


To manage security in an environment dotted with containerised and legacy applications, a common approach is to embrace two-speed IT: traditional security policies and controls for long-lived servers, and a more dynamic method of continuous audit and verification for immutable and ephemeral workloads.

“This approach offers the benefit of using the appropriate practices in each ecosystem and avoids slowing down innovation with traditional policies not built to keep up with the dynamic nature of containers,” said Walter.

But it would be an opportunity often missed, according to Walter, to not take advantage of the immutable, dynamic and ephemeral nature of containers to protect existing monolithic servers.

“Refactoring critical functions used by users into microservices shields away hard-to-patch applications from the user devices (often the source of compromise). 

“This approach mitigates the risk of a persistent attack by reducing the need to change monolithic back-end applications and reduces the noise which often prevents detection of unauthorised activity.”
