Software developed and deployed to the internet today is built on the shoulders of giants. Even a simple “Hello World” web page depends on hundreds of millions of lines of code.
The surface area of modern systems is large. Applications and tools run in spaces managed by operating systems and components that depend on open source or proprietary libraries and frameworks. Access to protocol implementations such as TCP/IP is required for network communication, and that access is mediated by routers, load balancers and switches.
Security researchers and hackers discover many software vulnerabilities each day. For example, due to a “huge increase in the number of requests” Mitre – an organisation which maintains a database of Common Vulnerabilities and Exposures (CVEs) – recently warned users of delays registering new vulnerabilities as they work hard to keep up with existing flaws.
Given this landscape, it is simply not enough to build software to protect against attackers in the first place. Each component also needs people and processes in place to respond to newly discovered vulnerabilities by publishing fixes to that software. In addition, any organisation presiding over the operation of IT systems must take responsibility for the following:
- Ensuring each key piece of software they depend on has a maintenance group – often a third party – who will fix vulnerabilities and publish patches;
- Making sure there are continuous processes in place to accept and apply these fixes in software environments under their control;
- Identifying all software running on corporate systems.
If your team does not know what software you are running, you have little chance of ensuring your application of patches is up to scratch.
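Building that inventory can start very simply. As a minimal sketch, the helper below (the function name is illustrative) turns the JSON output of `pip list --format=json` into a component-to-version mapping you could record per host; the same idea applies to `dpkg -l`, `rpm -qa` or any other package manager's machine-readable output:

```python
import json

def parse_pip_inventory(pip_json: str) -> dict:
    """Turn the JSON output of `pip list --format=json` into a
    {package_name: version} mapping suitable for an inventory record."""
    return {entry["name"]: entry["version"] for entry in json.loads(pip_json)}

# Example using a captured snippet of `pip list --format=json` output:
sample = '[{"name": "requests", "version": "2.31.0"}, {"name": "urllib3", "version": "2.0.7"}]'
print(parse_pip_inventory(sample))
```

Storing such snapshots centrally, per host and per application, gives you something to diff against when a new advisory lands.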
Infrastructure as code and automated configuration management will give you much better control over the components, and versions of components, you are using than manual or “snowflake” approaches to configuring systems.
Minimising dependencies reduces the number of different components you need to keep an eye on. Equally, minimising the number of distinct versions of each component you are running will help. It requires discipline but, as systems scale, this attention becomes increasingly worthwhile.
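One way to make that discipline visible is to flag version sprawl automatically. The sketch below (function and host names are illustrative) takes per-host inventories and reports any component running more than one distinct version across the estate:

```python
from collections import defaultdict

def version_sprawl(host_inventories: dict) -> dict:
    """Given {host: {component: version}}, return the components that are
    running more than one distinct version across all hosts."""
    versions = defaultdict(set)
    for components in host_inventories.values():
        for name, version in components.items():
            versions[name].add(version)
    return {name: sorted(found) for name, found in versions.items() if len(found) > 1}

inventories = {
    "web-1": {"nginx": "1.24.0", "openssl": "3.0.13"},
    "web-2": {"nginx": "1.22.1", "openssl": "3.0.13"},
}
print(version_sprawl(inventories))  # nginx appears in two distinct versions
```

A report like this run nightly makes it obvious where consolidation effort should go as the system scales.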
Mainstream operating system (OS) distributions provide a great "one stop shop" for well tested, rapidly patched and easily deployed system components. Think twice about rolling your own package of a web server or database if the version supplied by your current version of CentOS or Ubuntu, for example, is good enough.
Developers should have a conversation with the rest of the team before adding a new application dependency on a library or framework. Are there vulnerabilities outstanding? Is there a group in place to respond if there are? If a vulnerability is found in an interesting but unsupported library you found on GitHub, you will need to fix it yourself.
If you are using a cloud platform some patching may be handled by suppliers or other teams. There will be significant differences between software as a service, platform as a service and infrastructure as a service offerings, so make sure you take the time to understand your patching responsibilities.
Know when there is a vulnerability
It is unusual to have a single source of truth that includes comprehensive information on all vulnerabilities in your system.
Operating system (OS) update frameworks, such as Windows Update or YUM/apt-get, and application dependency managers, such as Maven or pip, have features to inform you when new versions of components are available.
Adding automated tools such as OWASP Dependency-Check to your build process can flag dependencies with outstanding CVEs. OS suppliers and providers of major frameworks and programming languages should publish security notices via email or RSS feeds. Make sure you are subscribed to the relevant feeds and, for commercial suppliers, ensure they have a notification process and that you are subscribed to it.
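The core of any such check is a cross-reference between what you run and what has been reported vulnerable. As a minimal sketch, the function below compares an inventory against a list of advisories; the advisory dictionary format and the CVE identifier here are made-up stand-ins for whatever feed your tooling actually consumes:

```python
def affected_by_advisories(installed: dict, advisories: list) -> list:
    """Cross-reference an {component: version} inventory against advisory
    records of the form {"component", "bad_versions", "cve"} (an assumed,
    illustrative schema) and return the matches."""
    hits = []
    for adv in advisories:
        version = installed.get(adv["component"])
        if version in adv["bad_versions"]:
            hits.append((adv["component"], version, adv["cve"]))
    return hits

installed = {"libexample": "1.2.3", "othertool": "4.0.1"}
advisories = [
    {"component": "libexample", "bad_versions": ["1.2.2", "1.2.3"], "cve": "CVE-0000-0000"},
]
print(affected_by_advisories(installed, advisories))
```

Real tools such as OWASP Dependency-Check do considerably more (version-range matching, transitive dependencies), but the shape of the problem is the same.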
Social media such as Twitter can also be a great source of intelligence, not only about “mega-vulnerabilities” such as Shellshock and Heartbleed, but also about more niche and subtle hacks you may not be aware of.
Mailing lists such as Full Disclosure and Open Source Security can be an illuminating source of information on vulnerabilities and exploits, although the sheer volume on these lists can make them hard to keep up with.
A team mentality helps here: giving just one person or a distant team sole responsibility for watching all forms of incoming intelligence can easily create a single point of failure or a breakdown in communication.
Design your process to patch quickly
Any system change carries some form of integration and regression risk, which could cause disruption or compromise the integrity of the service. This is, however, balanced by the risk of not making the change at all.
By implementing a process where patches are integrated on a staging system for testing and verification, before being promoted to a production environment, you will be able to determine if the patch will work properly in the live environment.
Sometimes the risk of attack is greater than the integration risk. For example, on a bastion host connected directly to the internet, you might choose to apply the patch immediately and accept a small risk that it causes an issue.
Organisations with significant technical debt sometimes isolate hard-to-patch systems, but multistep attacks are now common enough to make this a high-risk last resort.
The longer it takes to deploy a patch or an update, the more time attackers have to hack into your system. You need to be able to carry out sufficient testing and verification quickly enough that your system is not left vulnerable to attack. Environments where major deployments happen every three months – and where applying a hotfix requires significant manual effort – might not provide the right balance of risks.
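One way to keep this trade-off honest is to measure the exposure window per system. The sketch below (an illustrative metric, not a standard) counts the days between an advisory's disclosure and the patch reaching a given host, treating unpatched hosts as still exposed:

```python
from datetime import date
from typing import Optional

def exposure_days(disclosed: date, patched: Optional[date], today: date) -> int:
    """Days a system was (or still is) exposed between vulnerability
    disclosure and patch deployment; unpatched systems count up to today."""
    end = patched if patched is not None else today
    return max((end - disclosed).days, 0)

print(exposure_days(date(2024, 3, 1), date(2024, 3, 8), date(2024, 4, 1)))  # 7
print(exposure_days(date(2024, 3, 1), None, date(2024, 4, 1)))  # 31: still exposed
```

Tracking this number across the estate makes it easy to see whether a quarterly release cadence is really leaving systems exposed for months at a time.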
Base your technical change process on continuous delivery principles that enable rapid and flexible deployment of changes to systems.
Use an automated deployment pipeline to promote and test new builds of application components and infrastructure component configurations. Automate integration testing to enable rapid system proving and get feedback on integration problems quickly.
Design and build your system for zero-downtime deployment techniques, so you can make an update to your system as many times as you need, any time of day, with low risk.
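A common zero-downtime pattern is the rolling update: patch a small batch of hosts, health-check them, then move to the next batch, so part of the fleet is always serving traffic. As a minimal sketch (host names and batch size are illustrative), the batching step looks like:

```python
def rolling_batches(hosts: list, batch_size: int):
    """Split a host list into sequential batches for a rolling deployment:
    patch one batch, verify health, then proceed, so capacity is never
    fully withdrawn at once."""
    for i in range(0, len(hosts), batch_size):
        yield hosts[i:i + batch_size]

for batch in rolling_batches(["app-1", "app-2", "app-3", "app-4", "app-5"], 2):
    print(batch)  # in a real pipeline: deploy patch to batch, health-check, continue
```

Orchestration tools implement this for you, but keeping the batch size and health checks explicit in your pipeline definition makes the risk trade-off reviewable.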
Use infrastructure as code and automated configuration management techniques to help drive infrastructure configuration from a model stored in source control. Apply the same continuous delivery and automated integration testing approaches to your infrastructure, avoiding any manual steps by focusing on automation techniques to deploy changes as quickly and safely as possible within your organisation.
As well as being a hot topic in the industry, DevOps and continuous delivery approaches enable the team thinking and velocity of change required to keep up with the constantly evolving threat landscape.
Investing in these approaches will not only improve your ability to meet business goals, but can also help keep your systems up to date with patches.
Jim Gumbley is a security expert at global IT consultancy ThoughtWorks.