We often hear that we need to keep software up to date by applying security patches as soon as possible, but this is not as simple as it sounds, because patching operating systems can cause some bespoke software to misbehave. So what exactly should we be patching, how quickly should we patch, and how do we ensure critical software runs properly on a newly patched system?
It is true that prompt patching is essential. When a new patch is released, attackers use readily available tools to compare the patched code with the unpatched version and identify the vulnerability being fixed. This can be done in minutes rather than hours, which means hackers often release malware (or adapt existing malware) to exploit that vulnerability within hours of the patch being released.
Estimates vary, but it is generally recognised that around 80% of attacks use vulnerabilities for which patches already exist, and most use vulnerabilities that could have been patched more than a year before the attack. The statistics also show that the majority of attacks use the most common exploits, so patching these first, prioritised by threat level, will also help to reduce risk. Most guidance recommends patching within a week, but in practice this can be extended to a month (except for critical vulnerabilities) with little increase in risk.
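The prioritisation described above can be sketched as a simple scoring exercise: deal first with vulnerabilities that are known to be exploited in the wild, then order by severity. The data, field names and weighting below are illustrative assumptions, not part of any real tool.

```python
# Toy sketch: order an outstanding patch backlog by threat level so the
# most dangerous, actively exploited vulnerabilities are fixed first.
# The CVE identifiers, CVSS scores and fields are hypothetical examples.

def patch_priority(vulns):
    """Sort vulnerabilities: exploited-in-the-wild first, then by CVSS score."""
    return sorted(
        vulns,
        key=lambda v: (v["exploited_in_wild"], v["cvss"]),
        reverse=True,
    )

backlog = [
    {"id": "CVE-A", "cvss": 9.8, "exploited_in_wild": True},
    {"id": "CVE-B", "cvss": 5.3, "exploited_in_wild": False},
    {"id": "CVE-C", "cvss": 7.5, "exploited_in_wild": True},
]

for v in patch_priority(backlog):
    print(v["id"])   # CVE-A, then CVE-C, then CVE-B
```

In practice the ordering would draw on real threat intelligence rather than a single boolean, but the principle is the same: the common, actively exploited vulnerabilities go to the front of the queue.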
Patching needs to cover not only operating systems, but also office applications, bespoke applications and third-party applications. To be sure your patching is up to date, it is advisable to use vulnerability checking tools or services that scan the environment and report any missing patches or known vulnerabilities. This will highlight failed patches and any unknown applications outside your patching regime.
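As a rough illustration of what such a check does, the sketch below compares an installed-software inventory against minimum patched versions and flags both gaps and software that is outside the patching regime entirely. The product names, versions and data format are hypothetical.

```python
# Toy sketch of a patch-level check: compare installed versions against
# the minimum patched version for each product, and flag both missing
# patches and software not covered by the patching regime.
# All product names and version numbers are hypothetical examples.

REQUIRED = {              # product -> minimum patched version (assumed data)
    "officeapp": (16, 2),
    "webserver": (2, 4),
}

installed = {
    "officeapp": (16, 1),  # behind the required level: patch missing
    "webserver": (2, 4),   # up to date
    "legacyapp": (1, 0),   # not in the patching regime at all
}

def patch_report(installed, required):
    """Return (products missing patches, products outside the regime)."""
    missing = [p for p, v in installed.items()
               if p in required and v < required[p]]
    unknown = [p for p in installed if p not in required]
    return missing, unknown

missing, unknown = patch_report(installed, REQUIRED)
print("Missing patches:", missing)
print("Outside patching regime:", unknown)
```

A real vulnerability scanner works from live discovery and a vulnerability feed rather than a hand-maintained table, but the output is the same in spirit: failed or missing patches, plus software nobody knew was there.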
Patching is becoming easier than it once was. In many cases it can be automated by the software provider and, if you are using software as a service (SaaS), it is done by the provider. Where you are using only Microsoft Office applications on a Windows host, rigorous testing may not be necessary.
But mission-critical applications will still need to be tested on newly patched operating systems before the patch is deployed across the enterprise. This inevitably delays deployment, and while you may not be thanked for bringing down a business-critical application to mitigate a “theoretical” risk of a cyber attack, you still need to patch.
Patching is therefore a risk management exercise: balancing the risk of an unpatched vulnerability against the risk of taking down a critical application with an untested patch. Where time is needed for testing, it may be possible to mitigate the risk in the interim through existing security appliances, such as web application firewalls, network firewalls and host firewalls, reducing the likelihood of intrusion to an acceptable level.
If testing goes badly and a business-critical application fails, you will need to mitigate the risk until the application is updated to run on the patched system. Where you don’t have access to a test system, one approach can be a phased rollout starting with a limited number of users to minimise any impact caused by patching. In all cases, you will also need a rollback plan in case things go wrong.
In summary, patching should be treated as a risk management exercise. Patch as soon as is practical and use automated patching where possible to reduce cost. While critical systems are being tested, mitigate the vulnerability by other means to keep the risk at an acceptable level until the patch can be deployed.