Digital transformation projects are under way across most organisations, making IT estates more complex, with different technologies working together to enable the data flows and business processes that are crucial to the effective operation of the enterprise.
However, this interconnectivity means that disruption to any system in that flow can affect operational output, and it also gives attackers more to take advantage of, enabling them to move laterally through an organisation's network and systems.
Core security processes such as vulnerability management must adapt to address the new risks posed by all of this interconnectivity.
From a security standpoint, it is easy to suggest that all known vulnerabilities should be resolved, but the implications of applying patches and fixes need to be viewed from a wider perspective.
Downtime of critical systems, time to test patches before pushing through to production environments, and the availability of personnel to carry out all the necessary activities are just some of the factors that determine an organisation’s ability to remediate vulnerabilities in its systems. Combine those with the increasing volume of vulnerabilities reported and the true scale of the problem becomes clear – along with the reality that 100% effective vulnerability management is nigh on impossible to achieve.
Despite the gloomy prognosis, a solid foundation of vulnerability and patch management is still an essential control. The problem, however, is that organisations have ever-expanding lists of vulnerabilities that need to be managed. There will be those that have been on the radar for some time, but for which there is no patch, no downtime possible, or no way of applying mitigating controls, for example. There will also be applications, servers or networks that cannot simply be replaced or upgraded, and ones for which downtime is never scheduled.
In addition, programmes of this nature are likely to focus on vulnerabilities that pose a high risk to the organisation – particularly those detected in critical, or “crown jewel”, systems – as it is logical to try to address those that have common exploits, are baked into the toolkit of any entry-level attacker or could significantly impact the organisation.
But a focus on high-risk vulnerabilities potentially leaves many lower-risk ones available to attackers, who use them as network entry points, chaining them together, rather than exploiting each in isolation. As a result, they can explore networks, applications and all of the interconnectivity in place to exploit what they can, regardless of the Common Vulnerability Scoring System (CVSS) number, or equivalent rating, given to each vulnerability based on its ease of exploitation and the damage it can do.
Suddenly, a low-risk vulnerability on a remote server can be an open door to an application that was previously believed to be protected.
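The chaining problem can be sketched in a few lines of code. This is a purely illustrative example – the host names, CVSS scores and patching threshold are all hypothetical – showing how a triage policy that only remediates "high" vulnerabilities can leave a complete attack path open:

```python
# Illustrative sketch (hypothetical hosts and scores): triaging each
# vulnerability in isolation by CVSS can miss a chain of low scores.
CVSS_PATCH_THRESHOLD = 7.0  # assumed policy: only "high" vulns get fixed

# Edges: (from_host, to_host, cvss) - each a known, unpatched vulnerability
vulns = [
    ("internet", "remote-server", 3.1),
    ("remote-server", "file-share", 4.2),
    ("file-share", "hr-application", 5.0),
]

# None of these would be prioritised on its own...
assert all(score < CVSS_PATCH_THRESHOLD for _, _, score in vulns)

def reachable_from(start, edges):
    """Hosts an attacker can reach from `start` by chaining vulnerabilities."""
    reached = {start}
    changed = True
    while changed:
        changed = False
        for src, dst, _ in edges:
            if src in reached and dst not in reached:
                reached.add(dst)
                changed = True
    return reached

# ...yet chained together they reach the "protected" application.
print("hr-application" in reachable_from("internet", vulns))  # True
```

A path-aware view would flag the remote server for remediation even though its individual score sits well below the patching threshold.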
The complete view
Many organisations can benefit from taking a step back. Rather than focusing on the critical and high-risk vulnerabilities as single entities, a holistic view of the IT estate helps to identify likely targets, the key data flows through the organisation, the people whose system access could be used maliciously, should it be compromised, and so on.
Andrew Morris, Turnkey Consulting
Taking a step back can also enable organisations to reassess what is perceived as critical. There might be a critical vulnerability on an internal application, for example, but if that application is not connected to anything else and does not store any highly confidential information, then it poses less risk to the organisation than other vulnerabilities.
Organisations should use threat intelligence to understand who is likely to attack them and the methods they could use. Knowing that web applications are probably a key target allows remediation to be prioritised there, for example – although this can be easier said than done if the internal security team has no control over the cloud application, and SOC 2 reports (which provide assurance that a service is provided securely) state there is no issue.
Red teaming, in which companies simulate real attack methods to test their defences, is another option, as is using frameworks such as MITRE ATT&CK, which map systems, processes and people to determine how attackers would gain access to an organisation. By understanding the methods used, and what can be exploited, vulnerability management teams can prioritise what needs to be protected, with the overall result being a more secure enterprise.
Critical assets – the systems within an organisation’s network considered to be either higher priority targets for attackers, or more valuable to the organisation – should be identified and protected from external attack. But as noted above, lower-profile systems are also attractive to infiltrators, and once inside the network, an attacker may look like an internal resource and therefore go undetected.
Guarding against this risk requires critical assets to also be protected from internal threats. Networks can be segmented into trust levels, putting additional barriers between them and the potential entry points, or a zero-trust model can be adopted, which ensures all digital interactions are continuously validated.
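The idea of segmenting by trust level can be sketched as a simple policy check. The zone names, trust values and allow-list below are hypothetical, and real segmentation is enforced in firewalls and network fabric rather than application code – this just illustrates the rule that traffic flowing "up" into a more trusted segment needs an explicit exception:

```python
# Illustrative sketch (hypothetical zones and services): traffic into a
# higher-trust zone is denied unless explicitly allow-listed.
TRUST = {"dmz": 1, "corporate": 2, "crown-jewels": 3}
ALLOW_LIST = {("corporate", "crown-jewels"): {"backup-agent"}}

def is_allowed(src_zone, dst_zone, service):
    # Lateral or downward traffic is permitted by default in this sketch;
    # a zero-trust model would validate those flows too.
    if TRUST[dst_zone] <= TRUST[src_zone]:
        return True
    return service in ALLOW_LIST.get((src_zone, dst_zone), set())

print(is_allowed("dmz", "crown-jewels", "ssh"))                 # False
print(is_allowed("corporate", "crown-jewels", "backup-agent"))  # True
```

Under a zero-trust model, the default-allow branch would disappear: every flow, in every direction, would be validated against identity and context.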
Systems-based risk management
The multiple interconnected systems on which organisations rely mean a disruption to one could significantly impact the business. This can occur for a range of reasons – including human error, system overload and untested configuration changes – but it is also a route for attackers to obstruct operations.
Controls, including business continuity and disaster recovery planning, user training and awareness, and effective monitoring, can be introduced to protect the processes and reduce the impact to the business should an event occur. But the first step is to understand the risks by shifting from a components-based approach to risk management to a systems-based one, which identifies and analyses the interactions between each element of an interconnected IT system network to determine the overall risks to operational output.
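A systems-based view can be sketched as a dependency graph of the estate. The component names below are hypothetical; the point is that asking "which business outputs does this component's failure disrupt?" surfaces risk that a component-by-component assessment would miss:

```python
# Illustrative sketch (hypothetical components): model the estate as a
# dependency graph and trace which outputs a single failure disrupts.
dependencies = {
    "order-processing": ["erp", "payment-gateway"],
    "erp": ["database", "auth-service"],
    "payment-gateway": ["auth-service"],
}

def impacted(failed, deps):
    """Return every modelled system that transitively depends on `failed`."""
    def depends_on(node):
        children = deps.get(node, [])
        return failed in children or any(depends_on(c) for c in children)
    return {n for n in deps if depends_on(n)}

# A component-level view rates "auth-service" on its own merits; the
# systems view shows its failure disrupts every path to order processing.
print(impacted("auth-service", dependencies))
```

Here a seemingly minor shared service turns out to sit under every revenue-generating flow, which is exactly the kind of interaction a components-based assessment tends to overlook.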
The shift to cyber resilience
The more interconnected organisations become, the larger their digital estates grow and the more they rely on technology – which opens them up further to external threats. Closing down every vulnerability is too difficult to achieve; instead, organisations need to shift towards cyber resilience, which can be supported by a layered approach to security.
Read more from the May 2022 Security Think Tank series
- Solving for complexity in the network by Mike Lloyd of Redseal.
- Defenders must get out ahead of complexity by Jack Chapman of Egress.
- Identify, assess and monitor to understand attack paths by Rob McElvanney of PA Consulting.
- Understanding attack paths is a question of training by Mike Gillespie of Advent IM.
- Yes, zero trust can help you understand attack paths by Paul Holland of the ISF.
- To follow a path, you need a good map by Petra Wenham of the BCS.