
Security Think Tank: Cyber metrics need to be meaningful

What should be the key cyber security risk indicator for any business?

Metrics may be very easy or difficult to generate, but they need to be meaningful and relevant to the business to aid decision-making. Typically, two types of metric can be used. Firstly, there are general metrics that measure the impact of cyber attacks, for example:

  • Direct company financial losses (such as fraud and theft).
  • Direct customer financial losses.
  • Damage to reputation.
  • Cost of business disruption.
  • System downtime due to an attack.

Secondly, there are metrics related to reducing the potential for attack, for example:

  • Percentage of business partners/suppliers with effective cyber security policies and, separately, percentages of those that have access to your network.
  • Time taken to mitigate critical vulnerabilities.
  • Regularity and speed of patching.
  • Number of users with administrator access and, of those, the number using hosts with direct internet access and email.
  • Time taken to deactivate former employees’ access.
  • User behaviour (security training initiatives, or breaches due to user error).

Whereas the former metrics are reasonably consistent and can be considered together to present a coherent picture, the latter are disparate and each serves a different business purpose, so they can be harder to map to business risk. The cost of capturing the information also needs to be weighed against the value derived from it.
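
As an illustration, a couple of the attack-reduction metrics above could be computed along the following lines. The Python sketch below uses purely hypothetical vulnerability and user records, with made-up field names and figures; it is a sketch of the idea, not a prescribed implementation.

  from datetime import datetime
  from statistics import median

  # Hypothetical vulnerability records: (severity, date found, date mitigated)
  vulns = [
      ("critical", datetime(2024, 5, 1),  datetime(2024, 5, 4)),
      ("critical", datetime(2024, 5, 10), datetime(2024, 5, 21)),
      ("high",     datetime(2024, 5, 2),  datetime(2024, 5, 30)),
  ]

  # Hypothetical user records: (name, is_admin, host has direct internet access and email)
  users = [
      ("alice", True,  False),
      ("bob",   True,  True),
      ("carol", False, True),
  ]

  # Time taken to mitigate critical vulnerabilities (median, in days)
  critical_fix_days = [(fixed - found).days
                       for sev, found, fixed in vulns if sev == "critical"]
  print("Median days to mitigate critical vulnerabilities:", median(critical_fix_days))

  # Number of users with administrator access and, of those, the number
  # using hosts with direct internet access and email
  admins = [u for u in users if u[1]]
  exposed_admins = [u for u in admins if u[2]]
  print(f"Admin users: {len(admins)}, of which internet/email-exposed: {len(exposed_admins)}")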

Ideally, a risk assessment will have been carried out when the system was designed, or as part of ISO 27001 certification, and then updated regularly. This should identify the risks to the business from a cyber attack and the critical assets that give rise to that risk and therefore need to be protected, and so establish a potential impact value. This can be the starting point for identifying business-relevant metrics that help to reduce that risk.

Business-relevant metrics will differ between types of organisation, and the risk assessment will help to identify this and allow metrics to be tailored to focus on critical systems. For example, where customer-facing services are provided over the internet, there will probably also be back-office systems supporting less critical functions. In this case, the internet-facing system should be prioritised and the metrics should be defined and collected separately for each system.

Having a single metric for both may distort the view of the critical asset or lead to prioritisation of the less critical asset to improve the figures.

When defining a metric and collecting data, both should relate as closely as possible to the real risk. For example, many companies use simulated phishing campaigns to test how well their users identify malicious emails. The solutions that support this are designed to provide metrics showing how the workforce has performed, and these can show improving results over time, particularly if the same style of phishing email is used from one campaign to the next.

However, although some solutions do diversify the emails, this still does not measure performance against real email. Also, most companies will have technical measures that block the majority of phishing emails, so only a small proportion (possibly only 5-10%) get through to users.

A better approach may therefore be to measure the percentage of phishing emails blocked by technical measures, the percentage of those that got through that were clicked on by users, and the percentage that were reported by users. In many cases, the last of these is the most important, because a report from a single user allows the overall problem to be investigated and cleaned up.
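
By way of example, those percentages can be derived from a handful of counts per campaign. The sketch below assumes hypothetical figures for a single simulated campaign; the field names are illustrative and not taken from any particular product.

  # Hypothetical counts from one simulated phishing campaign
  campaign = {
      "sent": 1000,      # phishing emails arriving at the gateway
      "blocked": 930,    # stopped by technical measures before delivery
      "clicked": 12,     # delivered emails whose link or attachment was opened
      "reported": 25,    # delivered emails reported by users
  }

  delivered = campaign["sent"] - campaign["blocked"]

  pct_blocked  = 100 * campaign["blocked"] / campaign["sent"]
  pct_clicked  = 100 * campaign["clicked"] / delivered if delivered else 0.0
  pct_reported = 100 * campaign["reported"] / delivered if delivered else 0.0

  print(f"Blocked by technical measures: {pct_blocked:.1f}%")
  print(f"Clicked (of those delivered):  {pct_clicked:.1f}%")
  print(f"Reported (of those delivered): {pct_reported:.1f}%")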

Similarly, for metrics to show real improvement, they need to be presented in terms of the real risk, not just as a bland figure. One commonly quoted metric is dwell time: the time an attacker is inside the system before being detected. This is typically reported as averaging anywhere from 100 to 200 days across all organisations.


Reducing an organisation's average dwell time from 100 days to 10 may seem a huge improvement, but in terms of damage to the company there may be little difference. When designing any measurement, it is important to define its value and its parameters before deciding how the data will be collected.
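
For reference, a plain dwell-time figure is simply the gap between the estimated compromise date and the detection date, averaged across incidents, which is what makes it such a blunt number on its own. The sketch below uses hypothetical breach records with made-up dates.

  from datetime import datetime
  from statistics import mean

  # Hypothetical breach records: estimated compromise and detection dates
  breaches = [
      {"compromised": datetime(2023, 1, 3),  "detected": datetime(2023, 4, 20)},
      {"compromised": datetime(2023, 2, 14), "detected": datetime(2023, 2, 21)},
      {"compromised": datetime(2023, 3, 1),  "detected": datetime(2023, 6, 9)},
  ]

  dwell_days = [(b["detected"] - b["compromised"]).days for b in breaches]
  print(f"Average dwell time: {mean(dwell_days):.0f} days")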

In a reasonably sophisticated attack, after gaining a foothold on an initial host, the attacker will move laterally through the network to maintain persistence, and then work towards achieving their objective.

The target should be to identify the attack before the lateral movement phase, because after this, it will be much more difficult to clean up. Based on a recent report on the most sophisticated attackers, this gives you as little as 20 minutes. 

The next best target should be to stop the attack before the attacker achieves their objective, for example exfiltration of data. This is likely to be no more than a few days after the initial attack. The percentages of attacks stopped before the attacker moves laterally, and before they achieve their objective, are therefore much more meaningful metrics than a simple dwell-time figure.
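
A minimal sketch of such phase-based metrics, assuming hypothetical incident records in which each attack is tagged with the phase at which it was contained (the phase labels are purely illustrative):

  # Hypothetical incidents tagged with the phase at which they were contained
  incidents = [
      {"id": 1, "contained_at": "initial_foothold"},
      {"id": 2, "contained_at": "lateral_movement"},
      {"id": 3, "contained_at": "objective_reached"},
      {"id": 4, "contained_at": "initial_foothold"},
  ]

  total = len(incidents)

  # Stopped before the attacker moved laterally
  before_lateral = sum(1 for i in incidents if i["contained_at"] == "initial_foothold")

  # Stopped before the attacker achieved their objective (e.g. data exfiltration)
  before_objective = sum(1 for i in incidents if i["contained_at"] != "objective_reached")

  print(f"Stopped before lateral movement: {100 * before_lateral / total:.0f}%")
  print(f"Stopped before objective:        {100 * before_objective / total:.0f}%")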

Hopefully, most organisations will not be targeted by many sophisticated attacks, but the same principle can be applied to threats such as ransomware. For example, what percentage of ransomware attacks are stopped before they cause damage, and how long does it take to bring damaged hosts back online?
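
The same calculation carries over directly, for example along these lines (again with purely hypothetical incident records and figures):

  from statistics import mean

  # Hypothetical ransomware incidents: stopped before causing damage, and,
  # if not, hours taken to bring damaged hosts back online
  ransomware_incidents = [
      {"stopped_before_damage": True,  "restore_hours": 0},
      {"stopped_before_damage": False, "restore_hours": 36},
      {"stopped_before_damage": True,  "restore_hours": 0},
      {"stopped_before_damage": False, "restore_hours": 12},
  ]

  total = len(ransomware_incidents)
  stopped = sum(1 for i in ransomware_incidents if i["stopped_before_damage"])
  restore_times = [i["restore_hours"] for i in ransomware_incidents
                   if not i["stopped_before_damage"]]

  print(f"Ransomware attacks stopped before damage: {100 * stopped / total:.0f}%")
  if restore_times:
      print(f"Average time to bring damaged hosts back online: {mean(restore_times)} hours")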

In summary, there are many metrics you can use, but which you choose, how you measure them and how the results are presented will depend on the specific risks to the organisation and what you are trying to achieve with the results.

If you want a real picture of your cyber risk, you need to start from the risk assessment and set meaningful targets. If you want to demonstrate that a solution is providing value, you may choose a different approach. When designing a metric, clarity about its objective, and about what good looks like, is critical to ensuring its operational value.
