# Calculating the risk equation

## Risk management

It is not possible to protect information assets until you know what they are and what they are worth. Mike Barwise lays down some ground rules.

In an ideal world, information security management would be a simple matter of applying all relevant security measures - immediately. But in reality our resources are limited, so we must select a subset of the available options to maximise real security within a budget. We have to prioritise, and the tool that enables us to do this is risk assessment.

Sadly, risk assessment is the most misunderstood and ill-performed component of information security, and corporate security implementations are often poorly focused, for three main reasons:

• Failure to understand what risk really is

• Unreliable evaluation criteria

• Concentration on technologies at the expense of business issues.

We all use the word risk in everyday life. But when the weather forecaster mentions a "risk of rain" he is talking about a possibility. Climbing a long ladder is "risky", and in this instance it means dangerous. Neither of these is really risk: strictly speaking, risk is a combination of possibility and danger.

Vague classifications like these are a major contributor to the failure of IT risk assessments, but we continue to use them because they are convenient, and we are unaware how subjective our decision-making is. Although the quality of expert judgement has been widely researched in the context of large-scale capital projects, and found to be generally poor, professionals have not yet taken these findings on board.

Predictably, almost all the published IT risk management guidance stops short of describing how to establish the values you need to use in risk models. Measures of likelihood are generally built around statements such as "twice a week" or "once in three years", which lead us to confuse statistical probability with the realities of event occurrence in the operational context. A general lack of statistical expertise among practitioners causes us to miss the point that statistical probability is not the whole story.

Basically, there are two possible scenarios: either a given breach occurs or it does not - a bit like tossing a coin. Given a perfectly balanced coin, over a large number of tosses we would expect heads to come up half the time (a probability of 0.5). But this says absolutely nothing about whether heads will come up next.
Similarly, a probability of a given IT security breach occurring "once in five years" does not mean it will not happen twice next Tuesday.
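This point can be made concrete with a small sketch (an illustration, not taken from the article). If breach arrivals are modelled as a Poisson process with a long-run rate of once in five years, the probability of two or more breaches on any single day is tiny, but it is never zero:

```python
import math

# Illustrative assumption: breaches arrive as a Poisson process
# with a long-run rate of once in five years.
rate_per_year = 1 / 5
lam = rate_per_year / 365          # expected number of breaches in one day

# Probability of two or more breaches on any given day:
# P(N >= 2) = 1 - P(N=0) - P(N=1) = 1 - e^(-lam) * (1 + lam)
p_two_or_more = 1 - math.exp(-lam) * (1 + lam)
print(f"{p_two_or_more:.2e}")      # small, but not zero
```

The long-run average tells us nothing certain about next Tuesday; it only bounds how surprised we should be.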

And, unlike the coin, which does not change its fairness while we gamble, IT security threats and vulnerabilities are constantly changing in both nature and prevalence. If we are to use the likelihood of a threat as a component of prioritisation strategy we need a large amount of reliable evidence of past occurrences to determine how its successes and failures have been distributed across targets and in time, as well as straight probability figures.

Unfortunately, this information does not exist for most threats and vulnerabilities, mainly because the IT community has not kept good enough records. Even in the case of Code Red, which is relatively well documented, information is decreasing as time passes and attack patterns decrease in intensity.

Even if this were not the case, using technical threats and vulnerabilities as the sole basis for risk assessment is not satisfactory. One can get bogged down in comparisons of attacks and their fixes and it is easy to fall into the trap of thinking in crude terms such as "firewall security" or "anti-virus", forgetting the business purposes for which the technologies are deployed. This often leads to extreme technical precautions which are completely undermined from the business perspective by omissions in other areas. The ultimate aim is to protect business information assets, not to secure the perimeter or the server, so we need to know where business value is concentrated.

We must abandon technocentric risk models based on threat-likelihood and establish business-oriented security priorities on the basis of relatively solid criteria. Only once this has been done should we start to investigate the technical issues relevant to our identified priorities.

The first step is to identify our information assets by examining all our business processes, and to determine how each asset is handled at every stage of each process. The media and infrastructure components involved in those processes must be identified and documented (including phone calls, faxes and Post-it notes).

Then it is possible to assign a value to each information asset by establishing the financial loss that would result from a total breach of that asset in the context of the given business process. With the aid of legal advice you can then establish, say, five broad categories of risk and identify which information assets represent the greatest potential losses. These then become the highest security priorities.
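As a sketch of this valuation step (the asset names, loss figures and band thresholds below are invented assumptions, not figures from the article), the banding might look like this:

```python
# Hypothetical loss estimates (GBP) from a total breach of each asset.
asset_losses = {
    "customer database": 2_000_000,
    "payroll records": 400_000,
    "marketing plans": 60_000,
}

# Five broad loss bands, lowest threshold first; figures are illustrative.
bands = [(10_000, "negligible"), (100_000, "low"),
         (500_000, "medium"), (2_000_000, "high")]

def loss_band(loss):
    """Return the band label for a monetary loss estimate."""
    for threshold, label in bands:
        if loss < threshold:
            return label
    return "critical"

# Assets representing the greatest potential losses come first.
priorities = sorted(asset_losses, key=asset_losses.get, reverse=True)
for asset in priorities:
    print(asset, loss_band(asset_losses[asset]))
```

The precision lies in the financial estimates, not the labels: the bands are only a presentation layer over hard loss figures.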

The next stage is to map those key assets back to the media and infrastructure components they share, in order to calculate the aggregate value of each. Only then do we start to think technically and investigate the threats - not just hackers and viruses: more information is jeopardised daily by bad business procedures than by all the Internet threats combined. Once the mapping is complete, we can prioritise security measures on the basis of aggregate potential losses for groups of assets that map to securable entities such as local area network segments, media or business units.
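The mapping and aggregation step above can be sketched as follows; the asset names, component names and values are hypothetical assumptions chosen for illustration:

```python
from collections import defaultdict

# Hypothetical asset values established in the valuation step (GBP).
asset_value = {"customer database": 2_000_000,
               "payroll records": 400_000,
               "marketing plans": 60_000}

# Which securable entities (LAN segments, servers, media) each asset touches.
asset_components = {
    "customer database": ["lan-segment-A", "db-server"],
    "payroll records": ["lan-segment-A", "hr-fileserver"],
    "marketing plans": ["lan-segment-B"],
}

# Each component inherits the value of every asset that passes through it.
aggregate = defaultdict(int)
for asset, components in asset_components.items():
    for component in components:
        aggregate[component] += asset_value[asset]

# Highest aggregate exposure first: these are the security priorities.
for component, value in sorted(aggregate.items(), key=lambda kv: -kv[1]):
    print(component, value)
```

In this toy example the shared LAN segment carries more aggregate value than any single asset, which is exactly the kind of concentration the mapping is meant to expose.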

This method will work. The only question is whether it can, strictly speaking, be called risk assessment. It is probably closer to "requirements analysis". This is a programme, not a project. There must be a continuous process of incremental review and improvement to keep the prioritisation database up to date, otherwise it will cease to be reliable. Because the process is methodical and based on facts, it can be adjusted to correct errors as they are identified.

Buying off-the-shelf risk management software and ticking the boxes will output some pretty reports, but how will you know it is delivering real security where it is needed?

### How not to calculate risk
In IT security the risk of a breach would be the financial loss likely to be incurred as the result of a breach, multiplied by the probability of that breach occurring.
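A minimal worked calculation of this definition, with invented figures:

```python
# Illustrative assumptions, not figures from the article.
loss_if_breached = 250_000      # estimated financial loss (GBP)
annual_probability = 0.04       # chance of the breach occurring in a year

# Risk = loss x probability: an expected loss per year.
annualised_risk = loss_if_breached * annual_probability
print(annualised_risk)          # 10000.0
```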

However, IT security professionals work to many alternative, more complicated and less precise models of risk.

The most prevalent is to subdivide risk into three elements: threat, vulnerability and likelihood. Each is evaluated using subjective criteria, and they are then combined in some empirical way to arrive at a notional value for risk.

Threat and vulnerability are typically evaluated intuitively using verbal hazard scales such as low, medium, high. Because of their subjectivity, these categories are extremely difficult to assign to threats or vulnerabilities, or indeed, to interpret with any degree of confidence.
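The kind of notional scoring being criticised can be sketched as below. Note that the numeric mapping and the multiplicative combination rule are arbitrary assumptions chosen for illustration, which is precisely the problem with this approach:

```python
# Verbal hazard scales mapped to arbitrary numbers.
scale = {"low": 1, "medium": 2, "high": 3}

def notional_risk(threat, vulnerability, likelihood):
    """Combine three subjective ratings into a single 'risk' score."""
    return scale[threat] * scale[vulnerability] * scale[likelihood]

# A high threat with low likelihood scores below a uniformly medium one;
# nothing in the method tells us whether that ranking is meaningful.
print(notional_risk("high", "medium", "low"))       # 6
print(notional_risk("medium", "medium", "medium"))  # 8
```

Changing the mapping (say, high = 5) or the combination rule (addition instead of multiplication) reorders the results, yet no business fact justifies one choice over another.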

A first-class reference covering the pitfalls in risk decision making is:

• *Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis*, M Granger Morgan and Max Henrion, Cambridge University Press, 1992. ISBN 0521427444

This was last published in January 2003
