It is not possible to protect information assets until you know what they are and what they are worth. Mike Barwise lays down some ground rules
In an ideal world, information security management would be a simple matter of applying all relevant security measures - immediately. But in reality our resources are limited, so we must select a subset of the available options to maximise real security within a budget. We have to prioritise, and the tool that enables us to do this is risk assessment.
Sadly, risk assessment is the most misunderstood and ill-performed component of information security, and corporate security implementations are often poorly focused, for three main reasons:
- Failure to understand what risk really is
- Unreliable evaluation criteria
- Concentration on technologies at the expense of business issues.
We all use the word risk in everyday life. But when the weather forecaster mentions a "risk of rain" he is talking about a possibility. Climbing a long ladder is "risky", and in this instance it means dangerous. Neither of these alone is really risk: strictly speaking, risk is a combination of the two - possibility and danger.
Uncertain classifications are a major contributor to the failure of IT risk assessments, but we continue to use them because they are convenient, and we are unaware how subjective our decision-making is. Although the quality of expert judgement has been widely researched in the context of large-scale capital projects, and found to be generally poor, professionals have not yet taken these findings on board.
Predictably, almost all the published IT risk management guidance stops short of describing how to establish the values you need to use in risk models. Measures of likelihood are generally built around statements such as "twice a week" or "once in three years", which lead us to confuse statistical probability with the realities of event occurrence in the operational context. A general lack of statistical expertise among practitioners causes us to miss the point that statistical probability is not the whole story.
Basically, there are two possible scenarios: either a given breach occurs or it does not - a bit like tossing a coin. Given a perfectly balanced coin, over a large number of tosses we would expect heads to come up half the time (a probability of 0.5). But this says absolutely nothing about whether heads will come up next.
Similarly, a probability of a given IT security breach occurring "once in five years" does not mean it will not happen twice next Tuesday.
And, unlike the coin, which does not change its fairness while we gamble, IT security threats and vulnerabilities are constantly changing in both nature and prevalence. If we are to use the likelihood of a threat as a component of prioritisation strategy we need a large amount of reliable evidence of past occurrences to determine how its successes and failures have been distributed across targets and in time, as well as straight probability figures.
Unfortunately, this information does not exist for most threats and vulnerabilities, mainly because the IT community has not kept good enough records. Even in the case of Code Red, which is relatively well documented, the available information is dwindling as time passes and attack activity tails off.
Even if this were not the case, using technical threats and vulnerabilities as the sole basis for risk assessment is not satisfactory. One can get bogged down in comparisons of attacks and their fixes and it is easy to fall into the trap of thinking in crude terms such as "firewall security" or "anti-virus", forgetting the business purposes for which the technologies are deployed. This often leads to extreme technical precautions which are completely undermined from the business perspective by omissions in other areas. The ultimate aim is to protect business information assets, not to secure the perimeter or the server, so we need to know where business value is concentrated.
We must abandon technocentric risk models based on threat-likelihood and establish business-oriented security priorities on the basis of relatively solid criteria. Only once this has been done should we start to investigate the technical issues relevant to our identified priorities.
The first step is to identify our information assets by examining all our business processes, and to determine how each asset is handled at every stage of each process. The media and infrastructure components involved in those processes must be identified and documented (including phone calls, faxes and Post-it notes).
Then it is possible to assign a value to each information asset by establishing the financial loss that would result from a total breach of that asset in the context of the given business process. With the aid of legal advice you can then establish, say, five broad categories of risk and identify which information assets represent the greatest potential losses. These then become the highest security priorities.
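As a sketch of that banding step (the five category labels and the loss thresholds below are hypothetical, and in practice would be set with legal and financial advice):

```python
# Hypothetical loss bands in currency units, highest threshold first
BANDS = [
    (1_000_000, "critical"),
    (250_000, "high"),
    (50_000, "medium"),
    (10_000, "low"),
    (0, "negligible"),
]

def loss_category(potential_loss):
    """Place an asset in one of five broad categories by potential loss."""
    for threshold, label in BANDS:
        if potential_loss >= threshold:
            return label

print(loss_category(300_000))  # "high"
```

Because the bands are defined in money terms rather than verbal hazard scales, two assessors given the same loss estimate will always assign the same category.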
The next stage is to map those key assets back to the media and infrastructure components they share, in order to calculate the aggregate value of each. Only then do we start to think technically and investigate the threats - not just hackers and viruses: more information is jeopardised daily by bad business procedures than by all the Internet threats combined. Once the mapping is complete, we can prioritise security measures on the basis of aggregate potential losses for groups of assets that map to securable entities such as local area network segments, media or business units.
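The mapping and aggregation stage can be sketched as follows (all asset names, component names and loss figures here are invented for illustration):

```python
# Map each information asset to the potential loss from a total breach,
# and to the securable components (LAN segments, media, business units)
# through which it passes. All names and figures are hypothetical.
assets = {
    "customer_db":  {"loss": 500_000, "components": ["lan_segment_a", "backup_tapes"]},
    "payroll_file": {"loss": 150_000, "components": ["lan_segment_a"]},
    "fax_orders":   {"loss": 20_000,  "components": ["fax_machine"]},
}

# Aggregate the potential loss carried by each securable component
aggregate = {}
for info in assets.values():
    for comp in info["components"]:
        aggregate[comp] = aggregate.get(comp, 0) + info["loss"]

# Prioritise components by the value concentrated on them
for comp, value in sorted(aggregate.items(), key=lambda kv: -kv[1]):
    print(f"{comp}: {value}")
```

The output is a ranked list of securable entities by aggregate potential loss, which is exactly the prioritisation basis the article proposes: technical threat investigation then starts from the top of that list.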
This method will work. The only question is whether it can, strictly speaking, be called risk assessment. It is probably closer to "requirements analysis". This is a programme, not a project. There must be a continuous process of incremental review and improvement to keep the prioritisation database up to date, otherwise it will cease to be reliable. Because the process is methodical and based on facts, it can be adjusted to correct errors as they are identified.
Buying off-the-shelf risk management software and ticking the boxes will output some pretty reports, but how will you know it is delivering real security where it is needed?
How not to calculate risk
In IT security, the risk of a breach is properly the financial loss likely to be incurred as a result of that breach, multiplied by the probability of the breach occurring.
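In code form, that simple expected-loss model is a single multiplication (the figures below are purely illustrative):

```python
def expected_loss(breach_loss, breach_probability):
    """Simple risk model: expected loss = financial impact x probability.
    Both inputs are estimates; the figures used here are illustrative."""
    return breach_loss * breach_probability

# A breach costing 100,000 with a 5% chance of occurring in the period
print(expected_loss(100_000, 0.05))  # 5000.0
```

The difficulty, as the article argues, is not the arithmetic but obtaining defensible values for the two inputs.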
However, IT security professionals work to many alternative, more complicated and less precise models of risk.
The most prevalent is to subdivide risk into three elements: threat, vulnerability and likelihood. Each is evaluated using subjective criteria, and they are then combined in some empirical way to arrive at a notional value for risk.
Threat and vulnerability are typically evaluated intuitively using verbal hazard scales such as low, medium, high. Because of their subjectivity, these categories are extremely difficult to assign to threats or vulnerabilities, or indeed, to interpret with any degree of confidence.
Recommended further reading
A first-class reference covering the pitfalls in risk decision making is:
- Uncertainty, Morgan and Henrion. Cambridge, 1992.
Others worth reading include:
- Information Security Risk Assessment - Practices of Leading Organizations, www.gao.gov/special.pubs/ai00033.pdf
- Choice under Risk and Uncertainty, cepa.newschool.edu/het/essays/uncert/choicecont.htm
- Handbook of Information Security Management (Risk Management and Business Continuity Planning), www.cccure.org/Documents/HISM/223-228.html
Mike Barwise is a consultant at www.ComputerSecurityAwareness.com