Risk assessment - consider the consequences

There’s an interesting new blog on the subject of Information Security Risk by Chris Hayes that you can read at http://risktical.com/.

One of the first posts ponders the question “What is risk?” and comes up with the following definition:

The probability of a threat overcoming security controls resistance to exploit a vulnerability that results in a loss.

That’s a good place to start, and I usually go on to describe the following formula:

risk = threat * vulnerability * cost (where cost is the sum of operational + reputational + revenue costs)
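The formula above can be sketched in a few lines of code. This is a minimal illustration, not a standard calculation: the treatment of threat and vulnerability as probabilities between 0 and 1, and the example figures, are assumptions made for this sketch.

```python
def risk(threat: float, vulnerability: float,
         operational: float, reputational: float, revenue: float) -> float:
    """Expected loss: threat * vulnerability * total cost, where cost
    is the sum of operational, reputational and revenue costs."""
    cost = operational + reputational + revenue
    return threat * vulnerability * cost

# Example: a 10% chance of the threat occurring, a 50% chance it
# overcomes our controls, and 100,000 in total cost gives an
# expected loss of 5,000.
print(risk(0.10, 0.50, 60_000, 30_000, 10_000))
```

Multiplying by probabilities is what makes very unlikely threats vanish from this formula, which is exactly the limitation discussed below.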

There is an alternative way of considering risk which is to say that it is the very process of defining exactly WHAT you are trying to protect, from WHOM you are trying to protect it and most importantly, HOW you are going to protect it (see here).

But what about catastrophic risks: for instance, business continuity in the wake of a terrorist attack, or total loss of the Internet for an extended period? If you consider some of the improbable threats, then under the formula above you might be right to dismiss the risk altogether. But let’s come back to an example I quoted some time ago: the case of the town council that banned hanging baskets after it ruled there was a risk they could fall from lampposts and injure the public.

The risk was assessed not on likelihood (there was no empirical evidence of head injuries from falling flower baskets) but on the potential consequences. In other words, as unlikely as it may seem, it could happen, and if it does somebody will be seriously injured and there will be expensive consequences as a result.

We could argue that if we were to consider everything in this way we would never get out of bed in the morning, because we might tread on the cat. But it’s about what we are prepared to accept as risk. The town council decided it wasn’t prepared to accept any risk of its hanging baskets falling onto heads.

So, consider the consequences and not always the likelihood.

Join the conversation



Thanks for the plug Stuart – glad you stumbled across the blog. I would submit the following: the risk equation you defined (risk = threat * vulnerability * cost) does not accurately reflect the frequency of loss, control resistance, or which aspect of “threat” is in play. I use the FAIR risk taxonomy and methodology. Within the FAIR taxonomy, vulnerability is derived by comparing the capability of our threat community to our control resistance. Threat event frequency is how often we expect a threat to actually attempt to attack our asset. Loss event frequency is derived from vulnerability and threat event frequency. Thus my claim that risk = loss event frequency * expected loss ($).

Catastrophic risks are often considered tail-risk events: worst-case loss events with a low loss event frequency but a high impact. I think it is wise for information risk assessors and decision makers to differentiate between expected risk values and tail risk values (assuming a risk scenario warrants it). I have seen senior-level decision makers make mitigation funding decisions to reduce tail risk while not necessarily reducing expected loss. A simple example of expected loss vs. worst-case loss would be a PCI scenario. A non-compliant company may get fined a few thousand a month, vs. a company that suffers a breach, exposes a lot of records, and all of a sudden faces hundreds of thousands of dollars in fines, notification, litigation, etc…
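The FAIR-style calculation described in the comment above can be sketched as follows. The figures and the PCI scenario numbers are illustrative assumptions for this sketch, not outputs of FAIR’s calibrated estimation process.

```python
def loss_event_frequency(threat_event_frequency: float,
                         vulnerability: float) -> float:
    """Loss events per year: attack attempts per year times the
    probability an attempt overcomes our control resistance."""
    return threat_event_frequency * vulnerability

def risk(lef: float, expected_loss: float) -> float:
    """risk = loss event frequency * expected loss ($ per year)."""
    return lef * expected_loss

# PCI illustration: steady monthly fines (expected loss) vs a rare
# breach carrying large fines, notification and litigation costs
# (tail risk). Frequencies and dollar figures are made up.
expected = risk(loss_event_frequency(12, 1.0), 3_000)    # fined monthly
tail = risk(loss_event_frequency(0.02, 1.0), 500_000)    # rare breach
print(expected, tail)
```

Note how a very low frequency multiplied by a very large impact can still dominate a mitigation decision, which is the expected-vs-tail distinction the comment draws.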
I have been a service tech for some 29 years and there has never been a risk assessment done on my job. I work on office equipment such as faxes, printers, copiers etc. I also use the transport system to get around, so I am not in one place of work for very long. How does this affect my safety and others, and what are the consequences of the above? This not only involves me but some 150 other techs in the same company. Thank you, Colin