Although most companies have information security measures in place, few have carried out a proper risk analysis. John Kavanagh asks the experts how to perform a systematic risk assessment
It seems remarkable, in a world hugely dependent on IT, that risk assessment and management can be described by analyst firm Gartner as a subjective, trial-and-error process. Yet Gartner is not alone in this view.
“There are information risk analysis textbooks and methodologies and standard codes of practice, some going back 15 years or more. But until relatively recently very few organisations really took this bull by the horns and tackled it in a systematic way,” says Andrew Wilson, senior project manager at user body the Information Security Forum.
“There has been some isolated use of risk analysis, but many use it in a piecemeal way, perhaps only on high-risk projects, and not necessarily following a methodology. You really only get big value from it if you do it right across the organisation.
“The trouble is that people open the book and it looks really complex and a lot of work. How do we start, how do we roll this out, how does it tie in with what we do? There has been a lot of doubt about the whole thing and uncertainty about how to start.”
There is also confusion, says Paul Hansford, a member of the European advisory board of the International Information Systems Security Certification Consortium, known as (ISC)2, and a principal consultant at Siemens Insight Consulting.
That confusion is due to many factors, such as different people using terms in different ways, he says. “One person’s risk is another person’s impact. The IT manager might say the big threat is viruses and hackers, and the impact is the system being unavailable until it is cleaned.
“But to the business that impact is a risk: the risk of not being able to do business. And a security specialist might describe what the IT manager calls a virus threat as a vulnerability instead.
“Things are also complicated by the changing nature of the IT threats or vulnerabilities. Hacking is now much more about making money than about personal notoriety, for example.
“Perceptions change too. After 9/11 a lot of companies shelved some security projects to put much more emphasis on business continuity.”
Confusion over the changing threats is underlined by several studies.
“There has been a vast drop in the number of new viruses and worms: ‘only’ one in every 91 e-mails is now viral,” says Graham Cluley, senior technology consultant at security specialist Sophos.
“But the overall level of malware is increasing, with Trojans, spyware and phishing now the preferred methods of attack. Our research shows that Trojans now outnumber viruses and worms by four to one, compared with two to one a year ago.
“These threats are designed to extract money or sensitive data. And with smaller but strategically targeted attacks the chances of tricking users are also increased.”
The fast-changing nature of threats is also confirmed by David Emm, senior technology consultant at anti-virus software developer Kaspersky Lab.
“Malicious programs are now routinely created to make money illegally,” he says.
“The nature of the threat – especially the motivation – has changed considerably in the past two years. We’ve seen a transition from computer vandalism to crime.”
More than 90% of IT managers surveyed by web security specialist Websense say their systems have been infected by spyware, and 80% say staff have been targets for phishing attempts to get identity details.
Phishing is now aimed at extracting information for big-time attacks on the organisation rather than on the individual. The changing nature and perception of the threats is reflected in the fact that 97% of IT managers are now confident to varying degrees in their defences against viruses; even so, 46% have been infected.
Protective measures are familiar to IT managers these days. They range from anti-virus and firewall software to deperimeterisation, the approach being worked on by the Jericho Forum.
This is a group of mainly big UK user companies, which aims to move in stages to the ultimate goal of authenticating users and data, encrypting all communication and moving intrusion detection from the network perimeter to application and user levels.
In between there are products for network monitoring and exception reporting; patch management software; penetration testing services, which test both systems and user gullibility; and products to block or monitor the use of portable plug-in storage devices, bearing in mind the balance between security and user productivity. There is also renewed discussion of switching from desktop PCs back to the client-server idea: simpler devices connected to central machines, or to blade servers locked away and managed centrally.
All this needs to be backed up by a clear and simple policy that is continually hammered home to staff – and enforced, say many experts. Enforcement might need to include restricting staff use of web-based e-mail services, instant messaging and removable storage; it might need disciplinary measures.
“The changing nature of attacks means it is now more important than ever to implement strict policy, with clear guidelines, and to educate users about safe computing and the risks involved,” says Cluley.
Wilson adds, “Our member surveys suggest that the big threats are not the well publicised ‘sexy’ things like hacking, but still mainly the same old internal goof-ups: users opening e-mail attachments, IT staff amending applications or applying patches that have unforeseen effects.”
There is another issue here, which further complicates risk assessment. Wilson speaks for several experts when he says, “One danger is that there is a shortage of hard data on the level of the various threats. The media and suppliers fill the gap with their own interpretations and their desire to sell products.”
Tom Newton, product manager at internet security specialist SmoothWall, says, “Are some risks getting exaggerated? Of course they are. Any risk that has some possible mitigation will get exaggerated by somebody who is selling access to that mitigation.
“When has risk not been exaggerated? Look at bird flu: we are all going to die. Year 2000? Terrorism? We all do it, and humans are notoriously poor at evaluating risk on anything more than a day-to-day basis. But just because a risk is exaggerated it does not mean it is not there. It is our job as security professionals to coldly assess risk and take appropriate action.”
But companies’ seeming reluctance to even start proper risk assessment is highlighted in no uncertain terms by Phil Cracknell, director of security consulting at IT services group Capgemini.
“Risk is currently handled in a haphazard and reactive way, with little planning for the bigger picture and for unified governance,” he says. “I have consulted and spoken to more than 200 companies on risk management in the past two years and found few that are adequately prepared.
“In most UK businesses risk is homeless. There is no structure in a business to support a unified risk programme, no one person responsible for all risks and certainly no strategic plans or budget. It all seems to be event-driven.
“In addition, risks are all too often seen as IT security issues, when many are down to business processes, people and non-IT issues.”
Business must certainly be the starting point for risk analysis, according to Hansford.
“The starting point must be business impact and business priorities, and then you work back from there,” he says.
“Is the issue system unavailability or loss of confidential data, for example? Then we can look at what the threats or vulnerabilities might be: are they viruses, hackers and so on? Then we can look at what we need to do, and set priorities.”
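As a rough illustration of the impact-first approach Hansford describes, a risk register can be ranked by working back from business impact. The sketch below is a hypothetical example: the risks, threats and the impact and likelihood scores are invented for illustration, not the output of Cramm or any formal methodology.

```python
# Minimal illustrative risk register: each entry pairs a business impact
# with a possible threat, scored on hypothetical 1-5 scales.
risks = [
    {"risk": "system unavailability", "threat": "virus outbreak",
     "impact": 5, "likelihood": 3},
    {"risk": "loss of confidential data", "threat": "targeted Trojan",
     "impact": 5, "likelihood": 2},
    {"risk": "defaced website", "threat": "opportunist hacker",
     "impact": 2, "likelihood": 4},
]

# A simple combined score: impact x likelihood.
for r in risks:
    r["score"] = r["impact"] * r["likelihood"]

# Work back from the business: rank by impact first, then by overall score.
ranked = sorted(risks, key=lambda r: (r["impact"], r["score"]), reverse=True)

for r in ranked:
    print(f'{r["risk"]:26} {r["threat"]:20} score={r["score"]}')
```

The point of even a toy register like this is the ordering of the questions, not the arithmetic: the business impact is identified before anyone asks which virus or hacker might cause it.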
Like Wilson, Hansford can understand people being put off by the formal approaches to risk analysis, but he also agrees that they just have to get to grips with these methods.
“An established methodology or software tool like Cramm can look like using a sledgehammer to crack a nut, but when you understand it, you have a standard approach that you can share with others across the organisation,” says Hansford.
“You can all look at risk in the same way and use a common language. In addition, if you do not use a standard methodology or tool, others may not have confidence in the way you have done it, or the rigour of your process.
“A software tool, like any computer program, will take your input and provide output, including recommendations and suggested countermeasures – but although the established tools are good, they do not remove the need for you to gather the input and make the decisions from the output.”
Wilson agrees. “Risk analysis can highlight vulnerabilities, then you have to decide the priority, and even perhaps whether you can live with a particular one if it is negligible – but it is not for the IT or security person to make that decision,” he says.
“It is the duty of IT or security to make the business aware of a risk and the level of it, but it is the business person who has to decide whether or not to accept that risk.
“The danger of doing risk analysis without a methodology – as it has usually been done, if at all – is that you create hysteria and misinformation. There is enough of that already. You then lose internal credibility as the IT or security practitioner.”