Why insurers won't gamble on infosecurity cover


Danny Bradbury

Infosecurity failures can cost millions, but many insurers are reluctant to take the risk, says Danny Bradbury

Cars are often used as an analogy for the computing industry. If your car arrived without locks, or was shipped with an airbag that stopped working after 90 days unless you paid a subscription, the industry would be in uproar. And yet computer users meekly accept similar conditions all the time.

But there is another way in which cars are different from computers - it is compulsory to insure the former against harming someone, but difficult to insure the latter. And yet, when computer security is breached, many people can get hurt, including the companies that operate the system, their shareholders and customers.

"The industry is well aware of the fact that it has to provide insurance in this domain, but there are few companies willing to stick their neck out," says Hugh Penri-Williams, who worked in the insurance industry for 15 years and is newly appointed senior security adviser to Accenture in France, having previously been chief infosecurity officer at Alcatel. But why is there such reluctance?

Lack of drive

"Insurance companies run into a lack of data," says Hemantha Herath, associate professor in the department of accounting at Brock University, St Catherines, Canada. "We've been driving cars for a century, but using computers in anger for just a few decades. Actuaries have collected substantial data on criteria such as age, geography and occupation that can be applied to other domains, where threats change relatively slowly. Not so with computers, where things move blindingly quickly, and where nothing is discrete. It's difficult to do, especially when there are a lot of network connections and it is all interrelated."

We should not underestimate the interdependence of modern computer systems when trying to assess the economic cost of securing them - or failing to do so, says David Lacey, founder of the Jericho Forum and former director of information security at Britain's Royal Mail. "Originally, all the computer systems were separate. They weren't connected with networks. So you did a risk assessment for each one and decided the security measures." Then, he says, everyone connected to the same network, and some form of standardisation was needed: "At that point, we came out with BS7799."

Now things are changing yet again. Cyber attacks are becoming more sophisticated, says Lacey. Malicious parties know what they want - your customers' credit card details, the blueprint for your next product, or the chemical compound for the drug you are patenting. "So, on top of the general controls, you need this ring-fence around your specific data," he adds. That means understanding what that data is, where it is, and all the possible routes to get to it. Suddenly, risk evaluation and impact assessment become more difficult, even for companies trying to budget for security. No wonder insurance companies are nervous.

As companies struggle to understand these parameters, the goalposts continue to move. "Even if you feel you've covered all the bases, companies are so locked into their suppliers, their customers and others, that it is extremely difficult to predict what is going to happen on that front," says Penri-Williams. Companies buy their software from third parties, some of which test their software more thoroughly and patch more frequently than others. Telecommunications companies run companies' networks for them. Key processes are smeared across yet more systems, operated by a whole supply chain of outsourcing providers.

And the ability to farm out processes is increasing, thanks to growing standardisation of all layers of the networking stack, from IP through to XML-based web services. A cynic might long for the simple days when you bought your computers and software from just one mainframe supplier.

Random numbers

Computing systems are becoming increasingly transparent, but the departments that use them retain a depressing opacity that can further hinder the evaluation of security budgets. The chief security officer doesn't have all the answers, says Barry Horowitz, professor of systems and information engineering at the University of Virginia and former chief executive of the Mitre Corporation, a US government-funded technology researcher. To evaluate the level of security investment required, a company has to make assumptions and then tie values to them, he says. Only then can rational decisions be made. Otherwise, policy makers are simply whistling in the dark. The problem is that such assumptions are scattered around the organisation.

Getting at the required data can be a challenge, says Lacey. "To manage security, you'd need an intelligence system of your own," he adds. "Do you know what a particular department does when a laptop is lost? How many are they losing a month? Without knowing, say, what make of car their salespeople are using, you may not be able to research the fact, discussed on enthusiast bulletin boards, that thieves have found and exploited a vulnerability in that car's central locking system. Without that data, evaluating the cost of fixing the problem becomes difficult."

Horowitz says he has "never been at a meeting where the lawyer, the project manager, the money maker, the R&D investment group and the cyber security guy are all present". His approach involves collaborative web tools that enable nominated people from each department to gather and input that information more quickly. In a world where threats evolve rapidly, that is an important factor, he argues.

If a company's assumptions and the values attached to them are available, it is in a reasonable position to document its risks, and budget for the most critical requirements, says Penri-Williams. "You need to do it with a particular methodology," he adds. Several such methodologies exist.

Carnegie Mellon University's infosecurity-focused CERT programme provides Octave, which folds organisational and technological risks together. The Information Security Forum offers the Information Risk Analysis Methodologies (IRAM) system, which uses three phases - business impact assessment, threat and vulnerability assessment, and control selection.
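
As a rough illustration of how such a phased approach might be reduced to numbers, the short Python sketch below scores a handful of invented assets by business impact and by threat-and-vulnerability likelihood, then directs attention and budget to the largest exposures first. It is illustrative only: the assets, scales and thresholds are made up, and it is not the Octave or IRAM tooling itself.

# Illustrative only: a toy risk register in the spirit of the three phases
# described above (business impact, threat and vulnerability, control selection).
# Asset names, scores and thresholds are invented for the example.
from dataclasses import dataclass

@dataclass
class Risk:
    asset: str
    impact: int        # phase 1: business impact, scored 1 (low) to 5 (severe)
    likelihood: int    # phase 2: threat x vulnerability, scored 1 to 5

    @property
    def exposure(self) -> int:
        return self.impact * self.likelihood

risks = [
    Risk("customer card database", impact=5, likelihood=4),
    Risk("marketing microsite", impact=2, likelihood=3),
    Risk("payroll file transfer", impact=4, likelihood=2),
]

# Phase 3: direct controls (and budget) at the highest exposures first.
for r in sorted(risks, key=lambda r: r.exposure, reverse=True):
    action = "ring-fence and fund controls" if r.exposure >= 15 else "accept or monitor"
    print(f"{r.asset}: exposure {r.exposure} -> {action}")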

Such methodologies may tell you where to direct most of your budget, but may not tell you how much to spend. For that, companies must assess the potential losses arising from a security breach. "The thing to do is not to listen to the guff handed down about metrics," says Lacey. "Not everything is measurable. You don't know what the damage is. You can't see what the customers are thinking."

Issues include risk to reputation, the cost of dealing with disgruntled customers, potential lawsuits and technical remediation. And in some cases, costs are not linear, Lacey points out - replacing the thousandth stolen customer credit card record because you didn't encrypt your data may not cost the same as replacing the first.
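
A toy calculation makes Lacey's point about non-linearity concrete. In the Python sketch below, a per-record reissue and notification cost is joined by step changes (a regulatory fine, then litigation and churn) above invented thresholds, so the average cost per record rises as the breach grows. All figures are invented for illustration.

# A toy illustration of the point that breach costs are not linear in the
# number of records lost. The thresholds and amounts here are invented.
def breach_cost(records_lost: int) -> float:
    cost = records_lost * 5.0       # base: reissue and notification per record
    if records_lost > 500:          # regulator takes an interest
        cost += 50_000
    if records_lost > 10_000:       # class action and customer churn become plausible
        cost += records_lost * 20.0
    return cost

for n in (1, 1_000, 100_000):
    # average cost per record rises as the breach grows
    print(n, breach_cost(n), breach_cost(n) / n)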

At a premium

But companies still have to write some numbers down and, clearly, that is being done. Insurer Chubb's US arm, for example, offers a specialist cyber-security insurance policy for financial companies, covering six main areas - electronic theft, denial or impairment of e-service, loss of data during electronic communication, electronic vandalism, loss of revenue due to electronic extortion, and loss of revenue through fraudulent use of electronic signatures. It also covers data breach for e-commerce providers under its Safety'Net internet liability policy.

But today, the numbers used to back up online policies can be vague, compared with the numbers that actuaries play with in other sectors. Even those insurance firms that do tackle the problem do so in relatively simple ways. Safe Online, which brokers cyber-security risk on its own and via Lloyd's syndicates, divides customers into five categories. Large financial institutions are in the riskiest category, whereas a shop selling baked goods online as a small venture would be down the scale. Safe Online sends in auditors to evaluate security preparedness, says partner Chris Cotterell, and considers risks such as likely fines from regulators and customer notification costs. Then it makes its calculations and presents the premium.
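
In outline, that kind of sum can be expressed very simply. The Python sketch below places a customer in one of five risk bands, estimates the losses an audit might flag up (regulatory fines and customer notification), and applies a loading to reach an indicative premium. The bands, rates and figures are invented for illustration; they are not Safe Online's actual model.

# Simplified sketch of a banded premium calculation. All factors and figures
# are invented; this is not any insurer's real rating model.
RISK_BAND_FACTOR = {1: 0.5, 2: 0.8, 3: 1.0, 4: 1.5, 5: 2.5}  # 5 = large financial institution

def annual_premium(expected_fines: float,
                   records_held: int,
                   notify_cost_per_record: float,
                   breach_probability: float,
                   risk_band: int,
                   loading: float = 1.4) -> float:
    # expected loss from fines plus the cost of notifying every customer
    expected_loss = breach_probability * (expected_fines
                                          + records_held * notify_cost_per_record)
    return expected_loss * RISK_BAND_FACTOR[risk_band] * loading

# An online bakery versus a large financial institution
print(annual_premium(10_000, 5_000, 2.0, 0.05, risk_band=1))
print(annual_premium(2_000_000, 5_000_000, 2.0, 0.10, risk_band=5))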

And what of those tricky questions about interdependency - and the challenge of considering a loosely coupled but intricately connected set of technical elements spanning many different systems, both inside and outside the target company? "I think we take it as read that these things happen," says Cotterell. "We can't really put that into the mix of underwriting because then we'd never insure anyone."

Safe Online assumes, for example, that there will be a certain time-lag between a vulnerability appearing and a patch being released by the supplier, and accepts it as a risk of doing business. "We've probably got the rates right, because at the moment the underwriters aren't losing money," says Cotterell.

The methods for assessing risks and potential premiums in cyber-security are still relatively immature, but people are working on the problem. Herath and his wife and fellow academic, Tejaswini, took known information about security events and losses from ICSA Labs and analysed it using a statistical function known as a copula, which links the one-dimensional distributions of individual variables into a single joint, multi-variable distribution. They applied this to insurance pricing for cyber-security in a complex statistical paper, and came up with some numbers. This actuarial approach appears to be the closest thing the industry has to a formal methodology for calculating insurance premiums for cyber-security risk, but Herath says it relies on empirical data that is still scant and difficult to verify.
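
To give a flavour of the copula idea, the Python sketch below ties together two separately modelled quantities - incident frequency and severity - through a Gaussian copula, simulates an annual loss and adds a loading to arrive at an indicative premium. The marginal distributions, correlation and loading are invented, and the model is far cruder than the one in Herath's paper.

# Rough sketch of the copula approach: correlate separately modelled loss
# variables via a Gaussian copula, then price off the simulated aggregate.
# Distributions, correlation and loading are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims = 100_000

# Assumed correlation between incident frequency and severity per incident
corr = np.array([[1.0, 0.4],
                 [0.4, 1.0]])

# Gaussian copula: correlated normals -> uniforms -> chosen marginals
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=n_sims)
u = stats.norm.cdf(z)

frequency = stats.poisson.ppf(u[:, 0], mu=3)                 # incidents per year
severity = stats.lognorm.ppf(u[:, 1], s=1.2, scale=50_000)   # cost per incident

# Simplification: one severity draw is scaled by the incident count
annual_loss = frequency * severity
premium = annual_loss.mean() * 1.3   # expected loss plus a 30% loading
print(f"Expected annual loss: {annual_loss.mean():,.0f}, indicative premium: {premium:,.0f}")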

A difficult operation

While efforts continue to refine the projected cost of a security breach, the cost of preventing one also varies. There are different ways of solving the problem, which cost different amounts and could come from different budget lines. Is security risk a matter of capital expenditure or operational refinement?

Ross Anderson, professor of security engineering at Cambridge University's computer laboratory, puts it succinctly: "Usually, the best course of action involves effort, such as patching properly or training staff not to click on links.

"However, this is difficult for security managers because it means bothering people and so undermining their own career prospects. It's a lot easier to just buy a firewall, declare the problem solved, and hope for the best."

It is likely that many companies will shamble along, blissfully unaware of the underlying complexities, and spend as much on security as their annual budget allows. But those worrying about this intricate problem may have more sleepless nights ahead of them. "It's becoming less likely that insurance can be more a part of the action than it has been," says Horowitz.

Random security events are difficult enough to deal with, but cyber-attacks are becoming more organised and focused. The days of mass-mailed 'Iloveyou'-type malware are ending. Now criminals know what they want, who from, and are more intent on getting it. And, as has often been said when discussing rogue attacks: the defence has to win every time, but the attacker has to win just once.

In such circumstances, Horowitz worries that insurance companies will feel even less inclined to take the risk. "Who ever insured warfare?" he asks.

This article first appeared in Infosecurity magazine

