The evidence fails to support the doom-mongers
Hyperbole surrounds the threat of cyber terrorism. Since the September 2001 terrorist attacks in the US, we have been warned of cyber attacks against power systems, air traffic control, emergency services, banking and communications. But what evidence have we seen of cyber terror to date?
Western civilisation depends on critical national infrastructures. Most of this infrastructure is privately owned, and the profit motive has degraded security through the introduction of internet-capable supervisory control and data acquisition (Scada) systems. Moreover, the internet offers attackers a cloak of anonymity and global reach.
Internet worms have caused occasional network problems. In January 2003, the SQL Slammer worm disrupted Bank of America's ATM network. This was an unpredictable side-effect, certainly not the intent of the worm's author.
Numerous "script kiddies" find recreation in defacing websites. Some disingenuously label these acts of cyber terror; cyber graffiti would be a more accurate description. The nearest thing to cyber terror came in March 2000, when Vitek Boden infiltrated the systems of a sewage treatment plant in Australia. His attacks released an estimated 265,000 gallons of untreated sewage into local water courses. He was fined the clean-up costs of £5,340.
An act of revenge
Boden had been employed by the company that installed the control network, and his laptop contained a software application needed to access the system. His motive was revenge: the local authority in Queensland had rejected his job application.
Despite the heightened tensions following the occupation of Iraq, we have seen not a single act of cyber terrorism. Three reasons account for this. First, it is technically very difficult to achieve: Scada networks are not exposed to the internet, and with a correctly configured firewall, access is restricted to a fixed number of points within the company's internal network.
Second, infrastructure systems, designed to survive errors and natural disasters, have manual and, in some instances, mechanical overrides. Finally, as the Boden case illustrates, insider knowledge of and access to the target's internal network, together with an understanding of the application software, are prerequisites.
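The first of these points can be made concrete. A "correctly configured firewall" of the kind described above typically means a default-deny policy that allowlists only a handful of internal management hosts. The sketch below uses iptables purely as an illustration; the addresses, subnets and port number are hypothetical assumptions, not details of any real deployment.

```shell
# Illustrative default-deny policy for a gateway in front of a Scada
# network. All addresses are made up; port 502 (Modbus/TCP, a common
# Scada protocol) is an assumed example.

# Default stance: drop everything not explicitly allowed.
iptables -P INPUT DROP
iptables -P FORWARD DROP

# Let already-established sessions continue.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow only two named operator workstations on the corporate LAN
# to reach the control network on the Scada protocol port.
iptables -A FORWARD -s 10.0.5.10 -d 192.168.100.0/24 -p tcp --dport 502 -j ACCEPT
iptables -A FORWARD -s 10.0.5.11 -d 192.168.100.0/24 -p tcp --dport 502 -j ACCEPT

# Everything else, including anything arriving from the internet,
# falls through to the DROP policy.
```

Under a policy of this shape, an attacker on the internet has no route to the control systems at all; only the enumerated internal hosts do, which is why insider access becomes the prerequisite the Boden case demonstrates.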
But it may be shocking to learn what many security professionals acknowledge in muted tones. The physical security of critical infrastructure systems poses a real and often undefended risk. An over-simplified scenario will illustrate the point.
Perils of the pickaxe
Terrorists use shovels to dig down several feet to an OC-192, a 10-gigabit fibre optic cable simultaneously carrying millions of internet sessions, phone connections, e-mails and financial transactions. Pickaxes slice through cables at key transcontinental points (their locations are in the public domain). Electromagnetic pulse bombs, costing a few hundred pounds, take out most of the 13 DNS root servers and backbone routers. Economic chaos ensues as trillions of pounds a day in financial transactions grind to a halt.
Although military communications networks are hardened, civilian networks are largely undefended. Governments know this, and so do terrorists. Across the board, critical infrastructures were designed to cope with human error and natural disaster, not assaults by terrorists on their physical security.
So why have we not witnessed a massively damaging physical attack against communications infrastructures? The answers to this conundrum lie beyond the scope of this article.
Risks associated with the threat of cyber terrorism are real, but remote in comparison with the risks associated with breaches of physical security. Those who talk about the imminent threat of cyber terrorism are at best ill-informed and at worst intellectually dishonest.
Furthermore, I would question their agendas for inciting fear of a largely synthetic form of terrorism. A successful cyber terror incident is highly improbable. If such an event does occur, it will require insider knowledge and collusion, and its effect will likely be mitigated by inbuilt failsafe measures.
Pete Simpson is Threat Lab manager at security software supplier Clearswift
This was first published in April 2005