Ant hills are an unlikely source of inspiration, but some large organisations are developing biologically-based techniques in the drive to deliver a competitive edge to IT systems in cut-throat markets. Danny Bradbury reports on how nature can supply some new answers to old problems
You might wonder what IT could gain from emulating nature. Yet large corporations in the UK are already investing in research into biological computing to enhance their IT systems. Royal Mail is looking at immune systems as a way to highlight anomalies and detect fraud, while BT sees studying the behaviour of ants as a source of routing algorithms.
The idea of enhancing software with characteristics found in the natural world has been around since the 1940s, when neural networks were first proposed as a basis for artificial intelligence. Only in the last few years, however, have the links between algorithms found in nature and computer software begun to fuel commercial interest.
Most of these techniques exhibit similar traits at an abstract level, explains Ken Lodding, a scientist who is designing biological algorithms for fault-tolerant computing at Nasa. "In certain cases you need a mathematical calculation to the nth degree, but when you are doing searches or certain other actions you basically need a good enough answer," he says. "That is what we do as human beings, and that is what I believe will come out of biological models. They are not hard-core logic."
Instead, the logic in biological computing systems is decidedly fuzzy.
Lodding hopes that one day he will be able to apply his software to the next generation of Mars rovers, to avoid incidents such as the priority-inversion bug that caused repeated resets of the 1997 Mars Pathfinder lander's control software. Debugging a program on a machine 35 million miles away can cause major headaches. Lodding wants to build machines that can correct their own faults by dividing functions into cellular software components, each able to take on different roles under the guidance of in-built software "genomes".
The concept is similar to the development of an embryo, explains Lodding. "You start with a stem cell and then as the body grows your cells take on different tasks because they have the appropriate gene structure. We are emulating that in the software," he says.
If one or more cells responsible for a particular function fail, others can take over, making the machine more autonomous.
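Lodding's cellular idea can be sketched in a few lines; the roles and the repair rule below are hypothetical, invented purely for illustration, not Nasa's code:

```python
# Hypothetical sketch of genome-guided cells: every cell carries the full
# "genome" of functions but expresses only one role at a time, like a
# differentiated cell in an embryo.
GENOME = ("navigate", "sense", "transmit")

class Cell:
    def __init__(self, role):
        self.role = role          # the "gene" this cell currently expresses
        self.alive = True

def heal(cells):
    """Self-repair step: if no living cell covers a role, re-differentiate
    a cell whose own role is duplicated elsewhere to take it over."""
    by_role = {}
    for c in cells:
        if c.alive:
            by_role.setdefault(c.role, []).append(c)
    for role in GENOME:
        if role not in by_role:
            # prefer a donor whose role another living cell also performs
            donors = [c for cs in by_role.values() if len(cs) > 1 for c in cs]
            if donors:
                donor = donors[0]
                by_role[donor.role].remove(donor)
                donor.role = role
                by_role[role] = [donor]

cells = [Cell("navigate"), Cell("sense"), Cell("transmit"), Cell("sense")]
cells[0].alive = False            # the only navigation cell fails
heal(cells)
print(sorted(c.role for c in cells if c.alive))
# → ['navigate', 'sense', 'transmit']
```

After the failure, one of the two redundant "sense" cells re-differentiates into a navigation cell, so all three functions are covered again without outside intervention.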
This approach resonates with developmental algorithms, which are Richard Tateson's core area. Tateson, a senior researcher in BT's Pervasive ICT Research laboratory who holds a doctorate in developmental biology, says biological computing is finding real-world applications in commercial fields.
Ant algorithms are particularly promising, he says. The technique, also known as stigmergy, works by letting agents affect their environment rather than communicating with each other directly. "Real ants leave a pheromone trail that can be followed by others. The more ants that follow it, the stronger the trail. When they find food they head directly back to the nest, so the trail gets stronger still," he explains. "This is very interesting to telecoms companies because of the path-finding and routing problems you can apply it to."
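A toy model of the pheromone mechanism (illustrative only, not BT's routing code): two routes lead to food, ants pick a route in proportion to its pheromone, and the shorter route wins because it is reinforced more often:

```python
import random

# Stigmergy in miniature: the only "communication" is the pheromone
# each ant leaves on its chosen route. Shorter trips deposit more
# pheromone per unit of effort, and trails slowly evaporate, so the
# colony converges on the short route with no central planner.
random.seed(1)
lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}

for step in range(2000):
    # each ant chooses a route with probability proportional to pheromone
    route = random.choices(list(pheromone), weights=pheromone.values())[0]
    pheromone[route] += 1.0 / lengths[route]   # shorter trip, stronger trail
    for r in pheromone:
        pheromone[r] *= 0.995                  # trails evaporate over time

print(max(pheromone, key=pheromone.get))       # the colony's preferred route
```

The same positive-feedback loop, applied to network links instead of forest paths, is what makes the idea attractive for routing: traffic that arrives quickly reinforces the links it used.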
Likewise, evolutionary computing, at the simplest level, attempts to solve problems by testing many randomly-generated solutions and scoring them based on their success. The solutions that get the best scores stay in the game while those that fail are killed off, in a process similar to Darwin's natural selection. Solutions that survive are bred together, creating newer, stronger solutions, until a handful of final candidates remain. In this way, the population converges on high-quality answers.
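The score-select-breed loop can be sketched in a few lines of Python. This is an illustrative toy on a deliberately simple problem (maximise the number of 1s in a bit-string), not CodeFarm's Galapagos:

```python
import random

# Minimal evolutionary algorithm: candidates are bit-strings, fitness is
# the number of 1s, the fittest half survives each generation, and the
# survivors "breed" by one-point crossover with occasional mutation.
random.seed(0)

def fitness(genome):
    return sum(genome)                       # score: how many bits are 1

def breed(a, b):
    cut = random.randrange(1, len(a))        # one-point crossover
    child = a[:cut] + b[cut:]
    if random.random() < 0.1:                # rare random mutation
        i = random.randrange(len(child))
        child[i] ^= 1
    return child

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(60):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:15]                     # weak solutions are killed off
    pop = survivors + [breed(*random.sample(survivors, 2))
                       for _ in range(15)]

print(fitness(max(pop, key=fitness)))        # converges towards the maximum, 20
```

Real applications replace the toy fitness function with something expensive to evaluate, such as the risk and return profile of a candidate portfolio.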
This, along with other biological algorithms, lies at the heart of Galapagos, the aptly-named optimisation tool from UK-based start-up CodeFarm. Managing director Jeremy Mabbitt explains that the product is divided into two parts: a Java 2 Enterprise Edition-based server that distributes optimisation work across a grid of client PCs; and a workbench product designed to create evaluation scenarios. Investment house KBC Financial Products has been using the evolutionary algorithm software to evaluate potential portfolio mixes of convertible and derivative financial products. Previously it used a manual approach in which combinations of numbers were crunched in an Excel spreadsheet.
Another promising area in which biological computing techniques are being applied is in the detection of fraud. Visa has been using neural networks to identify credit card fraud since 1994, when it created its Cardholder Risk Identification Service. This service was replaced late last year with the Falcon neural networking product from US company Fair Isaac, which offered a better neural networking model tailored for credit card fraud. Rebranded as Visor, the product is already proving three-and-a-half times more effective than its predecessor, says the credit card company.
Robert Littas, senior vice-president in fraud management for Visa, explains that neural networks train themselves on high volumes of data to improve their accuracy, but their output can only ever be used as an indicator for further investigation. "For us, what is important is to identify a particular transaction. The neural network will not tell you that something is absolutely a fraud," he says. "Instead, it scores an input, and the higher the number the greater the chance. The banks receive the score and then take appropriate action."
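A toy sketch of the scoring model Littas describes. The features, weights and thresholds below are invented for illustration; in a real system a trained network produces the score from transaction data:

```python
import math

# Illustrative risk scorer: a weighted sum of made-up transaction
# features squashed into a 0-1000 score. The score is not a verdict,
# only a ranking that tells the bank how urgently to investigate.
WEIGHTS = {"amount_vs_avg": 2.0, "foreign": 1.5, "night": 0.8}
BIAS = -3.0

def risk_score(txn):
    z = BIAS + sum(WEIGHTS[k] * txn[k] for k in WEIGHTS)
    return round(1000 / (1 + math.exp(-z)))   # higher = more suspicious

def bank_action(score, review_at=400, block_at=800):
    # the bank, not the network, decides what each score band means
    if score >= block_at:
        return "block and phone cardholder"
    if score >= review_at:
        return "queue for analyst review"
    return "approve"

txn = {"amount_vs_avg": 3.0, "foreign": 1, "night": 1}
print(risk_score(txn), bank_action(risk_score(txn)))
```

Splitting scoring from policy in this way is exactly why the output "can only ever be used as an indicator": the model ranks transactions, while the thresholds and responses remain a business decision.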
Royal Mail has taken a different approach to fraud detection, concentrating on a much newer biological computing theory called computational immunology, pioneered by Stephanie Forrest, a professor in computer science at the University of New Mexico. This uses software to mimic the characteristics of the human immune system. Mary Wilde, a security researcher at Royal Mail, has applied the techniques to sifting through the large amounts of transactional data from branch post offices to highlight anomalies using the software equivalent of white blood cells.
Richard Overill, senior lecturer in computer science at King's College London, was heavily involved in building the system. The underlying concept focuses on the idea of self, he explains. The system has to learn what "self" is by analysing data that is known to be non-fraudulent. When a good stable representation of self is achieved, you can start introducing transactional data with a mixture of good and bad behaviour, and hope that the software agents programmed into the system will recognise that some of these events are not in its self profile.
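The self/non-self idea Overill describes corresponds to the classic negative-selection algorithm from computational immunology. A minimal sketch, with invented one-dimensional "transaction" values rather than anything from Royal Mail's system:

```python
import random

# Negative selection in miniature: generate random detectors, then
# discard any that match known-good "self" data - the censoring step.
# Surviving detectors can, by construction, only fire on non-self
# (anomalous) values, like white blood cells attacking foreign matter.
random.seed(4)
SELF = [random.gauss(100, 5) for _ in range(200)]   # normal transactions

def make_detector():
    centre = random.uniform(0, 300)
    return (centre - 4, centre + 4)                 # an interval detector

def matches(detector, value):
    lo, hi = detector
    return lo <= value <= hi

# keep only detectors that never match any self sample
detectors = [d for d in (make_detector() for _ in range(500))
             if not any(matches(d, s) for s in SELF)]

def is_anomaly(value):
    return any(matches(d, value) for d in detectors)

print(is_anomaly(101))   # inside the self region: no detector survives here
print(is_anomaly(250))   # far outside self: should trip a detector
```

Unlike a neural network score, a firing detector identifies the specific value it matched, which is the audit-friendly property Overill values.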
If anything, the trial of the system produced too many results, says Wilde, and many of them were false positives. "Translating them into business-speak and business transactions is tricky. We have to do a lot of work on assessing the results so that the interesting ones float to the top."
Nevertheless, Overill eschewed the neural network concept in favour of immunology because he wanted more detail in his results. "We know that banks detect fraud with neural networks but they don't explain to you which transaction it was or what was wrong with it," he explains. "Neural networks are no good for auditing and forensic analysis."
The biggest tension for IT professionals lies in moving from formal computing, in which the steps leading up to a result are explicitly prescribed and understood, to biological algorithms that work in the realm of logical "fuzziness" and probabilities. Lodding dreams of a time when tiny computational cells built into the wing of an aeroplane can make localised decisions using developmental biological algorithms.
Nature's role in the world of IT may be relatively limited now but the commercial world is always looking for something new to give it the competitive edge.
Autonomic computing - learning from the body to self-heal
IBM's autonomic computing initiative is still in its early stages, but it takes some ideas from the natural world and puts them into hardware and software form. Drawing on the idea of the central nervous system, which passes data about the condition of the body to the brain, autonomic systems will be self-healing, self-optimising and self-protecting, say IBM researchers. But it is unlikely that autonomic computing will do away with the need for human intervention. "The autonomic idea comes from the body. There are certain things the body can take care of itself, but you have to go to the doctor for other things," says Adel Fahmy, programme director for autonomic computing core technologies at IBM.
Hackers find a natural source for viruses
Viruses and worms are the most notorious embodiment of biological ideas in computing. The concept of reproduction and replication that mirrors biological viruses has been a mainstay of computer viruses since the first one appeared for the Apple II in 1982. Since then, virus writers have embraced the biological metaphor with increasing enthusiasm, creating polymorphic viruses that mutate their own code. Pete Simpson, threatlab manager of content security company ClearSwift, draws attention to Serotonin, a worm written by a Czech virus writer in late 2002 which uses genetic techniques. The worm uses a theoretical peer-to-peer network called "wormnet" to exchange code with other copies of itself online, mutating pairs of code into new, single forms. The worm, which is currently available as source code but has not yet been deployed in the wild, also evolves in new environments. Copies of it die out if their mutated code cannot spread, leaving survivors to replicate themselves onto new networks.
This was first published in July 2004