Enterprise networks contain vulnerabilities and are likely to do so until the economics of system design and implementation change dramatically. The best defence against the exploitation of these vulnerabilities currently involves using reactive measures against viruses and spyware, and using intrusion prevention systems such as firewalls.
These measures are reactive in that prior knowledge of an attack's characteristics is required before a threat signature can be developed. The time required to get a sample of the attack to the security supplier, analyse it and distribute the resulting signature to clients is far longer than the malware potentially needs to spread, and in that window it can cause considerable damage.
What is needed is a more systematic and immediate way of creating threat signatures and deploying them to where they are needed as quickly as possible. The cure must spread faster than the disease. The convergence of various security technologies can help IT professionals to achieve this goal.
Taking inspiration from nature
Many developments in IT, both good and bad, have been inspired by biological systems such as the human immune system. The immune system has inspired many defence systems, particularly in the field of intrusion and virus detection. In particular, the self/non-self (SNS) detection mechanism used by mammals' immune systems has been used as a way of thinking about computer viruses.
The SNS model relies on the ability to differentiate between an entity's own proteins and foreign proteins, as well as the ability to establish a memory of past infections.
In order to function, SNS requires stimulation and co-stimulation between B-cells, which destroy threats; T-cells, which among other things facilitate communication; and antigen-presenting cells (APCs), which identify the original threat. In practice, this model does not explain all observed phenomena in immunology and has not succeeded as an intrusion detection defence.
An alternative model, called the "danger model", has been proposed by immunologist Polly Matzinger. It departs in one significant way from classical immunology: it does not rely on SNS to find foreign entities. Instead, it relies on danger signals from injured cells to prompt the APCs to activate the T-cells and thereby the appropriate B-cells that eliminate the antigen.
For IT professionals, the lesson to be learnt from this model is that it is potentially possible to identify a threat through co-stimulation of a signal that identifies the threat as dangerous.
What is needed for IT threat detection is a number of well-defined danger signals that together confirm a valid threat. The resulting composite alert could then be used to stimulate all components of the defence network.
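As a rough sketch of this co-stimulation idea, the fragment below combines independently weighted danger signals into a composite alert that fires only when the signals together cross a threshold. The signal names, weights and threshold are all illustrative assumptions, not part of any real product.

```python
from dataclasses import dataclass

@dataclass
class DangerSignal:
    source: str      # e.g. "crash-dump", "resource-profile", "nids"
    weight: float    # how strongly this sensor indicates danger

def composite_alert(signals, threshold=1.0):
    """Raise an alert only when co-stimulating signals together
    exceed the threshold; a single weak signal is not enough."""
    score = sum(s.weight for s in signals)
    return score >= threshold

# Two weak signals co-stimulate into a confirmed alert...
both = [DangerSignal("crash-dump", 0.6),
        DangerSignal("resource-profile", 0.5)]
print(composite_alert(both))                       # True
# ...but either one alone is dismissed.
print(composite_alert([DangerSignal("nids", 0.6)]))  # False
```

The point of the threshold is exactly the danger model's lesson: no single sensor is trusted on its own, so a lone noisy signal cannot trigger the defence network.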
Because the danger model allows users to have greater confidence in their identification of an attack, it becomes possible to derive signatures automatically from the system. These signatures are then spread out from the originator to neighbouring systems.
The sensors that use these signatures can dismiss unused ones over time, for example, because that type of sensor never sees that sort of traffic. Keeping old signatures active too long may degrade sensor performance or cause false positives.
Collectively, however, the entire network must be able to maintain a complete set of signatures. This is close to the biological model, where fewer and fewer T-cells are available to detect a long-gone threat, but always remain in minute quantities and quickly replicate if stimulated by a recurrence of that threat.
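A minimal sketch of this aging behaviour, assuming a hypothetical per-node "strength" value for each signature: the strength decays each epoch but never falls below a small floor, mirroring the residual T-cell population, and a recurrence of the threat restores it at once. The decay factor and floor are illustrative.

```python
DECAY, FLOOR = 0.5, 0.01   # assumed tuning values

def age(strength):
    """Decay a signature's strength each epoch, but keep a
    minute residual presence rather than deleting it."""
    return max(strength * DECAY, FLOOR)

def stimulate(strength):
    """A recurrence of the threat restores full strength."""
    return 1.0

s = 1.0
for _ in range(10):
    s = age(s)        # a long-unused signature fades...
print(s)              # ...but persists at the floor (0.01)
s = stimulate(s)      # and replicates quickly when re-stimulated
```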
Agents in the system process and disseminate the signatures, but a top-down structure is avoided. Instead, the signatures spread in a peer-to-peer fashion.
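The peer-to-peer spread described above can be sketched as a simple gossip-style flood: each node that learns a new signature forwards it to its neighbours, which forward it on in turn, with no central coordinator. The peer graph and signature identifier here are hypothetical.

```python
def gossip(start, peers, signature, store):
    """Flood `signature` from `start` through the peer graph,
    recording it in every reachable node's store exactly once."""
    pending = [start]
    while pending:
        node = pending.pop()
        if signature in store[node]:
            continue                # already seen: stop forwarding
        store[node].add(signature)
        pending.extend(peers[node])

peers = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
store = {n: set() for n in peers}
gossip("a", peers, "sig-123", store)
# every node now holds the signature without a top-down authority
print(all("sig-123" in store[n] for n in peers))  # True
```

The duplicate check is what keeps the flood from looping forever, and it is also why a node that already holds a signature imposes no extra traffic on its neighbours.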
In a practical implementation of this model of threat detection, various types of monitors are deployed. Program filters such as TCP Wrapper are employed to screen the input to programs. These can be used for signature- or behaviour-based scanning, or even filtering.
Special application monitors are used to monitor programs, look for signs of distress and extract signatures of the attack when one has been detected. These are the danger sensors and are the cornerstone of the danger model approach.
An example of this sort of sensor is the crash dump produced when a program crashes under a buggy exploit attempt - a hacker's failed attempts to break in can provide an early warning of an imminent wider attack. An even better approach is to use a normal resource-usage profile to determine when a program is misbehaving. As long as these two sensors complement each other, the combined signal will be useful.
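A resource-usage sensor of the kind just described might, as one hedged illustration, flag a program whose CPU use deviates sharply from its learnt normal profile. The history values and the z-score threshold below are assumptions for the sake of the example.

```python
import statistics

def is_misbehaving(history, current, z_threshold=3.0):
    """Return True when `current` usage lies more than `z_threshold`
    standard deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(current - mean) > z_threshold * stdev

cpu_history = [12, 14, 13, 15, 12, 14, 13]   # percent CPU, normal runs
print(is_misbehaving(cpu_history, 95))       # sudden spike -> True
print(is_misbehaving(cpu_history, 13))       # typical load  -> False
```

In the danger model's terms, an anomaly like this is one danger signal; on its own it would merely co-stimulate, not confirm, an alert.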
Network monitors, such as a network intrusion detection system, can be used as alternatives or as an augmentation to the other sensors. To be able to prevent known attacks from reaching their targets it is possible to employ conventional systems at the router and firewall level.
The infrastructure of the defence network is based on spreading the various kinds of alerts throughout the network and the capacity to maintain the complete set of attack signatures. Because the danger model eschews a centralised design, the distribution must happen in a bottom-up way but must still reach all nodes.
The larger and more complex networks of the future are going to require that infrastructures are self-organising. Moreover, a central authority can be the weakest link in an attack if it is cut off from the network it is supposed to be administrating.
The solution to this problem could be as simple as using subnet broadcasting or a more structured approach such as zero-configuration networking. The components must all be in a trust relationship with one another and communication must be secured by encrypting the signals with either shared keys or the destination's public key. The infrastructure is probably the most daunting challenge of the danger model approach.
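One small piece of that trust relationship can be sketched with a shared key: below, an HMAC tag lets a peer verify that an alert really came from a trusted node and was not tampered with in transit. This shows authentication of the signals rather than their encryption, and the key handling is deliberately simplified for illustration.

```python
import hashlib
import hmac

SHARED_KEY = b"example-shared-key"   # placeholder, not a real key

def sign(signature: bytes) -> bytes:
    """Tag an outgoing signature with an HMAC under the shared key."""
    return hmac.new(SHARED_KEY, signature, hashlib.sha256).digest()

def verify(signature: bytes, tag: bytes) -> bool:
    """Check an incoming signature's tag in constant time."""
    return hmac.compare_digest(sign(signature), tag)

tag = sign(b"sig-123")
print(verify(b"sig-123", tag))        # True for an authentic alert
print(verify(b"sig-tampered", tag))   # False for a forged one
```

In a real deployment the shared key would itself have to be distributed securely, or replaced by per-destination public keys as the text suggests.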
The memory of immunology must collectively stay intact, but that does not mean that detection signatures need to be kept in their original form: techniques such as the Teiresias algorithm have been developed for consolidating information about threats. Where signatures cause false positives, they can also be hand-pruned from the collection by sending an anti-signature through the network.
Some nodes may decide to "grandfather out" - meaning delete - older signatures. If an old threat pattern re-emerges, any monitor that sees the attack can re-send the signature throughout the network, thereby reactivating it in all the nodes.
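Both mechanisms - anti-signatures for hand-pruning and re-sending for reactivation - can be sketched as two message types applied to a node's local signature store. The message format and identifiers are hypothetical.

```python
def apply_message(store, msg):
    """Update a node's signature store from a network message:
    an "anti" message prunes a false-positive signature, while a
    "sig" message adds a new signature or reactivates an old one."""
    kind, sig = msg
    if kind == "anti":
        store.discard(sig)     # hand-pruned false positive
    elif kind == "sig":
        store.add(sig)         # new or reactivated signature

store = {"sig-old"}
apply_message(store, ("anti", "sig-old"))   # pruned from this node
print("sig-old" in store)                   # False
apply_message(store, ("sig", "sig-old"))    # the threat re-emerges
print("sig-old" in store)                   # True again
```

Because both message types spread peer-to-peer like any other signature, pruning and reactivation need no central authority either.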
The path to automated security
It is expected that the current trend towards ever-larger and more complex networks, coupled with a steady flow of vulnerabilities, will eventually force completely centralised security management to be abandoned in favour of a more autonomous security system.
Luckily, anti-virus technology and, in a more limited fashion, intrusion detection system technologies, are starting to provide us with more generic, automated detection methodology, and this trend is likely to continue. Many recent market entries have not had the luxury of starting with a large corpus of malware to test, and have instead concentrated on generic methods that detect evidence of attacks instead of specific known threats.
What is therefore missing is collaboration between the various tools that are emerging. For this reason, it is unfortunate that the industry is fragmented, and the few companies that have most of the pieces of the puzzle do not integrate them into a single, collaborative system.
To reach the goal of automatic security, the industry needs the convergence of all defensive technologies so that the whole system becomes greater than the sum of its parts.