Intelligent networks are capable of high-level data routing, but there is a reluctance by networking suppliers and their customers to embrace the technology, writes Philip Hunter.
Intelligence is a relative concept and, applied to data networks, it now means considerably more than it did a few years ago.
Then, it was all about making intelligent decisions about how to route data across a network from source to destination, taking account of variable factors such as traffic levels and availability of particular links.
Now, the definition is starting to encompass decisions taken at the application level, extending not just to the choice of route, but also to the destination, the timing of delivery, and even whether to allow the data through at all.
In other words, networks will no longer just provide intelligent plumbing, but start making higher-level policy decisions that, until now, have been the preserve of the end systems.
The idea is by no means universally welcomed. It seems attractive for an internal IT infrastructure, where the network can become a hub orchestrating the flow of data and transactions between end systems.
It is also potentially attractive within trading chains in business-to-business set-ups, where a trusted network can make access and routing decisions based on policies agreed by all parties.
But where the network extends fully into the public domain, beyond the reach and jurisdiction of internal security policies, enterprises are understandably more nervous.
"Businesses are not going to want to have much intelligence in the network that they do not understand or have control of," says IBM's regional senior software consultant, Kevin Malone.
Malone says for B2B services to be effective, intelligent routing between partners is needed at a high level. For example, if a retail consortium makes a joint purchase of goods to exploit the combined purchasing power of the members, there has to be a way of breaking this transaction down and routing the relevant business forms between the participating companies.
In fact, software exists that can perform such routing, but it is in middleware, such as IBM's WebSphere, rather than lower-level network software. Such middleware generally resides within enterprises and is co-ordinated by a server that acts as a hub, orchestrating the flow of messages between systems involved in a given transaction.
This is the most efficient, scalable and manageable way of implementing such high-level routing, and it needs to be replicated in some form in external networks serving multiple enterprises in a trading community.
But, according to Malone, this is best done not by distributing intelligence into the network, but by having a secure site acting as a common hub, external to each of the participating companies. That way, the network continues to provide just the plumbing, although the hub site could be considered part of a higher-level network.
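The hub model Malone describes can be sketched in a few lines. Everything here is illustrative: the `Hub` class, `split_order` and the retailer names are invented for the example, not a real WebSphere API. The point is the shape of the design: the trusted hub site breaks the consortium's joint purchase into per-member order forms and routes each one, while the network underneath stays plain plumbing.

```python
# Hypothetical sketch of the external-hub model: a trusted site,
# outside every partner, splits a joint transaction and routes the
# resulting business forms. All names are invented for illustration.

class Hub:
    def __init__(self):
        self.partners = {}          # partner name -> delivery callback

    def register(self, name, deliver):
        self.partners[name] = deliver

    def split_order(self, joint_order):
        """Break a consortium purchase into one form per member,
        sized by each member's agreed share."""
        total = joint_order["quantity"]
        return [
            {"partner": p, "item": joint_order["item"],
             "quantity": int(total * share)}
            for p, share in joint_order["shares"].items()
        ]

    def route(self, joint_order):
        for form in self.split_order(joint_order):
            self.partners[form["partner"]](form)   # hub-to-partner delivery


received = []
hub = Hub()
hub.register("RetailerA", received.append)
hub.register("RetailerB", received.append)
hub.route({"item": "widgets", "quantity": 1000,
           "shares": {"RetailerA": 0.6, "RetailerB": 0.4}})
# RetailerA receives a form for 600 widgets, RetailerB for 400
```

Note that only the hub ever sees the whole transaction; each retailer receives just its own form, which is what makes the model palatable to partners wary of intelligence they do not control.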
Unsurprisingly, suppliers of routers and switches, such as Cisco and 3Com, are not content merely to provide the plumbing, but want to move into increasingly intelligent web-based services for both B2B and the internet.
But middleware, unless it is integrated into standard browsers, will not provide a suitable high-level routing medium, because it will not be able to communicate with the client systems over the internet.
So the remedy will have to come from standard IP-related protocols such as HTTP. Recognising this, IBM has entered the race to develop new versions of existing protocols that can provide the resilience and guaranteed delivery required for future e-commerce transactions. "We are taking some of the reliability of MQ Series [IBM's middleware messaging software] and putting it into HTTP," says Malone.
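IBM has not published the mechanism, but one common way to layer "guaranteed delivery" on top of plain HTTP is client-side retry with a stable message identifier, so the receiver can recognise and discard duplicates. The sketch below shows that generic pattern only; `send_reliably` and the flaky transport are invented for the example and are not IBM's actual HTTP extension.

```python
# Generic reliable-delivery pattern over an unreliable HTTP-style
# transport: retry until acknowledged, reusing the same Message-Id so
# redelivery is idempotent. Not IBM's actual protocol work.
import uuid

def send_reliably(post, url, body, max_attempts=3):
    """post(url, headers, body) -> True on acknowledged delivery.
    Retries with the same Message-Id so duplicates are detectable."""
    msg_id = str(uuid.uuid4())
    for attempt in range(max_attempts):
        if post(url, {"Message-Id": msg_id}, body):
            return msg_id
    raise RuntimeError("delivery not acknowledged after %d attempts"
                       % max_attempts)

# A flaky transport that fails twice, then acknowledges:
calls = []
def flaky_post(url, headers, body):
    calls.append(headers["Message-Id"])
    return len(calls) >= 3

mid = send_reliably(flaky_post, "https://partner.example/orders", b"<order/>")
# Every retry carried the same Message-Id
```

The transport stays dumb; the reliability lives in the protocol layer above it, which is exactly the division of labour the article describes.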
Even this, however, is not taking network intelligence beyond the level of routing and trusted delivery. The real question is whether it will become acceptable for external networks to open up data packets and inspect the contents of the payload in order to make routing decisions.
It is like suggesting the Royal Mail should open letters to decide whether the correspondent has put the right address on, or whether the content is suitable for the stated recipient.
It is easy to see why networking suppliers, while unwilling to cede ground to middleware suppliers such as IBM, are cautious in their assessment of how far enterprises are prepared to go.
"At present, we are talking about the ability to distinguish traffic based on Layer 3 and Layer 4 protocols," says Ilias Kolovos, 3Com's product manager for Gigabit Layer 3 products.
This means ensuring that all applications, including voice and video as well as data, can operate on an end-to-end basis with appropriate quality of service.
Many switches and routers are now capable of distinguishing between IP packets on the basis of information contained in their header. But, as Kolovos says, it is still difficult to provide uniform quality of service on an end-to-end basis across a whole network, including the Lan, access layer and core.
3Com has been enhancing its routers and switches so they can be configured from a central point to deliver consistent quality of service across a whole network.
"In a traditional network, if you wanted to prioritise, for example, Lotus Notes traffic and block gaming traffic, you would have to manually configure ports on each switch," says Kolovos. "Now we can apply these policies and rules network-wide."
To an extent, such facilities provide higher-level network intelligence, since they permit routing decisions to be made on the basis of applications rather than just prevailing network conditions. But it is still a fairly rudimentary level of intelligence.
Cisco is also aiming to deliver consistent quality of service. Last month, it released its AutoQoS software, which enables quality of service rules to be implemented without requiring much technical expertise. This is aimed at voice over IP, which is otherwise difficult to support consistently, and at smaller enterprises, where technical skills are scarce.
Another target for Cisco is the fast-growing storage area network market, where the Fibre Channel protocol rather than IP/Ethernet has, until now, provided the underlying data transmission.
Cisco believes the time is right to bring the San into the IP arena alongside voice, to provide a common network infrastructure for all electronic traffic.
"Storage will just be another IP application," says Bernard Zeutzius, Cisco's regional product manager in charge of storage networking. This will make storage networks more intelligent, because they will be able to exploit the security, filtering and quality of service mechanisms available for IP.
So, despite much rhetoric about intelligent networks and application level routing, it seems that neither IP networking suppliers nor their customers are ready for networks to start opening packets and peering at the payload. For now, the focus is on delivering consistent quality of service across a large IP network - something that remains problematic, especially when it involves routers and switches from more than one supplier.
What are intelligent networks?
Intelligent networks decide not only how to route data across a network, they also decide on the destination, the timing of the delivery, and even whether to allow the data through at all.
Switches and routers are capable of distinguishing between IP packets on the basis of information contained in headers, but it is still difficult to provide uniform quality of service.
Bringing the storage area network into the IP arena will make storage networks more intelligent because they will be able to exploit the security, filtering and quality of service mechanisms already available for IP.