Intelligent wires at war?

BRIEFING: With the onset of apps-on-tap and the growth of wide area networks, there is a danger that intelligent devices from different suppliers start "competing" with each other. Steve Broadhead reports

Application service providers are offering to host increasingly large parts of your IT operation. Meanwhile multi-site networks are becoming ever more complex - and the devices that run them ever more intelligent.

But recent lab tests have identified a ghost in this ever more complex machine: the tendency of the devices that control traffic flow and optimise the network to compete with each other rather than co-operate (see below).

A survey by outsourcing specialist Milgo Solutions found that 60% of the firms questioned were already working with intranets, more than 50% were investing heavily in remote access services and almost 40% were investing in managed Wide Area Network (Wan) services.

It is noticeable, too, that the Wan, rather than the Local Area Network (Lan), is becoming the primary area of concern in network strategies.

This is hardly surprising when you consider the interest in extranets and potential e-commerce activities, and set that against the Wan bandwidth most companies currently have at their disposal.

While Lan performance has exploded, the Wan has effectively got slower, thanks to the amount of traffic the Internet and other services are creating. It will therefore remain a problem area for many companies, at least in the short to mid-term, until the full capacity of all the fibre optic cable currently being laid across the world becomes available.

To add to the Wan headaches, many users are also experimenting with application service provision.

But most users will not be aware of the sheer complexity of the network at the service provider's end of the chain: the hardware and software required to support what are, in some cases, millions of online sessions a day.

On leaving an end-user's computer, a request to access a Web site on the Internet may pass through any or all of the following:

  • Backbone routers, en route to the ISP and the host Web site

  • A front-end router at the ISP itself and maybe several more internal routers

  • One or more firewalls

  • Packet-shaping devices

  • Web cache engines

  • Web/application content switches

  • Load balancing switches for firewalls, cache and servers

  • A fibre channel switch as part of the Storage Area Network (San) controlling the host-disc subsystems.
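
To make the scale of that chain concrete, the following toy Python sketch walks a single request, hop by hop, through the kind of device chain listed above. It is purely illustrative: the hop names mirror the list, but the per-hop delay figures are invented for the sake of the example and do not describe any real product or network.

    # Illustrative only: one request traversing the device chain described above.
    # The per-hop delays (in milliseconds) are invented numbers, not measurements.

    HOPS = [
        ("backbone router", 2.0),
        ("ISP front-end router", 1.5),
        ("internal router", 1.0),
        ("firewall", 3.0),
        ("packet-shaping device", 1.0),
        ("Web cache engine", 0.5),
        ("Web/application content switch", 0.5),
        ("load-balancing switch", 0.5),
        ("fibre channel switch (San)", 4.0),
    ]

    def trace_request(url):
        """Print each hop the request passes through and the accumulating delay."""
        total = 0.0
        print(f"Request for {url}:")
        for name, delay_ms in HOPS:
            total += delay_ms
            print(f"  -> {name:32s} +{delay_ms:.1f} ms  (running total {total:.1f} ms)")
        print(f"Path delay before the host even responds: {total:.1f} ms")

    if __name__ == "__main__":
        trace_request("http://www.example.com/index.html")

Even in this crude model, every extra "intelligent" box adds its own processing time and its own configuration to get right.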

A few years ago even basic networking - the means of connecting several computers together in a single, shared environment via the equivalent of a junction box and a few wires - was deemed a challenge. So it is not difficult to conceive of the complexities within a service provider's network environment.

If it were simply a case of connecting device after device in daisy-chain fashion - a little like hanging several hard discs, CD-Rom drives and tape back-up units off a single SCSI controller - the complexity level would be low. In this instance, you are simply letting data pass through to the next device along the chain, without interruption.

But in a contemporary service provider network environment life isn't so simple. This is largely because the networking devices themselves have progressed from simple "black boxes" to "intelligent devices".

Networking is never simple. Even when the "network" consists of a few dozen PCs connected to a stack of Ethernet hubs, there are potential complications. It may seem simple in principle, but the reality is very different once you start to add in shared printers, scanners, remote access to the Internet, firewalls, networked fax and Voice-over-IP telephony. Then there are the common applications to worry about, which add another layer of complexity to the network.

Scale that up to hundreds of thousands of users and it is easy to see the potential conflicts the ASP model could bring.

The Internet, of course, is partly to blame for this growing complexity and has made companies look at wide area networking in a new light.

The promise of low-cost services across the Internet, such as Voice-over-IP telephony and remote access to and management of office networks from almost anywhere in the world, together with the possibilities of e-commerce and e-business, has further magnified the "Internet effect". In many cases this has meant a radical reappraisal of the service provider networking strategies of two to three years ago, both in terms of basic network design and hardware and in terms of the applications and services being offered.

The result is that service providers are being offered a new generation of intelligent networking devices, in which the software within the device is capable of making significant decisions about how traffic on the network should be handled.

In a situation where one intelligent device, playing the role of traffic police, tells the simple devices around it what to do, the potential problems are insignificant. The only issue here is ensuring that the intelligent device is set up correctly in the first place.

However, if several devices on the network have artificial intelligence built in - and are configured in isolation - they are as likely to fight each other for control of the network traffic as they are to complement each other and optimise network performance.

And here is the irony: intelligent networking devices such as load-balancing Ethernet switches, Web content switches, packet shapers and Web cache engines are all designed with one aim in mind: to optimise traffic flow and, therefore, performance across the network - independently of each other, that is.

So when you get several devices all trying to be clever at the same time, there is more chance of them competing for the data than co-operating to optimise performance.

Given that very few service provider networks consist of products from a single vendor, the issue of multi-vendor interoperability once again rears its ugly head. For example, a Web cache engine from one vendor may be intelligent in its own right and can therefore cache suitable data to speed up Web server access. But this "intelligence" does not extend to being aware of other "intelligent" devices, such as Web content and load-balancing switches, on the same network.

The problem for all concerned is that innovations are occurring daily. Confusion arises because there are no set guidelines for optimising the contemporary, multi-vendor, multi-device networks that ISPs and ASPs must use.

Phil Wainewright, managing editor at ASP media specialist ASPnews.com, explains: "Maintaining high performance when hosting servers in an Internet data centre is still something of a black art.

"We see a lot of vendors investing in helping ASPs and hosting providers set up their data centres simply so that they can understand the issues better and start to define some best practice guidelines."

When digital brains start arguing

The test labs of the independent product-testing group NSS have witnessed examples of "intelligent" devices from different vendors actively competing with each other, resulting in network performance actually slowing down. This only occurs when all of these devices are configured with their full feature-sets enabled, creating a situation where competition, rather than co-operation, is likely to take place.

An actual example occurred with the ArrowPoint CS-800 - now the Cisco 11800 - content services switch and the CacheFlow CF110 Web cache engine. The primary task of the former is to intelligently direct requests for Web data - such as a home page on a Web site - to the appropriate Web server or cache engine, as efficiently as possible. To do this, it tries to grab every data packet that enters the network before any other device can get it.

Unfortunately, this includes the Web cache engine, which also tries to be the first device to grab any packet that enters the network, in case the request is already in cache and therefore does not need to be forwarded to any servers. The solution is to restrict the capabilities - or intelligence - of either the switch or the cache so that they co-operate peacefully. In this particular instance, network performance was optimised by placing the switch ahead of the cache in the pecking order, but with other combinations of devices this would not necessarily be the correct solution. What is clearly required is the reinvention of something like the original interoperability tests, to create optimal configurations for multi-supplier networks which service providers can then take as blueprints for network optimisation.
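
The effect of that pecking order can be illustrated with a toy simulation. The sketch below is not a model of the ArrowPoint or CacheFlow products' actual behaviour; the request mix, the race between the two devices and the caching policy are all invented, purely to show why two devices grabbing the same packets tends to hurt performance, while an agreed ordering lets the cache do its job.

    # Illustrative only: two "intelligent" devices racing for the same requests,
    # versus an agreed ordering (switch first, then cache). All behaviour and
    # numbers are hypothetical; this does not model any real product.
    import random

    random.seed(1)

    # A repetitive request mix, so caching ought to pay off.
    REQUESTS = [f"GET /page{random.randint(1, 5)}.html" for _ in range(1000)]

    def uncoordinated(requests):
        """Both devices try to grab each packet first; whichever wins the race
        handles it, so the cache only ever sees about half the traffic."""
        cache, hits = set(), 0
        for req in requests:
            winner = random.choice(["switch", "cache"])  # unpredictable race
            if winner == "cache":
                if req in cache:
                    hits += 1          # served from cache
                else:
                    cache.add(req)     # miss: fetch from a server, then cache it
            # If the switch wins, the request goes straight to a server and the
            # cache never gets a chance to serve or store it.
        return hits

    def coordinated(requests):
        """Agreed pecking order: the switch classifies every request first and
        hands cacheable ones to the cache engine."""
        cache, hits = set(), 0
        for req in requests:
            if req in cache:
                hits += 1              # served from cache
            else:
                cache.add(req)         # fetched from a server once, then cached
        return hits

    n = len(REQUESTS)
    print(f"Devices racing each other: {uncoordinated(REQUESTS)}/{n} requests served from cache")
    print(f"Switch ahead of cache:     {coordinated(REQUESTS)}/{n} requests served from cache")

With these invented numbers, the racing arrangement serves only about half as many requests from cache as the ordered one - the point being that the gain comes purely from settling which device goes first, not from either device being any cleverer.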

Why outsourcing is breaking up

The success of outsourcing has been questioned in the past - and rightly so. Towards the end of the 1980s, many large companies chose to outsource their entire IT function, partly as a result of having their fingers burnt when the economy collapsed. However, many returned to running their own networks after being burnt again, this time by the outsourcers. In many cases this was the result of handing over too much responsibility, for far too much money, to service providers that were simply incapable of doing the job. Big companies ended up with expensive outsourcing contracts, while smaller companies, which could not afford to spend millions of pounds, were left with little or no support.

Today, outsourcing contracts operate on a much smaller scale, appealing to the small to medium-sized enterprise market, where 95% of companies exist. It is no longer an all-or-nothing deal, nor is it necessarily expensive.

Many companies are afraid to give up complete control of their network but would like assistance in its day-to-day management, as well as expert but impartial advice on planning the way forward and on acquisition policies.

According to the international analyst company Gartner Group, selective outsourcing of business and IT functions will quickly become the norm, not the exception. In a recently presented five-year vision of the future of IT, Gartner claimed that there is a major need to change the way the IT function is traditionally handled within companies, moving towards greater reliance on external service providers and less permanent in-house staffing. This does not, however, mean the end is in sight for the IT manager: while Gartner sees some companies completely outsourcing their IT functions, over 85% will maintain internal IT services, albeit changed in size and scope.
