Opinion

Traffic management, not more bandwidth, is key to improving network performance


Towards the end of 2005 I discussed whether the development of grid computing and virtual servers merited the “killer application for 10 Gigabit Ethernet” award and came away largely unconvinced.

Although it can be argued that 10 Gigabit Ethernet has a role in local, clustered environments for very high-performance computing, there is ample reason to think that, for most applications, Gigabit Ethernet is still more than enough.

At Broadband-Testing Labs we do a lot of Ethernet testing across all network layers, from Layer 2 through to Layer 7. What we find in almost all these tests is that general bandwidth is simply not an issue. Technical features such as quality of service and class of service do work, but most of the time they prove unnecessary.

That is the case even when running real-time applications such as voice and video alongside very heavy loads of HTML-based web and POP3 email traffic, which means the network is being driven pretty hard.

The two problems we find regularly cannot be solved by moving to either Gigabit or 10 Gigabit Ethernet. The first stems from the nature of enterprise applications such as SAP and Oracle. These products were not originally designed to be bandwidth-efficient, something our testing over the past two to five years has borne out repeatedly.

We know of companies whose users see response times in excess of 20 seconds, from initiating a transaction to getting a validated response, when working across a Wan or internet connection.

This pain point sits at the network edge, particularly where internet latencies are extreme, and a 10 Gigabit local area network backbone does not help much there.
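The arithmetic behind that pain is worth sketching. The figures below are illustrative assumptions rather than measurements from our labs, but they show why latency, not bandwidth, sets the floor on response time for a chatty application:

    # Why fatter pipes do not fix a chatty application: latency dominates.
    # Both figures are illustrative assumptions, not lab measurements.
    round_trips = 200    # request/response exchanges in one "chatty" transaction
    rtt_ms = 100         # assumed Wan/internet round-trip time in milliseconds

    wait_s = round_trips * rtt_ms / 1000
    print(f"Minimum response time: {wait_s:.0f} seconds, at any bandwidth")
    # -> 20 seconds of pure waiting; a 10 Gigabit backbone changes nothing here

At 200 round trips and a 100-millisecond round-trip time, the user waits 20 seconds before a single extra gigabit of backbone capacity can make any difference.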

Trouble at the server

The second problem with deploying more bandwidth occurs at the server. Bring security into the equation, with users moving from, say, HTTP to HTTPS, and it kills servers stone dead.

Basic testing shows that Secure Sockets Layer transactions impose nearly 10 times the server overhead of regular HTTP sessions. So, when you look at the number of open TCP and user connections, and then at the CPU utilisation on the servers, you can quickly identify where the problem lies.
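The shape of that problem is easy to sketch. The per-request CPU costs below are illustrative assumptions, chosen only to reflect the roughly tenfold overhead we see, not measured values from our test runs:

    # Sketch of SSL's server-side CPU cost versus plain HTTP.
    # Per-request costs are assumptions for illustration, not measured values.
    HTTP_CPU_MS = 0.5           # assumed CPU time to serve one plain-HTTP request
    SSL_HANDSHAKE_CPU_MS = 4.5  # assumed extra CPU for the SSL handshake alone

    users = 2000
    requests_per_user = 10      # short-lived connections, no session reuse

    plain_s = users * requests_per_user * HTTP_CPU_MS / 1000
    ssl_s = users * requests_per_user * (HTTP_CPU_MS + SSL_HANDSHAKE_CPU_MS) / 1000

    print(f"Plain HTTP CPU demand: {plain_s:.0f} s of server CPU")
    print(f"HTTPS CPU demand:      {ssl_s:.0f} s  ({ssl_s / plain_s:.0f}x)")

Run the numbers and the same workload that cost 10 seconds of server CPU as plain HTTP costs 100 seconds as HTTPS: a tenfold jump with no change in traffic volume.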

One answer – as we have seen from some of the suppliers we have worked with, such as Zeus and F5 Networks – is to offload as much of the workload as possible from the servers.

TCP/IP may be the de facto networking protocol, but that does not necessarily mean it is any good. In truth, this networking mainstay is alarmingly inefficient in its basic form, generating what can only be described as shedloads of requests and connections for any single application and single user.

Multiply this by hundreds or thousands of users and several applications, and the result is a terrifyingly high number of TCP connections open at any one time. You can almost see the servers shaking.
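The multiplication itself is trivial to sketch; every figure here is an assumption for illustration, but the order of magnitude is what matters:

    # How per-user TCP connections multiply across an enterprise.
    # All figures are illustrative assumptions.
    users = 1500
    apps_per_user = 4    # e.g. ERP client, email, intranet browser, CRM
    conns_per_app = 6    # parallel short-lived TCP connections per application

    total = users * apps_per_user * conns_per_app
    print(f"Concurrent TCP connections hitting the servers: {total:,}")
    # -> 36,000 connections to set up, track and tear down, before any real work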

We have proven this in the labs too, using our Spirent Avalanche test gear, which lets us create exactly these scenarios: stacks of connections and server overload.

Alternative solution

So what do we do instead of putting fatter pipes into the network? Simple: we deploy a Layer 4-7 optimising traffic management device, and server utilisation drops to a fraction of what it was, because the traffic management device takes the hit instead.
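In miniature, the trick such a device pulls off looks something like the sketch below. This is a toy illustration, not any supplier's implementation: it terminates the flood of client connections at the middlebox and multiplexes their requests over a small pool of persistent back-end connections. The addresses, port numbers and pool size are all placeholder assumptions.

    # Toy sketch of the Layer 4-7 trick: many client connections in,
    # a small pool of persistent back-end connections out.
    import asyncio

    BACKEND = ("127.0.0.1", 8080)   # assumed back-end application server
    POOL_SIZE = 8                   # a handful of long-lived server-side connections

    async def main():
        # Open the back-end connections once; every client borrows and returns them.
        pool = asyncio.Queue()
        for _ in range(POOL_SIZE):
            pool.put_nowait(await asyncio.open_connection(*BACKEND))

        async def handle_client(client_reader, client_writer):
            request = await client_reader.read(65536)          # one request per client, kept simple
            backend_reader, backend_writer = await pool.get()  # borrow a pooled connection
            try:
                backend_writer.write(request)
                await backend_writer.drain()
                client_writer.write(await backend_reader.read(65536))
                await client_writer.drain()
            finally:
                pool.put_nowait((backend_reader, backend_writer))  # connection lives on
                client_writer.close()

        server = await asyncio.start_server(handle_client, "0.0.0.0", 8000)
        async with server:
            await server.serve_forever()

    asyncio.run(main())

The servers now see eight well-behaved, long-lived connections instead of thousands of churning ones, and the handshake and teardown cost lands on the middlebox rather than the application servers.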

The bottom line is that the user gets a near-instant response from their applications, instead of having time to brew a mug of tea while waiting for the results to come back to their PC.

Forget figures like 100% improvement; think more like 3,000% improvement. And then of course there is the massive added benefit of avoiding the horrible task of upgrading servers, taking users, applications and services offline, and splashing out loads of cash to the likes of Sun, Dell or Hewlett-Packard for the privilege.

Instead, you spend what is likely to be a fraction of your planned server upgrade costs on one of these magic devices and a few minutes later your users are smiling again. I know it sounds coy and crass, but it really is that simple.

Broadband-Testing Labs

Steve Broadhead runs Broadband-Testing Labs, a spin-off from independent test organisation the NSS Group.

Author of DSL and Metro Ethernet reports, Broadhead is involved in several projects in the broadband, mobile, network management and wireless Lan areas, from product testing to service design and implementation.

www.broadband-testing.co.uk


This was first published in January 2006

 
