Network grinding to a halt?

As company Intranets start to slow down, improving network performance becomes a necessity for the business - and for the IT manager's job security.

In The Beginning...

The problem of increased network and Intranet traffic has grown considerably over the last decade. The majority of businesses bearing the weight of this networking explosion run on Ethernet-based hardware and use the IP protocol. Many analysts give two main reasons for the problem: society's increased reliance on IT, combined with the huge amount of data that is stored and searchable via the average Intranet. Companies archive everything via the network and require that accurate data be available to all users at all times. Another, more worrying, contributor is the huge amount of diagnostic and control traffic generated by tools such as SNMP and RMON.

"We as an industry acknowledge that we have a problem but many are quite happy with the status quo," claims Phil Snell, founder of network management company Chevin and an outspoken critic of both SNMP and Ethernet. "Essentially, many of these expensive network management tools such as Tivoli, Unicenter and OpenView are causing many of the problems that they claim to fix."

Snell's theory, a plausible and widely shared one, is that the sheer volume of traffic generated by these management protocols puts such a strain on performance that the benefits are often outweighed by the problems.

"Chevin has developed a vastly improved version of RMON, but we are not saying that this is the complete solution to the problem. We need to look at redefining how networks are managed. Businesses are still using technology designed 25 years ago and who in their right mind would want to use 25 year old technology today?"

Do We Have A Problem?

But even with the limitations of this old and arguably obsolete Ethernet technology, many businesses are unsure whether the sluggish performance of their network is due to poorly designed applications, insufficient bandwidth, or misconfigured or faulty equipment. What is needed is a method of pinpointing the cause.

The first step is to perform a full discovery of every device, server, communication link and application running across the Intranet. Many network management tools, such as the aforementioned Tivoli, Unicenter and OpenView, have modules to perform 90% of this detailed discovery automatically, although you will often have to fine-tune the searches manually. Tools such as Microsoft Visio can then help create a detailed graphical representation of the network topology.
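
For a flavour of what these discovery modules automate, here is a minimal sketch in Python. The subnet prefix, the /24 range and the Linux-style ping flags are all assumptions for illustration, not a substitute for a proper management suite.

    import subprocess

    def discover(prefix="192.168.1", timeout=1):
        """Ping-sweep an assumed /24 subnet and return the addresses that answer."""
        live = []
        for host in range(1, 255):
            addr = f"{prefix}.{host}"
            # -c 1 sends a single echo request; -W sets the reply timeout (Linux ping)
            result = subprocess.run(
                ["ping", "-c", "1", "-W", str(timeout), addr],
                capture_output=True,
            )
            if result.returncode == 0:
                live.append(addr)
        return live

A real discovery pass would also query devices via SNMP for interface and model details; this sketch only establishes which addresses respond.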

Next you need to test the hardware. Fluke is one of the market leaders in network testing and its basic guidelines are sound: test the physical layer, the packet layer and the management protocols such as SNMP. Fluke's range of handheld testers provides these functions. Instead of obtuse error messages, modern network testing tools provide detailed breakdowns of problems, performance and errors, and will often suggest possible solutions via knowledge bases embedded in the system.
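
A handheld tester does this properly down at the physical layer, but a crude packet-layer health check can be scripted. The sketch below is a rough illustration only: it times repeated TCP connections to a server (the host, port and attempt count are arbitrary assumptions) and counts failures as a stand-in for loss.

    import socket
    import time

    def probe(host, port=80, attempts=10, timeout=1.0):
        """Time repeated TCP connects; failed attempts stand in for packet loss."""
        rtts, failures = [], 0
        for _ in range(attempts):
            start = time.monotonic()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    rtts.append((time.monotonic() - start) * 1000)  # milliseconds
            except OSError:
                failures += 1
        return rtts, failures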

Finding out whether the problem with the network or Intranet is due to a poorly designed application or to a misconfigured switch or router is more difficult. The best place to start is by looking at the types of traffic generated by different types of user. Utilities such as OPNET Modeler and NetIQ Chariot allow you to monitor the output and network utilisation of both hardware and applications, and can also simulate changes without having to alter hardware configurations.
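
To see roughly what such monitors measure, the sketch below counts bytes per destination TCP port for one minute. It assumes the third-party scapy capture library is installed and that the script runs with capture privileges; commercial tools do far more, but the principle is the same.

    from collections import Counter
    from scapy.all import IP, TCP, sniff

    bytes_by_port = Counter()

    def tally(pkt):
        # Attribute each captured packet's size to its destination TCP port
        if IP in pkt and TCP in pkt:
            bytes_by_port[pkt[TCP].dport] += len(pkt)

    sniff(prn=tally, store=False, timeout=60)
    print(bytes_by_port.most_common(10))  # the ten busiest destination ports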

After testing the network infrastructure, you should have an idea of where your problems lie. If the problem is simply a case of too much essential traffic over too little available bandwidth, there are several "quick" fixes to try before getting out the cheque book and upgrading to fibre or gigabit speeds.

The Quick Fix

The most common quick fix is to change how and when IP packets are delivered via the network. Under the common name of packet shaping, these tools allow a degree of control over bandwidth utilisation. Say, for example, a number of users on the network are downloading images or MP3 audio files and swamping the network with non-essential traffic: a packet shaper can impose bandwidth restrictions per user or according to application requirements. Packet shaping hardware such as the Packeteer 1500 and 2500 series sits on the LAN segment that connects each location - a branch office or departmental LAN - to the WAN that makes up the Intranet, and provides a graphical view with slider controls for the bandwidth available to each user session, application or URL.
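
Under the bonnet, shapers of this kind typically rely on some variant of the token-bucket algorithm. The sketch below is a minimal single-user illustration - the rates are invented figures, and a product such as the Packeteer applies the same idea per session, application or URL.

    import time

    class TokenBucket:
        """Tokens accrue at `rate` bytes/sec up to `capacity`; a packet may pass
        only if enough tokens are available, which caps sustained throughput."""
        def __init__(self, rate, capacity):
            self.rate, self.capacity = rate, capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self, nbytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if nbytes <= self.tokens:
                self.tokens -= nbytes
                return True   # forward the packet now
            return False      # over the limit: queue or drop it

    # Invented example: cap one user's downloads at roughly 128kbit/s
    mp3_user = TokenBucket(rate=16_000, capacity=64_000)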

Another quick fix is to implement a QoS scheme across the network. Although Ethernet was initially created without any QoS, the newer 802.1p standard does provide a modicum of service features. The "p" suffix stands for prioritisation: it allows the administrator to tag different types of packet automatically according to priority. However, this is a less effective quick fix than packet shaping, and rival vendors make the terminology and the variations of QoS quite confusing. What prioritisation essentially means is that at busy times, high-priority packets will get to their destination first, though they are not guaranteed to get there within a specific time.
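
The queueing behaviour behind 802.1p-style prioritisation can be illustrated in a few lines. This sketch is an illustration, not a switch implementation: frames carry one of the eight 802.1p priority classes and the highest class present is always served first. Note that nothing bounds how long a low-priority frame waits, which is exactly the "no guaranteed delivery time" caveat above.

    import heapq
    import itertools

    class PriorityScheduler:
        """Strict-priority output queue: 802.1p classes run from 0 (lowest) to 7."""
        def __init__(self):
            self._heap = []
            self._order = itertools.count()  # preserves FIFO order within a class

        def enqueue(self, priority, frame):
            # Negate the priority so the highest class sits at the top of the min-heap
            heapq.heappush(self._heap, (-priority, next(self._order), frame))

        def dequeue(self):
            return heapq.heappop(self._heap)[2] if self._heap else None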

The oldest and most effective method of improving Intranet performance is over-provisioning, especially on the WAN connections that link its sites. A number of vendors are acting as bandwidth brokers and offer flexible plans for upgrading - or downsizing - capacity.

The Major Overhaul

If the quick fixes have failed, you may have to bite the bullet and go for a full upgrade. This is likely to include laying more cable, upgrading backbones and buying new servers.

"Making the decision to upgrade structured cabling is a scary one," says Jon Green, Marketing Director of ITT Industries (Network Services), one of the UK largest providers of structured cabling solutions. "But if you compare it to the front edge of IT where PCs are upgraded every two to three years and switches every three to four years, cabling is usually there for at least a decade."

Though the majority of the cabling installed in businesses is of the Category 5 variety, and as such is happily able to run at 100Mbits/second, the move to gigabit is another question. "Gigabit Ethernet was originally designed for Cat-5, but during its development it was found to create a great deal more crosstalk - this was the reason the Cat-5e [enhanced] standard was developed," Green explains.

With gigabit speeds on the horizon, many companies are pushing Cat-6 over Cat-5e. Green concedes: "I'll sell you a certified Cat-6 solution that will run Gigabit Ethernet, but it would be a very brave person who would choose Cat-6 as the saviour of all problems in the long term. They are both copper solutions and neither has a viable upgrade path to 10gig speeds, even though Cat-6 is more expensive for the customer."

Although there are some vague plans for a 2.5gig standard over Cat-6, these are far from being ratified. Up-and-coming wireless standards offer a potential upgrade path for companies unwilling to go to the expense of laying more cable, but they are still immature and need to increase in capacity before they can challenge fibre.

"Customers need to look at the big picture," comments Green, whose customers include HP and Compaq. "If the five year plan is to only move to gigabit speeds across the Intranet than Cat-5e is fine. If the organisation sees a future with 10gig backbones, than the fibre that is laid today will support future gigabit speeds whereas copper - of any description - will not"

Another major infrastructure change is to improve the efficiency of the switching fabric, and upgrading to advanced Layer 4 and Layer 7 switching is the most popular method. Poor network performance may be due to excessive load on the application servers; by increasing the number of servers and balancing the load according to the priority of the user or application, performance can often be improved. All the major networking players have switching solutions for these areas, under the names of traffic management or content switching.
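
The balancing itself can be as simple as weighted round-robin across the server pool. The sketch below illustrates the idea - the server names and weights are invented, and real content switches layer health checks and session persistence on top.

    import itertools

    def weighted_pool(servers):
        """Cycle through servers in proportion to their weights."""
        expanded = [name for name, weight in servers.items()
                    for _ in range(weight)]
        return itertools.cycle(expanded)

    # Invented example: app1 is twice as powerful, so it takes two of every three sessions
    pool = weighted_pool({"app1.example.local": 2, "app2.example.local": 1})
    next_server = next(pool)  # the server to receive the next session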

If All Else Fails

Network performance management is a complex juggling act. As usage increases, something is bound to drop, and without the proper allocation of budget, solving the problem is difficult. If all else fails, the last option is to get draconian with users who waste your precious bandwidth. Consider the growing number of users surfing the Internet for MP3 files, games and email: think not about the moral implications but about the amount of bandwidth they are consuming. Although it won't make the IT manager popular, software to restrict users from downloading junk, combined with locked-down machines, may ease the burden for a little longer.
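
The restriction software in question is commercial, but its core check is simple to picture. This hypothetical proxy hook (the blocked extensions are arbitrary choices for illustration) denies requests for bandwidth-heavy file types.

    import os
    from urllib.parse import urlparse

    BLOCKED_EXTENSIONS = {".mp3", ".avi", ".exe"}  # arbitrary example list

    def allow_request(url: str) -> bool:
        """Return False for URLs that end in a bandwidth-heavy file type."""
        extension = os.path.splitext(urlparse(url).path)[1].lower()
        return extension not in BLOCKED_EXTENSIONS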

Will Garside
