
What’s slowing down your network and how to fix it

When using the network feels like wading through molasses, finding the cause can be a difficult process


It is all too easy to assume that when the network becomes increasingly sluggish, an infrastructure upgrade is needed to maintain speeds acceptable to your users.

More often than not, the problem is not that your users spend too much time downloading cat videos; it is more likely that there are serious bottlenecks in the network that can and should be dealt with before bringing out the chequebook for new equipment.

Consistent slowness in the network is difficult to pinpoint, and sometimes more than one problem may be occurring at the same time, so it is best to start by looking at a few likely suspects.

The all-too-obvious answer is to blame bandwidth, but on investigation the problem rarely lies within the LAN, where bandwidth is plentiful. More likely, it lies within the WAN, where capacity is more finite and expensive.

Problems with slow networks in a WAN environment are more likely to result from not employing quality-of-service software, according to Jason Peach, principal consultant at Networks.

“Rather than throwing more bandwidth at the problem, using more intelligent analysis to optimise bandwidth is often a better way to resolve bandwidth contention – a problem in any network scenario, whether LAN, WAN or WLAN,” he says.

End-to-end latency (the delay a packet experiences on its journey from the PC to the server) and any errors causing retransmissions on the network will also degrade application performance and slow the network.
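As a rough way to see that delay from the user's side, the round trip an application experiences can be sampled by timing a TCP connection to the server. The sketch below is only a minimal illustration, with a placeholder hostname and port:

```python
# Minimal sketch: estimate end-to-end latency by timing TCP connects from a
# client to a server. The hostname and port below are placeholders.
import socket
import time

def tcp_connect_latency(host: str, port: int, samples: int = 10) -> list:
    """Return connect times in milliseconds for a number of attempts."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        results.append((time.perf_counter() - start) * 1000)
    return results

if __name__ == "__main__":
    times = tcp_connect_latency("app-server.example.com", 443)
    print(f"min/avg/max: {min(times):.1f} / {sum(times)/len(times):.1f} / {max(times):.1f} ms")
```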


“An incorrectly configured network or network port can also affect throughput efficiency of a particular network path, affecting users,” says Peach.

Don Thomas Jacob, head “geek” at SolarWinds, says a robust network-monitoring tool will check the health and status of network devices. “When monitoring your routers and switches with simple network management protocol (SNMP), you gain visibility into route flaps, packet loss, an increase in round-trip time (RTT) and latency,” he says. “It also provides lots of other useful information, such as letting you know if the device CPU or memory utilisation is too high.”
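As a minimal illustration of that kind of polling, the sketch below queries a router's uptime and an interface error counter over SNMPv2c using the pysnmp library (4.x-style high-level API). The device address, community string and interface index are placeholders; a rising error counter between polls points at a problem link.

```python
# Hedged sketch: poll a router's uptime and an IF-MIB error counter over SNMP
# using pysnmp's high-level API. Device address, community and ifIndex are
# placeholders for illustration only.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

def snmp_get(host: str, community: str, oid: str):
    error_indication, error_status, _, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData(community, mpModel=1),              # SNMPv2c
               UdpTransportTarget((host, 161), timeout=2, retries=1),
               ContextData(),
               ObjectType(ObjectIdentity(oid)))
    )
    if error_indication or error_status:
        raise RuntimeError(error_indication or error_status.prettyPrint())
    return var_binds[0][1]

if __name__ == "__main__":
    device = "192.0.2.1"                                         # placeholder router address
    print("sysUpTime:", snmp_get(device, "public", "1.3.6.1.2.1.1.3.0"))
    print("ifInErrors.1:", snmp_get(device, "public", "1.3.6.1.2.1.2.2.1.14.1"))
```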

Packet loss

James Meek, lead delivery solutions manager at T-Systems, says poor network performance is characterised by packet loss, which can be measured in a number of ways using different tools. “Having determined that packet loss is occurring, it is necessary to understand whether this is due to a lack of buffering when traffic bursts occur, a poor queuing strategy or a lack of bandwidth,” he says.
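One of the simplest of those measurements is a burst of ICMP echoes. The sketch below shells out to a Linux-style ping and parses the loss percentage from its summary line; the target host is a placeholder.

```python
# Minimal sketch: estimate packet loss towards a host by sending a burst of
# ICMP echoes via the system ping command and parsing its summary output.
# Assumes a Linux-style ping; the target hostname is a placeholder.
import re
import subprocess

def packet_loss(host: str, count: int = 20) -> float:
    """Return the packet loss percentage reported by ping."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True
    ).stdout
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    return float(match.group(1)) if match else 100.0

if __name__ == "__main__":
    loss = packet_loss("wan-gateway.example.com")
    print(f"observed loss: {loss:.1f}%")
```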

On multiprotocol label switching (MPLS) networks, it is possible to prioritise important traffic, such as voice and video, over less important traffic, such as web browsing and network backups. However, Meek says that even with IP QoS, it is still necessary to provision sufficient network bandwidth to avoid congestion.
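As a rough illustration of how that prioritisation is signalled, an application can mark its packets with a differentiated services code point (DSCP) so routers with matching QoS policies queue them ahead of bulk traffic. The sketch below is a minimal example only, assuming a Linux host and a placeholder destination:

```python
# Hedged sketch: mark outgoing UDP packets with DSCP EF (46), the code point
# typically used for voice, so that routers configured with QoS policies can
# place them in a priority queue. Setting the IP TOS byte this way works on
# Linux; the destination address and payload are placeholders.
import socket

DSCP_EF = 46                  # Expedited Forwarding
TOS_EF = DSCP_EF << 2         # DSCP occupies the top six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
sock.sendto(b"rtp-payload-placeholder", ("198.51.100.10", 4000))
sock.close()
```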

When networks collide

Internet connections are increasingly becoming performance bottlenecks for organisations, mainly because of bandwidth that is not controlled properly, particularly with the increase in video streaming.

Peach says: “Given that most organisations have traditionally employed centrally located ISP services and firewall boundaries, perhaps a more distributed set of branch-level internet connections would be a possible solution for improving internet access generally for all users.”


With more branch-level internet connections, internet traffic itself does not need to go directly across the corporate WAN, competing with corporate traffic on the same MPLS link, says Peach.

“However, this throws up the problem of managing all the different network connections and ensuring that the right amount of firewalls, email and web security is in place.”

This approach lends itself better to hosted cloud services for user device-based web and email filtering, says Peach. “Even though it is expensive, a cloud-based web-filtering solution could work out more cost-effective than simply employing additional corporate-grade WAN bandwidth to tackle this problem.”

Another common bottleneck is between the wireless LAN controllers and the core network. “This issue is likely to become more prevalent as customers move towards the latest 802.11ac wireless network implementations,” says Peach.

Bad planning

Mitch Auster, senior director at Ciena Agility, says most network bottlenecks come about because of ineffective planning and forecasting, and that this is a particular challenge when a wide variety of specialised, high-touch (Layer 3 to 7) equipment is deployed deep in the metro network.

“Since this equipment is expensive, operators try not to over-provision it,” he says. “However, as users become more mobile and services become more on-demand, the likelihood of spikes in resource demand and bottlenecks grows.”

From a network design standpoint, it is recommended to locate these functions in a small number of larger, metro or regionally centralised datacentres, and to use low-cost-per-bit, efficient and easily reconfigurable packet-optical transport to aggregate and express the traffic between end-users and these content centres.


“This enables economies of scale and the L3-7 resource pool can be sized for the aggregate peak demand, as opposed to each far-flung L3-7 device having to be individually sized for the local peak demand,” adds Auster.
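A quick back-of-the-envelope calculation illustrates the point. The figures below are invented, but they show why the peak of pooled demand is lower than the sum of each site's individual peak:

```python
# Illustrative arithmetic only (assumed demand figures): per-site peaks versus
# the peak of the aggregate. Because branch sites rarely peak at the same time,
# a centralised L3-7 pool sized for the aggregate peak can be smaller than the
# sum of individually sized devices.
import random

random.seed(1)
hours, sites = 24, 10
# Hypothetical hourly demand per site, in Gbps
demand = [[random.uniform(0.2, 2.0) for _ in range(hours)] for _ in range(sites)]

sum_of_site_peaks = sum(max(site) for site in demand)            # size each site alone
aggregate = [sum(site[h] for site in demand) for h in range(hours)]
peak_of_aggregate = max(aggregate)                               # size the shared pool

print(f"sum of per-site peaks: {sum_of_site_peaks:.1f} Gbps")
print(f"peak of pooled demand: {peak_of_aggregate:.1f} Gbps")
```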

If bad planning can cause bottlenecks, then perhaps it is up to network architects to design out such issues.

Matt Walmsley, senior manager at Emulex, says that during architecting cycles, it is prudent to plan for increasing traffic levels and a growing diversity of applications and services.

“This will inform the selection of high-speed intra-network node links (for example, 10GbE, 40GbE, 100GbE) and server-network links (10GbE, 40GbE), or at least allow for their later inclusion as bandwidth upgrades,” he says.

Walmsley says traditional network architectures can also take advantage of various technologies to build high-performance and load-sharing (or even load-balanced) links that provide reliable, high-performance routing of traffic and help to shape and prioritise different traffic types over the network.
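As a simplified illustration of how such load-sharing links avoid reordering packets, aggregated links typically hash each flow's five-tuple onto one member link, so a given conversation stays on one path while different flows spread across the bundle. The sketch below uses an ordinary software hash purely for illustration; real switches use hardware hash functions.

```python
# Hedged sketch of the hashing idea behind load-sharing aggregated links:
# hash each flow's five-tuple onto one member link so packets of a flow stay
# in order while flows spread across the bundle. Flow values are placeholders.
import hashlib

def pick_member_link(src_ip, dst_ip, proto, src_port, dst_port, n_links):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % n_links

flows = [
    ("10.1.1.5", "10.2.2.10", "tcp", 51512, 443),
    ("10.1.1.5", "10.2.2.11", "tcp", 51513, 443),
    ("10.1.1.6", "10.2.2.10", "udp", 5004, 5004),
]
for flow in flows:
    print(flow, "-> link", pick_member_link(*flow, n_links=4))
```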

Does virtualising the network help or hinder?

Stuart Greenslade, EU networking sales director at Avaya, says that for anyone running legacy networks, it might be time to consider virtualisation, SDN or fabric technology, which will give the network much greater simplicity and flexibility and thus reduce the likelihood of network bottlenecks.

“For example, many network professionals virtualise the network to create fewer operational overheads and add more functionality, which is a win-win for the company and its users,” he says. “Cutting the number of moving parts in the network enables service agility and allows the opportunity for a faster response time for applications.”

But SDN, network functions virtualisation (NFV) and the general move to virtualised infrastructure may hinder efforts to combat network bottlenecks. Virtualisation has led to distributed applications in a datacentre, creating more east-west traffic. 


Transactions that were previously handled by a single host may now be split across multiple hosts in a datacentre, or distributed between an on-premises host and a host located at a different site or a cloud provider.

Trevor Dearing, EMEA marketing director at Gigamon, says: “For accurate monitoring of application performance, more visibility is required into the various interactions between hosts. These interactions can lead to more traffic duplicates as switched network packets are captured from different locations in the network.”
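One common way to deal with those duplicates is to hash fields of each captured packet that do not change in transit and keep only the first copy seen. The sketch below illustrates the idea on placeholder packet records rather than real captures:

```python
# Hedged sketch: when the same packet is captured at several tap points, the
# monitoring feed contains duplicates. Key each packet on bytes that do not
# change in transit and keep only the first copy. Captures are placeholders.
import hashlib

def deduplicate(packets):
    seen, unique = set(), []
    for pkt in packets:
        digest = hashlib.md5(pkt).hexdigest()   # key on invariant packet bytes
        if digest not in seen:
            seen.add(digest)
            unique.append(pkt)
    return unique

captures = [b"pkt-A", b"pkt-B", b"pkt-A", b"pkt-C", b"pkt-B"]   # placeholder captures
print(f"{len(captures)} captured, {len(deduplicate(captures))} unique")
```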

Dearing says the use of new network virtualisation or SDN within a datacentre to create logical networks means issues could be hidden behind new encapsulations that the operational tools cannot decode.

Walmsley remains upbeat about SDN and NFV, which he says will underpin self-optimising network infrastructure that can integrate and respond to applications in real time, rather than just being a predictable, fast but relatively separate data transport.

But he warns that such changes bring some complexity, and the requirement to monitor and understand the “who, what, where, why and how?” of the network and applications will only increase the need for comprehensive, application-aware network monitoring and historical data capture.

The cloud clouds issues

Networks’ Peach warns that moves into the cloud could make the job of pinpointing network bottlenecks more difficult, as network managers start to lose visibility into the network.

“It is much harder to diagnose and remedy any performance bottlenecks that may exist on a cloud provider’s infrastructure,” he says.

Peach says customers with hybrid cloud deployments tend to find it hard to determine whether an application performance issue is a network performance problem of their own or of their cloud provider. “This can become a point of contention as it is difficult to prove matters without visibility,” he says. “This is definitely an area of woe for IT professionals in general.”

All hail the self-optimising network

Network bottlenecks could become a thing of the past as embedded automation becomes part of the network. Joe Raccuglia, director of network solutions marketing at Alcatel-Lucent Enterprise, says this automation would provide more self-configuration, self-attachment, automated reconfiguration with adds, moves and changes of not just applications, but servers and other devices connecting into the network. 

“This technology will inevitably move from the datacentre environment out towards the enterprise campus network,” he says.
