Even in this age of nearly universal broadband availability, bandwidth remains a premium commodity on WANs. Because available WAN bandwidth is limited, it is important to make the best possible use of it. In this article, I discuss some common WAN optimisation and acceleration technologies.
Application acceleration
Application acceleration is not really a separate WAN optimisation technology but rather a collection of technologies. An application acceleration appliance typically relies on many of the technologies discussed in this article.
A key component of an application acceleration appliance is SSL encryption and compression offloading. Both encryption and compression are very processor-intensive. An application acceleration appliance can offload these tasks from the server, freeing its CPU for other work. Many application accelerators also offer features such as caching and load balancing, which are discussed below.
Bandwidth shaping
Bandwidth shaping, also known as bandwidth limiting or traffic shaping, is a technology that allows you to limit the amount of bandwidth consumed by a user or by an application. There are many bandwidth-shaping products on the market, but bandwidth can typically be throttled based on IP address, MAC address, network subnet, or service type. Some products can also apply limits based on a packet's source and destination addresses.
Bandwidth-shaping products are usually policy driven. Policies normally define the maximum amount of bandwidth that a user or an application can consume. For example, a policy might limit users to a certain amount of bandwidth unless they are streaming voice or video, in which case the limit would not apply.
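Most shaping products enforce such policies with some variant of the token-bucket algorithm: tokens accrue at the permitted rate, and a packet is forwarded only if enough tokens are available. The sketch below is a minimal illustration of that idea, not any particular vendor's implementation; the class and parameter names are my own.

```python
import time

class TokenBucket:
    """Classic token-bucket shaper: tokens accrue at `rate` bytes/sec
    up to `capacity`; a packet may pass only if enough tokens exist."""

    def __init__(self, rate, capacity):
        self.rate = rate            # refill rate in bytes per second
        self.capacity = capacity    # maximum burst size in bytes
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self, packet_size):
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True         # packet conforms to the policy
        return False            # packet must be queued or dropped

# Shape one user's traffic to roughly 128 KB/s with 32 KB bursts.
bucket = TokenBucket(rate=128 * 1024, capacity=32 * 1024)
print(bucket.allow(1500))   # a full-size Ethernet frame fits the initial burst
```

A real shaper would queue non-conforming packets rather than simply refusing them, and would keep one bucket per user, subnet, or service class as the policy dictates.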
Caching
Branch offices have long presented a challenge for IT departments, especially when it comes to centrally hosted applications. Think about it for a moment. If an application is hosted on a single central server, employees in branch offices have to contend with issues such as link failures and latency. If, on the other hand, the application is distributed to servers in each branch office, managing that application becomes a nightmare. Caching provides a sort of happy medium.
Caching allows remote users to access centralised applications as if they were local by caching parts of the file system. One example of an application that could benefit from caching is Exchange Server. Imagine for a moment that someone sends a message with a large attachment to everyone in a branch office. Even though Exchange Server uses single-instance storage for messages, the attachment would have to be downloaded over the WAN link each time someone in the branch office opened the message (assuming that the branch office didn't have its own Exchange Server). If a caching solution were in place, however, the attachment could be cached so that it would not have to be sent repeatedly over the WAN link.
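The economics of the Exchange example above come down to a single property: only the first request for an object crosses the WAN. The toy sketch below illustrates that property; the class and the simulated fetch function are illustrative, not part of any real caching product.

```python
class BranchCache:
    """Toy branch-office cache: the first request for an object crosses
    the WAN; subsequent requests are served from local storage."""

    def __init__(self, fetch_over_wan):
        self.fetch_over_wan = fetch_over_wan  # callable simulating the WAN link
        self.store = {}                       # local object store
        self.wan_fetches = 0                  # how often we crossed the WAN

    def get(self, object_id):
        if object_id not in self.store:
            self.store[object_id] = self.fetch_over_wan(object_id)
            self.wan_fetches += 1
        return self.store[object_id]

# Ten users open the same 5 MB attachment; only one WAN transfer occurs.
cache = BranchCache(lambda oid: b"x" * 5 * 1024 * 1024)
for _ in range(10):
    data = cache.get("attachment-42")
print(cache.wan_fetches)  # 1
```

Real caching appliances add the hard parts this sketch omits: invalidating stale objects, honouring file locks, and deciding which parts of the file system are worth caching at all.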
Data compression
Data compression works by eliminating redundancy from a stream of packets. The compression algorithm looks for patterns in the data stream and replaces redundant information with a small code that can be transmitted more efficiently than the raw data.
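You can see the dependence on redundancy with any general-purpose compressor. The snippet below uses Python's standard zlib (a DEFLATE implementation, chosen here purely for illustration): a highly repetitive payload shrinks dramatically, while random data, which has no redundancy to eliminate, does not shrink at all.

```python
import os
import zlib

# A redundant payload (the same request line repeated) compresses
# dramatically; random data barely changes size.
redundant = b"GET /index.html HTTP/1.1\r\n" * 400
random_data = os.urandom(len(redundant))

print(len(redundant), len(zlib.compress(redundant)))
print(len(random_data), len(zlib.compress(random_data)))
```

This is also why Layer 2 payload compression (discussed below) should not be applied to traffic that is already compressed: a second pass finds no remaining redundancy and only adds overhead.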
Compression typically takes place at the router and can be hardware or software based. If compression is software based, the router's CPU applies the compression algorithm to outbound packets. If it is hardware based, the compression-related tasks are offloaded to a dedicated chip so that the router's CPU is not burdened by the overhead. At first, it may sound as if hardware-based compression is clearly more efficient, but as long as the router's CPU can handle the task without reaching 100% utilisation, the difference between hardware- and software-based compression is usually negligible.
There are many types of compression in use today, but not all compression algorithms are suitable for all media. In addition, the chosen compression algorithm must be supported by both the sending and the receiving router.
Two of the most common types of compression are Layer 2 payload compression and TCP/IP header compression. Layer 2 payload compression works by compressing the payload of a Layer 2 protocol (such as HDLC, PPP or X.25). The Layer 2 header itself isn't compressed, but the contents of the payload are. Layer 2 payload compression can be used on any type of traffic provided that it has not already been compressed. This type of compression is most suitable for WAN links ranging from 56 Kbps to 1.544 Mbps.
TCP/IP header compression is a scheme designed for slow WAN links (32 Kbps or less) carrying small packets. In this scheme, both the sender and the recipient store a copy of the initial TCP/IP packet header. Subsequent packets are then stripped of all redundant header information, leaving only the fields unique to that packet. Typically, a 40-byte packet header can be reduced to about 5 bytes.
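The underlying idea, delta encoding against a stored context, can be sketched in a few lines. This is a deliberate simplification of Van Jacobson-style header compression: the field names and dictionary representation are illustrative only, and a real implementation works on packed binary headers and encodes the deltas far more compactly.

```python
# Simplified delta-encoding sketch: after the first full header is stored
# on both ends, only the fields that changed need to be transmitted.
first_header = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 4000,
                "dport": 80, "seq": 1000, "ack": 500, "window": 8192}

def compress_header(prev, current):
    """Return only the fields that differ from the previous header."""
    return {k: v for k, v in current.items() if prev[k] != v}

def decompress_header(prev, delta):
    """Rebuild the full header from the stored context plus the delta."""
    return {**prev, **delta}

# The next packet in the flow differs only in its sequence number.
next_header = {**first_header, "seq": 2460}
delta = compress_header(first_header, next_header)
print(delta)  # {'seq': 2460}
assert decompress_header(first_header, delta) == next_header
```

Because source and destination addresses, ports, and most flags never change within a flow, the transmitted delta is tiny, which is where the 40-bytes-to-5-bytes reduction comes from.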
Load balancing
Load balancing is a technology designed to distribute traffic evenly across multiple network links in order to avoid saturating any one WAN link. An even distribution of traffic across the available links ensures that each link is used as efficiently as possible.
Load balancing is categorised as either inbound or outbound. Inbound load balancing greatly benefits organisations that receive a lot of requests from the Internet, such as organisations that host their own Web or mail servers. Outbound load balancing is of greater benefit to users who need to upload large files or to servers that send out large quantities of email.
An added benefit of load balancing is fault tolerance. If one of the links fails, traffic can continue to flow over the remaining links until the failed link is restored.
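A minimal outbound balancer can be sketched as a round-robin selector that skips links marked as down, which demonstrates both the even distribution and the failover behaviour described above. The class and link names are illustrative; real products also weight links by capacity and probe link health automatically.

```python
from itertools import cycle

class LinkBalancer:
    """Round-robin outbound balancer with simple failover:
    a failed link is skipped until it is marked up again."""

    def __init__(self, links):
        self.up = {link: True for link in links}   # link health table
        self._rr = cycle(links)                    # round-robin iterator

    def mark_down(self, link):
        self.up[link] = False

    def mark_up(self, link):
        self.up[link] = True

    def pick(self):
        # Try each link at most once per call, skipping failed ones.
        for _ in range(len(self.up)):
            link = next(self._rr)
            if self.up[link]:
                return link
        raise RuntimeError("all WAN links are down")

balancer = LinkBalancer(["wan0", "wan1"])
print(balancer.pick(), balancer.pick())  # wan0 wan1 (even distribution)
balancer.mark_down("wan1")
print(balancer.pick(), balancer.pick())  # wan0 wan0 (traffic keeps flowing)
```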
Quality of Service
Quality of Service (QoS) is similar to bandwidth shaping in that it is a mechanism for regulating network bandwidth. Whereas bandwidth shaping is mostly used to limit bandwidth usage, though, QoS allows a user or application to reserve bandwidth so that it is guaranteed to be available when needed.
Normal network links offer only best-effort packet delivery. That means that if the network link is congested with traffic, packets will probably be slow to reach their destinations. With QoS, you can define bandwidth requirements for users or applications. These bandwidth requirements can be defined in either absolute or relative terms (10 Mbps or 10% of the total available bandwidth, for example).
To see how QoS can benefit an organisation, imagine you have a voice over IP (VoIP) application that requires a minimum sustained throughput of 1 Mbps. If that were the case, you could create a policy that reserved 1 Mbps of bandwidth for the VoIP application.
Of course, the VoIP application probably isn't being used all the time. When the application isn't being used, other applications are free to use the bandwidth that has been reserved for the VoIP application. When the VoIP application attempts to communicate across the link, though, all other communications are throttled back in order to give the VoIP application the amount of bandwidth that has been reserved for it.
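The "throttle everyone else when the reserved application speaks" behaviour is what a priority queue gives you. The sketch below shows strict-priority scheduling, one of the simpler mechanisms QoS implementations use; class names are illustrative, and real schedulers usually combine priority queues with per-class rate limits so low-priority traffic cannot be starved indefinitely.

```python
import heapq

class PriorityScheduler:
    """Strict-priority queue: lower number = higher priority. Traffic in a
    reserved class (e.g. VoIP) is always dequeued before best-effort traffic."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves FIFO order within a class

    def enqueue(self, priority, packet):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.enqueue(1, "bulk-transfer-1")   # best-effort traffic
sched.enqueue(1, "bulk-transfer-2")
sched.enqueue(0, "voip-frame")        # arrives last but leaves first
print(sched.dequeue())  # voip-frame
```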
You can control the following network properties for a network that supports QoS:
- Throughput (total bandwidth reserved)
- Packet loss and retransmission
Route optimisation
Route optimisation, more commonly referred to as smart routing or route control, is an optimisation technology for multi-homed networks. In a multi-homed network, rather than relying on a single WAN connection, a company maintains multiple WAN connections that serve the same purpose. For example, rather than having a single WAN connection to an ISP, a company might have several connections to one ISP or connections to multiple ISPs. Smart routing, or route optimisation, comes into play when the company actively controls how traffic is distributed among those links.
When an enterprise has a multi-homed Internet connection, it usually relies on BGP (Border Gateway Protocol) to make Internet routing decisions. The problem is that BGP can detect a total route failure but cannot detect a brownout (a link that is degraded but still nominally up), which is much more common than a complete failure. As a result, users' packets may continue to be routed over the degraded link, which inevitably leads to complaints about poor Internet performance.
Route optimisation is a technology that monitors all of the available external links in real time and routes packets accordingly. Typically, a route optimisation solution would look at such things as the link's latency, stability, performance and cost when making routing decisions.
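Conceptually, such a solution scores each link from its live measurements and sends traffic over the best-scoring one. The sketch below illustrates the idea with made-up metrics and weights; real products use far richer measurements and policy-configurable weighting.

```python
def score(link):
    """Lower is better: weight latency, packet loss and cost.
    The weights here are purely illustrative, not from any product."""
    return 1.0 * link["latency_ms"] + 500.0 * link["loss"] + 2.0 * link["cost"]

links = [
    {"name": "isp-a", "latency_ms": 40, "loss": 0.00, "cost": 10},
    {"name": "isp-b", "latency_ms": 25, "loss": 0.08, "cost": 5},  # brownout: 8% loss
]

# BGP alone would still consider isp-b "up"; metric-based scoring
# routes new traffic around the brownout instead.
best = min(links, key=score)
print(best["name"])  # isp-a
```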
As you can see, there are many technologies that can be used to make WAN communications more efficient. Of course, not every WAN optimisation technology is suitable for every WAN link. The trick is figuring out which technology will work best for your particular situation.
About the author: Brien M. Posey, MCSE, is a Microsoft Most Valuable Professional for his work with Windows 2000 Server and IIS. Brien has served as CIO for a nationwide chain of hospitals and was once in charge of IT security for Fort Knox. As a freelance technical writer, he has written for Microsoft, CNET, ZDNet, TechTarget, MSD2D, Relevant Technologies and other technology companies. You can visit Brien's personal Web site at www.brienposey.com.