The key to making the most of wide area network (WAN) links lies in grooming and reducing the traffic that travels across them, and in avoiding as many potential sources of delay as possible. WAN accelerators are typically sold as pairs or collections of appliances or devices intended to sit outside a router or firewall between the network boundary and an actual WAN link. As you ponder possible choices for WAN bandwidth optimisation tools, you'll want to evaluate WAN optimisation controllers by how many tools and techniques they support and how well they implement them. Ultimately, the best way to determine what works for you is to test such equipment, which is why you should capture and use a sample traffic load that is characteristic of your network to provide a basis for comparison during tests.
The following WAN bandwidth optimisation tools and techniques play important roles in enabling WAN optimisation appliances to move the most traffic and achieve the highest throughput across an organisation's WAN links:
- Protocol substitution or protocol proxy
- Hardware compression
- Compression/symbol dictionaries (aka deduplication)
- Object caching
- Traffic shaping and management
- Traffic prioritisation and grooming
- Forward error correction
Each of these WAN bandwidth optimisation techniques is described in order below.
Protocol substitution or protocol proxy
Any chatty protocol (a protocol that involves lots of back-and-forth messaging between peers, or clients and servers) typically doesn't behave well when extended across wide-area links. Where outright protocol substitution isn't feasible, many WAN bandwidth optimisation devices terminate protocol connections for things such as CIFS (Common Internet File System) locally, then substitute another, more streamlined protocol to encapsulate key traffic elements across wide-area links. In action, a 30 MB file transfer may take as long as seven minutes across a WAN link using CIFS, but that delay can be reduced to under a minute using Riverbed's wide-area file services (WAFS) instead.
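A back-of-the-envelope model shows why chattiness hurts so much: a protocol that pauses for a round trip on every small block pays the WAN's latency thousands of times, while a streamlined protocol pays it once. All numbers below (50 ms round trip, 10 Mbit/s link, 4 KB blocks) are illustrative assumptions, not vendor measurements.

```python
# Rough model of transfer time: per-block round-trip delays plus the
# time to serialise the bits onto the link. Illustrative numbers only.

def transfer_time_s(file_bytes, block_bytes, rtt_s, bandwidth_bps):
    blocks = -(-file_bytes // block_bytes)        # ceiling division
    latency = blocks * rtt_s                      # one round trip per block
    serialisation = file_bytes * 8 / bandwidth_bps
    return latency + serialisation

FILE = 30 * 1024 * 1024       # 30 MB file, as in the example above
RTT = 0.05                    # 50 ms WAN round trip (assumed)
BW = 10_000_000               # 10 Mbit/s link (assumed)

chatty = transfer_time_s(FILE, 4096, RTT, BW)     # small, CIFS-style blocks
lean = transfer_time_s(FILE, FILE, RTT, BW)       # one streamlined exchange
print(f"chatty: {chatty:.0f} s, streamlined: {lean:.0f} s")
```

Under these assumed numbers the chatty transfer takes roughly seven minutes and the streamlined one under half a minute, consistent with the figures quoted above — almost all of the difference is round-trip latency, not bandwidth.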
Hardware compression
In general, compression refers to any mathematical pattern-analysis and bit- or string-substitution technique that analyzes traffic contents and replaces longer patterns or strings with shorter ones. Usually, these are obtained by applying various encoding techniques that seek to eliminate repetition in data blocks and replace repeated elements with short, symbolic pointers to the original content, reducing the volume of data that transits a WAN link. This kind of volume reduction runs fastest when handled in device hardware, which is why hardware compression is generally considered mandatory in state-of-the-art WAN optimisation devices.
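The principle is easy to demonstrate in software (appliances do the same work in dedicated silicon). Repetitive traffic, such as near-identical protocol headers, compresses dramatically; the sample payload below is an assumption for illustration.

```python
import zlib

# Illustrative only: WAN appliances compress in hardware, but the
# principle is identical. Repeated patterns shrink to a tiny fraction.
payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 200
compressed = zlib.compress(payload, level=6)
ratio = len(compressed) / len(payload)
print(f"{len(payload)} bytes -> {len(compressed)} bytes ({ratio:.1%})")
```

Real traffic mixes compress far less than this contrived repetition, but anything with recurring structure benefits.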
Compression/symbol dictionaries (aka deduplication)
Compression dictionaries are collections of arbitrarily long strings (or even entire files) that appliances on each end of a WAN link need to exchange only once, after which each string may be associated with a short, unique symbol, typically 64 to 256 bits in length. Once the dictionaries for a pair of devices are synchronized, repeated patterns or content detected in outgoing traffic are replaced with the unique symbols that reference the original, uncompressed information in the dictionary, then sent across the WAN link. The receiving device then replaces each symbol it recognizes in incoming traffic with its copy of the original information to restore the content to its original form.
This WAN bandwidth optimisation technique eliminates the need to send duplicate strings or files across a WAN link, which is why it's often called deduplication.
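A toy sketch of the shared-dictionary idea follows. Real appliances use variable-length chunking and 64- to 256-bit symbols as described above; the fixed 64-byte chunks and 128-bit BLAKE2 digests here are assumptions made to keep the example short.

```python
import hashlib

# Toy deduplication with a shared symbol dictionary: a chunk travels in
# full only the first time; repeats travel as a 16-byte symbol.
CHUNK = 64

def encode(data, sender_dict):
    """Replace chunks already in the dictionary with their symbols."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        sym = hashlib.blake2b(chunk, digest_size=16).digest()
        if sym in sender_dict:
            out.append(("ref", sym))             # repeat: symbol only
        else:
            sender_dict[sym] = chunk
            out.append(("raw", sym, chunk))      # first sighting: full chunk
    return out

def decode(stream, receiver_dict):
    """Expand symbols back into original chunks on the far end."""
    data = b""
    for item in stream:
        if item[0] == "raw":
            receiver_dict[item[1]] = item[2]
            data += item[2]
        else:
            data += receiver_dict[item[1]]
    return data

sender, receiver = {}, {}
message = b"A" * 64 + b"B" * 64 + b"A" * 64    # one repeated 64-byte block
wire = encode(message, sender)
refs = sum(1 for item in wire if item[0] == "ref")
print(f"{refs} of {len(wire)} chunks sent as symbols")
print(decode(wire, receiver) == message)
```

The repeated block crosses the link as a 16-byte symbol instead of 64 bytes of data; with file-sized dictionary entries the savings become enormous.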
Object caching
Object caching involves exchanging and managing stored collections of software objects between pairs of devices and represents another way to implement shared compression and symbol dictionaries. In addition, this approach generally associates some kind of refresh interval or session timeout/age-out information with objects in the cache to force them to be refreshed whenever such intervals expire.
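The age-out behaviour can be sketched as a cache whose entries expire after a time-to-live interval, forcing a refresh on the next request. The TTL value and the UNC-style key below are illustrative assumptions.

```python
import time

# Minimal sketch of an object cache with an age-out interval, as
# described above. TTL and cache keys are illustrative assumptions.
class ObjectCache:
    def __init__(self, ttl_s):
        self.ttl_s = ttl_s
        self._store = {}

    def put(self, key, obj):
        self._store[key] = (obj, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        obj, stored_at = entry
        if time.monotonic() - stored_at > self.ttl_s:
            del self._store[key]        # expired: force a refresh
            return None
        return obj

cache = ObjectCache(ttl_s=0.05)
cache.put(r"\\server\share\report.doc", b"file contents")
fresh = cache.get(r"\\server\share\report.doc") is not None    # hit
time.sleep(0.06)
expired = cache.get(r"\\server\share\report.doc") is None      # aged out
print(fresh, expired)
```

A production cache would also bound its size and refresh entries proactively, but the expiry check above is the essence of the age-out interval described in the text.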
Traffic shaping and management
WAN optimisation devices can apply all kinds of traffic shaping and management techniques to speed time- or latency-sensitive packets on their way while relegating time- or latency-insensitive packets to available bandwidth that might otherwise go unused over time. When traffic shaping is applied to a set of packets (which is usually called a flow or a stream) it imposes additional delays on some packets so that they conform to a predefined set of constraints called a traffic contract or a traffic profile. This lets WAN devices control the volume of traffic sent across a link over a specific period (known as bandwidth throttling) or the maximum rate at which traffic may transit the link (known as rate limiting). Sometimes, more complex regimes may also be applied, such as the generic cell rate algorithm used to shape traffic on ATM networks.
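One classic way to enforce such a traffic profile is a token bucket: packets that conform to the contracted rate pass immediately, while non-conforming packets are delayed until enough tokens accumulate. This is a minimal sketch, and the rate and burst figures are assumed for illustration.

```python
# Toy token-bucket shaper: conforming packets pass at once; the rest
# are delayed until the bucket refills. Rates and sizes are assumed.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # token refill rate, bytes/second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def delay_for(self, packet_bytes, now):
        """Return how long this packet must wait to conform to the profile."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return 0.0
        deficit = packet_bytes - self.tokens
        self.tokens = 0.0
        return deficit / self.rate      # wait for the deficit to refill

bucket = TokenBucket(rate_bps=8_000, burst_bytes=1500)  # 1 kB/s, 1-packet burst
d1 = bucket.delay_for(1500, now=0.0)    # within the burst: no delay
d2 = bucket.delay_for(1500, now=0.0)    # bucket empty: must wait 1.5 s
print(d1, d2)
```

Bandwidth throttling and rate limiting both fall out of this structure: the refill rate caps average throughput, while the bucket capacity caps the allowable burst.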
Traffic prioritisation and grooming
Some traffic needs to move faster than other traffic or, at least, be subject to minimal or predefined ceilings on latency. Prioritisation essentially pushes such traffic to the head of all the queues under its control and helps speed those packets on their way. This is a natural consequence of quality of service (QoS) regimes or of service-level agreement (SLA) guarantees for latency, throughput, response time and so forth. WAN optimisation devices play key roles in helping to define, monitor and manage QoS and other priority schemes.
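"Pushing traffic to the head of the queue" can be sketched with a priority queue: lower-numbered classes always dequeue first, so a voice packet jumps ahead of queued bulk transfers. The three traffic classes and their priority values are assumptions for illustration.

```python
import heapq

# Sketch of strict prioritisation: lower priority number dequeues
# first. Traffic classes and priority values are assumed.
PRIORITY = {"voice": 0, "interactive": 1, "bulk": 2}

queue, seq = [], 0

def enqueue(traffic_class, packet):
    global seq
    heapq.heappush(queue, (PRIORITY[traffic_class], seq, packet))
    seq += 1            # tiebreaker preserves FIFO order within a class

def dequeue():
    return heapq.heappop(queue)[2]

enqueue("bulk", "backup-segment-1")
enqueue("voice", "rtp-frame-1")
enqueue("bulk", "backup-segment-2")
first = dequeue()
print(first)            # the voice frame goes to the head of the queue
```

Real schedulers usually temper strict priority with weighted fairness so bulk traffic is never starved outright, but the queue-jumping behaviour is the core idea.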
Traffic grooming ensures not only that bandwidth is subject to priority but that unwanted or potentially dangerous protocols are either blocked from accessing a WAN link or limited to exceedingly small bandwidth allocations. Think about various peer-to-peer protocols that don't have legitimate business uses or various kinds of streaming multimedia protocols for watching movies or videos that have no normal place at work. Traffic grooming can prevent such protocols from consuming precious bandwidth. Many experts believe that minuscule allocations for such protocols are preferable to blocking them outright, because permitting that traffic to trickle through makes it possible to trace it back to its senders and receivers.
Forward error correction
Forward error correction (FEC) is a method of error control in data transmission in which the transmitter sends redundant data and the receiver uses that redundancy to detect and repair damaged or missing data without requesting retransmission. Strict latency requirements for some kinds of packets, especially those used for voice, video or multimedia communications, require packets that age beyond a certain threshold to be discarded. WAN optimisation devices can add error-correction bits to all such packets without imposing excessive overhead on this kind of traffic; that data can then be used to reconstruct discarded packets on the receiving end of an appliance pair. This helps control jitter and keeps streaming communications and voice traffic smoother and more intelligible, even when errors are corrected at the tail end of a set of packet transfers. As long as enough traffic gets through to permit error correction to work, the resulting traffic will be smooth enough to deliver a satisfactory user experience.
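The simplest form of this idea is an XOR parity packet: for each group of packets, the sender transmits one extra packet that is the XOR of the group, and the receiver can rebuild any single lost packet from the survivors. This is a toy sketch; the group size and packet contents are assumed, and real FEC schemes use stronger codes than plain parity.

```python
# Toy forward error correction: one XOR parity packet per group lets
# the receiver rebuild a single lost packet without a retransmit.
# Group size and packet contents are illustrative assumptions.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return parity

def recover(received, parity):
    """Rebuild the single missing packet (None) from the parity packet."""
    rebuilt = parity
    for p in received:
        if p is not None:
            rebuilt = xor_bytes(rebuilt, p)
    return rebuilt

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(group)              # sent alongside the group
damaged = [b"pkt1", None, b"pkt3"]       # one packet lost or discarded
print(recover(damaged, parity))          # the lost packet, rebuilt
```

The trade-off is visible even in the toy: one extra packet per group of three is about 33% overhead, which is why production schemes tune group size (or use Reed-Solomon-style codes) to match observed loss rates.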
What IT managers should know about WAN bandwidth optimisation techniques
When a WAN device combines multiple optimisation tools and techniques to maximize wide area bandwidth and throughput and minimize latency and data loss, an organisation can make more and better use of its WAN links and can often accommodate growth across existing links without having to acquire additional bandwidth capacity. The payback is lower recurring communications costs, which usually more than offset the costs involved in acquiring, deploying and maintaining this kind of optimisation hardware.
About the author:
Ed Tittel is a writer who's contributed to over 130 computer books, plus hundreds to thousands of Web and magazine articles. He is also the technology editor for Certification magazine, serves as editor-in-chief for NetPerformance.com, and writes tips and handles Q&A for numerous TechTarget Web sites. He was series editor for Exam Cram from 1997 until 2005, and is the recipient of the 2004 Networking Professional Association Career Achievement Award. He was also named Runner-up for the "Favorite Study Guide Author" category in the CertCities.com "Readers Choice Awards" from 2002 through 2005.