Controlling WAN Bandwidth: Application Acceleration

Richard Chirgwin explains the source, operations and applications of WAN acceleration.

Here's what happened: over time, corporate network buyers settled on TCP/IP-based services as the solution to problems of network complexity and convergence. By creating a single environment able to collapse the entire enterprise traffic stream onto one routed network, businesses vastly simplified their telecommunications purchases. Instead of complex calculations trading off bandwidth, QoS and cost, smart businesses purchased big TCP/IP pipes to carry all their traffic.

Here's what happened next: those same businesses discovered that what they liked about complex networks was that their applications worked well. Vanilla TCP/IP works fine for e-mail and browsing, but poses problems for time-sensitive traffic like VoIP, Citrix, and mainframe connections.

In other words, companies that saved on expensive QoS-managed carrier links find themselves in need of a way to replicate the world of the carrier: protecting the bandwidth given to their important applications without infinitely expanding the pipes carrying the traffic.

That need has been filled by two classes of product which themselves are undergoing a slow convergence: bandwidth management systems, and application acceleration systems.

In this article, TechTarget concentrates on application accelerators – but it's worth knowing how they differ from bandwidth management products.

What's the Difference?

“There are two camps in WAN optimisation,” explains Greg Bunt, director of advanced technologies for Juniper Asia Pacific. “To take the Gartner and Forrester view, there are controllers on the one hand and accelerators on the other.”

The best way to understand the difference is to take the simplest case for both devices. The bandwidth manager, historically, works on the pipes: it looks at IP packets as they pass from the LAN to the WAN, and if the WAN is congested, it either buffers or drops the packets associated with low-priority traffic.

This basic queuing mechanism, Bunt said, suffers two problems: oversubscription (too many applications chasing too little capacity), and latency (because the processing time needed to queue and manage the traffic means it takes longer to pass the packets).
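The queuing mechanism Bunt describes can be sketched in a few lines of Python – a toy model, not any vendor's implementation, with the buffer size and priority scheme chosen purely for illustration:

```python
import collections

def shape(packets, capacity):
    """Toy bandwidth manager. `packets` is a list of (priority, size)
    pairs (0 = highest priority); `capacity` is the number of bytes the
    WAN link can carry this interval. High-priority traffic goes first;
    low-priority traffic is buffered, then dropped once the buffer fills."""
    buffer_limit = 3
    sent, dropped = [], []
    queued = collections.deque()
    for prio, size in sorted(packets, key=lambda p: p[0]):
        if size <= capacity:
            capacity -= size
            sent.append((prio, size))       # forward immediately
        elif prio > 0 and len(queued) < buffer_limit:
            queued.append((prio, size))     # buffer for a later interval
        else:
            dropped.append((prio, size))    # tail-drop
    return sent, list(queued), dropped
```

Both of Bunt's problems are visible even in the toy: the buffer can overflow (oversubscription), and anything queued waits out at least one more interval (latency).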

The application accelerator works closer to the application layer: it looks at the traffic of particular applications to identify what actually needs to traverse the WAN link. Its heritage is in Web acceleration.

On the Accelerator

The application acceleration class, on the other hand, has evolved a more complex set of capabilities. In general, Bunt said, application acceleration is designed to address caching, latency, QoS management (deciding what application traffic needs to be protected), and reporting.

Caching – Caching is the longest-established practice. It evolved to serve ISPs as the Internet exploded in the 1990s, and it still works on the basis that if you bring the content closer to the user, you'll save bandwidth while at the same time giving users a better experience.

Compression and Acceleration – Where applications are designed for low-latency links (which includes data applications like Citrix as well as realtime applications such as voice and video), a fast but high-latency link stops the application taking advantage of all the bandwidth available to it. “For Windows file services, one-fiftieth of a second of latency can mean that on a 100 Mbps link you can only get throughput of 25 Mbps.”

QoS – If the link lacks either QoS support or bandwidth, the enterprise needs to decide not only what to discard, but what to protect. QoS in the application acceleration environment helps – and also makes sure that, as far as possible, you can provide a consistent user experience all the way out to the home office.
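The Windows file-services figure quoted above is the bandwidth-delay product at work: with a fixed transfer window, a sender can have at most one window of data in flight per round trip, so throughput is capped at window size divided by RTT. A quick calculation shows it (the 64 KB window is an assumption for illustration, matching the classic SMB default):

```python
def max_throughput_mbps(window_bytes, rtt_seconds):
    # With a fixed window, at most one window of data can be in flight
    # per round trip, so throughput <= window / RTT.
    return window_bytes * 8 / rtt_seconds / 1e6

# One-fiftieth of a second (20 ms) of round-trip latency:
print(max_throughput_mbps(64 * 1024, 0.02))  # about 26 Mbps, however fat the pipe
```

Which is why acceleration here means attacking the round trips themselves, not just the bytes.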


Caching

Caching in the application acceleration world has to do more than simply grab Web pages and hold them for the next user. In enterprise environments, much more of the content is unique (or nearly so). An IP videoconference, for example, doesn't lend itself to caching (although the weekly video providing staff with their obligatory Address from The Dear Leader can easily be cached).
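The baseline "grab it and hold it" model that enterprise caching has to go beyond is simple enough to sketch – a toy illustration, where `origin_fetch` stands in for whatever actually retrieves the content over the WAN:

```python
cache = {}

def fetch(url, origin_fetch):
    """Serve from the local cache when we can; otherwise pull from the
    origin over the WAN and remember the result for the next user."""
    if url in cache:
        return cache[url], "hit"    # no WAN traffic at all
    body = origin_fetch(url)        # the expensive WAN round trip
    cache[url] = body
    return body, "miss"
```

The second user asking for the same page is served from the branch side of the link – that is the entire bandwidth saving, and it only works when content repeats.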

But even in data applications, which most people think of as well-suited to caching, there are catches.

“The drag-and-drop of a file in Windows needs about 224 round trips,” explains Steve Dixon of Riverbed Technologies. Caching this only helps where data exists in the cache, and here, Dixon said, Riverbed's aim is to decompose files and look for common data between different files.

“We break data such as files down into segments which are recognised by a signature ... and instead of sending non-unique data, you just send a reference point to data that you've already sent.

“So if somebody edits a PowerPoint, you only send what's changed ... even if there's different content between slides, perhaps as much as half the file size is common between the two files.”
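The segment-and-signature idea Dixon describes can be sketched as follows – a toy version, with a fixed 64-byte segment size and SHA-1 signatures, both arbitrary choices for illustration (real products segment adaptively and keep the store on both ends of the link):

```python
import hashlib

SEG = 64  # bytes per segment; purely illustrative

def dedup(data, store):
    """Break `data` into segments; send raw bytes only for segments the
    far end hasn't seen, and a short signature reference otherwise."""
    out = []
    for i in range(0, len(data), SEG):
        seg = data[i:i + SEG]
        sig = hashlib.sha1(seg).digest()
        if sig in store:
            out.append(("ref", sig))   # peer already has these bytes
        else:
            store[sig] = seg
            out.append(("raw", seg))   # send the data once
    return out

store = {}
first = dedup(b"A" * 128 + b"B" * 64, store)   # repeated A-segments dedupe
second = dedup(b"A" * 128 + b"C" * 64, store)  # only the C-segment travels raw
```

The PowerPoint example works the same way: only the edited segments fail the signature lookup, so only they cross the WAN.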

Gavin Matthews, CIO of Seccom Networks (a consulting firm with a specialty in various forms of bandwidth management), says this kind of caching is often deployed in point-to-point environments where similar systems are deployed at both ends of a link.

“These days, you would expect at least a 5% to 10% benefit no matter which vendor you use.”

However, he warns, even some data applications get only limited benefits out of this aspect of application acceleration.

“IBM mainframe or AS/400 traffic, where SNA is encapsulated in TCP/IP – that's solid payload. It's hard to get an improvement out of that.”

Mick Stephens, General Manager for RADWare Australia/New Zealand, agrees that “green screen” applications don't fit the caching part of application acceleration, but for most enterprises, these are also far less demanding because “the resources demanded by a terminal-based service are so much lower,” he says.

“The capacity requirements of so-called 'blue-screen' Web-based services are so often misunderstood. Not only is the protocol very chatty, but there's always a lot of additional content added to Web-based applications.”

Compression and Acceleration

As with caching, the amount of improvement an enterprise can expect out of the compression-acceleration part of the application accelerator depends on the application as much as the technology.

“There's a number of different compression, bit pattern recognition and sequencing technologies out there,” Bunt says – but it comes down to how compressible the content itself is. “VoIP conversations have already gone through their codecs. And if you try to zip a PDF file, it may not get any smaller.” Ideally, Bunt says, the accelerator you deploy should be able to apply different techniques to different traffic types.
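Bunt's point about already-compressed content is easy to demonstrate – here using zlib as a stand-in for whatever compression an accelerator might apply:

```python
import zlib

def ratio(data):
    # Compressed size as a fraction of the original: lower is better.
    return len(zlib.compress(data)) / len(data)

text = b"the quick brown fox jumps over the lazy dog " * 100
already = zlib.compress(text)  # stands in for a PDF, a zip, or a codec's output

print(ratio(text))     # a few percent: highly compressible
print(ratio(already))  # around 1.0 or worse: nothing left to squeeze
```

Running the already-compressed data through the compressor again gains nothing – it can even grow slightly, which is why a good accelerator recognises such traffic and leaves it alone.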

The ability to spoof protocols is another part of the compression-acceleration equation, Dixon said. Here, the aim is to take traffic off the WAN that exists because of chatty protocols (rather than chatty applications). “We spoof the protocol at each end – at the server end, the Riverbed appliance looks like a client, and at the client end, it appears as a server.

“We can support MAPI, HTTP/HTTPS, SQL, NFS and Notes in addition to CIFS.”


QoS

Applying QoS – making life easier for important traffic rather than just dropping the unimportant traffic – is another place where application acceleration has come far beyond its early days.

Part of this, Bunt said, is necessity. Early bandwidth managers could make their traffic decisions based on TCP/IP addressing and port numbering. But as everybody moves their applications onto Port 80 (that is, using Web interfaces as the default interface for everything), port numbering no longer indicates the importance of the traffic.

So identifying the traffic that matters back at head office, and protecting that traffic through a variety of QoS schemes, is important – but so is understanding the kind of client systems connecting to the network.
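The Port 80 problem is easy to illustrate. A toy classifier (the port table and Host-header peek are illustrative only; real accelerators inspect far more deeply) can name the application for traditional ports, but on Port 80 it has to look into the payload to learn anything useful:

```python
def classify(port, payload=b""):
    """Name the application behind a flow. Traditional ports identify
    the protocol directly; on port 80 we peek at the HTTP Host header,
    since the port alone says nothing about the traffic's importance."""
    ports = {25: "smtp", 5060: "sip", 1352: "notes"}
    if port in ports:
        return ports[port]
    if port == 80:
        for line in payload.split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                return line.split(b":", 1)[1].strip().decode()
    return "unknown"
```

Once everything is "a Web application", the classifier has to work this hard just to tell the CRM system from a staff member's browsing.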

“The user in a branch office, the user in a home office, the PDA user in the airport lounge: all of these will have different acceleration issues,” Stephens said. And while the migration of “everything to the browser” has devalued IP ports as an identifier of traffic, there are benefits as well. If you're trying to learn something about the user logging into the application, then the kind of browser can be a big help.

So QoS technologies aren't just about saving all VoIP traffic from the assault of large downloads – they're also how the application accelerator can help give teleworkers a user experience that feels like being in the office.


Security

You may have noticed that some of these techniques are going to have security implications. For example, doesn't the business of protocol spoofing look somewhat like a form of “man-in-the-middle” attack?

Yes, says Dixon – and it's something that all application acceleration vendors have to address. “The products have to be designed in such a way to remove that risk.” In particular, he said, protocol spoofing has to know when to “get out of the way”, and not interfere with protocols that are unfamiliar to it.

And it's probably no surprise that players in the application acceleration market increasingly have a security alignment (three of the vendors interviewed for this article have an explicit security specialty – RADWare, Fortinet and Juniper).

Fortinet sales engineer Andrew Bycroft says this association should come as no surprise, since security vendors also have a long history of identifying applications. “Being able to identify the application is a key to deciding how to treat that application,” he said.

In addition, Bycroft said, the security sector also has long experience in applying policy management to application traffic: “Security has long been driven around policy,” he explained, “so there's a benefit in identifying and inspecting the application traffic both for performance and security.”

Planning and Analysis: the Secret

Of course, you'll only work out how to configure the application acceleration if you have some idea of the traffic already on your network. Gavin Matthews says the starting point, regardless of which vendor Seccom ends up recommending to a customer, is always to run a traffic analysis and build a pilot network.

Customers, Matthews said, need to get between 20% and 30% savings on their bandwidth to get ROI on this kind of system.

“Nine times out of ten, I deploy a pilot application, and work with the vendor to determine the best optimisation settings,” Matthews said.

“Around 40% of bandwidth will be used by hard-to-fix applications, so you'll be working out how much you can save of that other 60%.”

And that makes the right assessment far more important – since of the traffic actually available to optimisation, you'll need a 50% improvement to hit that target.
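That figure falls out of simple arithmetic – the target overall saving divided by the share of traffic the accelerator can actually touch:

```python
def required_reduction(target_saving, optimisable_share):
    # Fraction by which the optimisable traffic must shrink to hit
    # an overall bandwidth-saving target.
    return target_saving / optimisable_share

# 40% of traffic is hard to fix, so only 60% is in play; saving 30%
# of total bandwidth means halving that optimisable 60%:
print(required_reduction(0.30, 0.60))  # 0.5
```

Which is a much taller order than "30% savings" sounds – hence Matthews' insistence on analysis and a pilot before committing.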

“Start by deploying network analysis – Wireshark or Ethereal – to get the analysis. Then sit down with the project team to work out what must get priority, and what you're prepared to give up,” Matthews said.

“And when you select a vendor, run a pilot for at least 30 days to get a good spread of information.”
