Steve Broadhead, director of Broadband-Testing Labs, evaluates two products that boost network performance through traffic management
Over the past couple of years a new wave of Ethernet products has emerged that focuses not so much on pure throughput as on intelligently managing data and directing it to its destination as efficiently as possible.
With IP service and application management and real traffic management at Layer 7, these products could mark the point at which Ethernet becomes truly intelligent.
Here we assess the relative merits of this type of technology and put two key products through a series of tests specific to their abilities. The results were impressive.
F5 Big-IP 2400 IP application switch
When F5 launched its Big-IP product, it was conceptually a simple, gateway-esque, "in one end and out the other" approach to managing internet traffic and load-balancing devices such as servers, virtual private network gateways and firewalls.
Then the company introduced a switch-based range of products which has since been expanded to include the Big-IP 2400. With 16 10/100 and two Gigabit Ethernet ports, plus optional integrated SSL acceleration, the 2400 is not trying to play the role of a "me too" high port density Layer 3/4 Ethernet switch, but is intended for specific roles that rely more on intelligence than overall throughput, working at Layer 7.
The product aims to provide the enterprise user with a traffic management device to sit in front of a server farm, firewall farm, cache devices, or any similar application where load-balancing based on reacting to a specific type of traffic is important.
One of the advances in traffic routing and control to emerge in recent times is content routing and switching - the ability for a device to make an intelligent decision about how to route a data packet, based on the packet's contents. As such it is particularly applicable to HTTP-based traffic, where so much information is available from the URL or even within a cookie.
With the Big-IP 2400, F5 introduced the iRule method of routing traffic and load balancing. This is a scripting language that allows you to totally customise how you deal with any incoming packet, based on any element of its content.
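iRules themselves are written in F5's own scripting language, but the underlying idea - inspect elements of a request and pick a server pool accordingly - can be sketched in ordinary Python. The pool names and the URI/cookie checks below are hypothetical illustrations, not F5 syntax:

```python
# Illustrative sketch of content-based routing, NOT F5 iRule syntax.
# The rule inspects elements of the request (URI, cookies) and picks
# a back-end pool accordingly. Pool names here are hypothetical.

def choose_pool(uri: str, cookies: dict) -> str:
    """Pick a back-end pool based on request content."""
    if uri.startswith("/images/"):
        return "static_pool"          # serve static assets from cache servers
    if cookies.get("session") is not None:
        return "app_pool_sticky"      # keep stateful sessions together
    return "app_pool_default"         # everything else

print(choose_pool("/images/logo.gif", {}))           # static_pool
print(choose_pool("/cart", {"session": "abc123"}))   # app_pool_sticky
```

The point is that any element of the packet content can drive the decision, which is what distinguishes Layer 7 routing from conventional Layer 3/4 switching.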
F5 also has an application interface architecture called iControl which lets users develop applications and utility programs that interface with the Big-IP for any number of functions, or to integrate F5 devices.
Because none of these features are of any use if the device has no resistance to downtime, the 2400 includes features to maintain service in the event of a hardware failure. In a configuration with dual Big-IP 2400 switches, it is possible to set up the system in a number of ways to support fail-over from the first to the second device, both in stateful and persistence modes, with a claimed failover time of less than 0.07 seconds - effectively instantaneous under testing.
One excellent aspect of setting up and managing a Big-IP is the interface itself. The F5 Configuration Utility is easy to use and very robust. It makes carrying out relatively complex set-ups very straightforward - the equivalent of creating a database using objects and multi-choice options, rather than coding it from scratch.
For the "intelligence" tests, Spirent Technology's Webavalanche and Webreflector client and server simulators were used to generate the test traffic.
The first test was to direct and load-balance HTTP, HTTPS and FTP traffic to specific servers, allocate values to this traffic and extract information for billing purposes - ideal for cost centre applications where users are billed directly.
A rule was created in the management software and it simply did what was asked of it. Then, using iControl, a billing system was created that monitored the Big-IP traffic on a per-TCP port basis and created billing invoices accordingly.
A second test involved setting up different streaming media servers - for QuickTime and RealPlayer client requests - and routing the requests to the appropriate server based on identifying the client type. An iRule checked for either RealPlayer or QuickTime identifiers in the packet headers and routed them to the correct server.
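The logic of that test can be sketched as a simple check on a client identifier carried in the request headers. The header strings and server names below are illustrative assumptions, not the exact values used in the test:

```python
# Sketch of the streaming-media routing test: pick a server based on
# a client identifier in the request headers. Header strings and
# server names are illustrative, not the exact values used.

def route_stream(user_agent: str) -> str:
    ua = user_agent.lower()
    if "realplayer" in ua or "realmedia" in ua:
        return "real_server"
    if "quicktime" in ua:
        return "qt_server"
    return "default_server"

print(route_stream("QuickTime/7.0 (qtver=7.0;os=Windows)"))  # qt_server
print(route_stream("RealPlayer/10.0"))                       # real_server
```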
The third test involved terminating SSL-based traffic at the Big-IP while routing non-SSL traffic to a regular HTTP server. The aim was to distinguish between the cipher strength of different versions of Internet Explorer (40bit and 128bit) on the client requests being generated and route them to specific servers for each cipher level.
To do this, a rule was created which routed all 128bit traffic to one server and other traffic (below 128bit) to a separate server. The respective different user types were created on Webavalanche and the virtual servers on Webreflector, plus a "real" IIS server, also attached to the Big-IP.
The test achieved the planned 50:50 split between different cipher strength SSL traffic terminated at the Big-IP, as well as seeing any non-SSL traffic routed directly to the real IIS server.
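The decision the rule makes can be summarised in a few lines: non-SSL traffic passes straight through to the real server, while terminated SSL connections are routed by the negotiated cipher strength. Server names here are hypothetical placeholders:

```python
# Sketch of the cipher-strength routing rule. Non-SSL traffic goes
# straight to the real IIS server; terminated SSL connections are
# routed by negotiated key length. Server names are hypothetical.

def route_by_cipher(is_ssl, cipher_bits=None):
    """Route a connection by whether it is SSL and its cipher strength."""
    if not is_ssl:
        return "iis_server"        # non-SSL goes direct to the real server
    if cipher_bits is not None and cipher_bits >= 128:
        return "server_128bit"
    return "server_lowgrade"       # 40-bit and other weaker ciphers

print(route_by_cipher(True, 128))   # server_128bit
print(route_by_cipher(True, 40))    # server_lowgrade
print(route_by_cipher(False))       # iis_server
```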
SSL termination at the 2400 also gave a significant increase in performance against terminating SSL traffic directly at the real server. The test typically achieved up to 1,000-1,100 sessions a second, at which point performance flattened out.
A SQL Slammer worm simulation was created to check that the 2400 could cope with specific attacks. As with any kind of traffic "interrogation", the Big-IP discovers what kind of traffic it is dealing with by inspecting the packet contents; in this test it then repelled every packet containing the worm.
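Signature-based payload filtering of this kind reduces to a simple check. Real Slammer travels as a single UDP datagram to port 1434 whose payload begins with the byte 0x04; the sketch below is a deliberately simplified illustration of that style of check, not the Big-IP's actual inspection logic:

```python
# Simplified sketch of payload-signature filtering in the spirit of
# the Slammer test. Slammer is a single UDP datagram to port 1434
# whose payload begins with byte 0x04. Not the Big-IP's actual logic.

SLAMMER_PORT = 1434

def drop_packet(dst_port, payload):
    """Return True if the packet matches the worm signature and is dropped."""
    return dst_port == SLAMMER_PORT and payload[:1] == b"\x04"

print(drop_packet(1434, b"\x04" + b"\x01" * 375))  # True  (dropped)
print(drop_packet(80, b"GET / HTTP/1.0"))          # False (forwarded)
```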
Finally, F5 has made a lot of noise about its persistence capabilities and their importance in day-to-day internet use. A classic example is with e-mail sessions, where it is essential to stick to the same e-mail server during a POP3 session.
A rule was created to control three sets of POP3 user sessions generated by Webavalanche. In each case the monitored connection - client IP address and server (member node) IP address combination - persisted throughout the test to completion.
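Persistence of this sort amounts to keeping a table mapping each client to the member node it was first balanced to. As a rough sketch under the assumption of simple round-robin balancing for new clients (the Big-IP supports several persistence modes; this shows only the principle):

```python
# Sketch of source-address persistence as exercised in the POP3 test:
# a client's first connection is load-balanced, and every subsequent
# connection from the same client IP sticks to the same member node.
from itertools import cycle

class Persistence:
    def __init__(self, members):
        self._members = cycle(members)   # simple round-robin for new clients
        self._table = {}                 # client IP -> member node

    def pick(self, client_ip):
        if client_ip not in self._table:
            self._table[client_ip] = next(self._members)
        return self._table[client_ip]

lb = Persistence(["pop3-a", "pop3-b", "pop3-c"])
first = lb.pick("10.0.0.1")
# Repeated connections from the same client persist to the same node.
assert all(lb.pick("10.0.0.1") == first for _ in range(5))
```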
Overall, the 2400 did everything asked of it and was very straightforward to configure and manage. A far cry from the old switching days.
NetScaler 9800 secure application switch
The NetScaler 9800 is towards the high end of a range that starts with a cut-down traffic accelerator, but the basic architecture is similar throughout the range.
NetScaler separates the generic device operating system - FreeBSD - from the custom device kernel which carries out the majority of the processing requests. This means there is no overhead when the device has to request assistance from the operating system, as is the case with some other Layer 7 switches. This is especially important in applications such as Secure Sockets Layer acceleration.
NetScaler aims to let companies secure their web applications, attain the highest levels of application availability, complete more transactions with faster responses and instantly free up server resources for redeployment. In addition to cutting costs, NetScaler promises to minimise hardware investments.
Typically, on one side of a network is the internet or intranet and on the other side is the datacentre, server farms, firewalls, caches, etc. In between sits the NetScaler device, which services both sides, external and internal. Rather than having a high number of ports, the 9800 is focused on being the meat in the sandwich, with just four 10/100/1000 Ethernet ports to connect to the datacentre and outside world. It has redundancy in the form of dual power supply units and failover mode with duplicate devices.
NetScaler's claim to fame is its patented "request switching" suite of technologies. This is designed to handle web traffic as efficiently as possible by analysing and directing incoming traffic at the application request level.
The company said the capability to examine within the actual payload will be added to a future release. But what it can do now, it does well. This includes TCP offload and optimisation, data compression, static and dynamic caching, SSL acceleration, and prevention of distributed denial of service attacks and other security intrusions.
Device management is via either a command-line interface (CLI), which is used to set up the initial configuration, IP address, etc, or a browser-based graphical user interface. The latter provides a broad subset of the manageable features of the switch, but the CLI - including shell access to the FreeBSD operating system - is required for some configurations.
In addition to the browser-based manager, there is also a "dashboard" - a statistics/monitor screen which provides a number of different performance and traffic breakdowns in graphical or tabular format.
A web server spends a huge amount of its processing power and memory servicing TCP requests, and the result is dog-slow server access. Sitting the NetScaler 9800 system in front of the servers enabled TCP requests to be offloaded onto the NetScaler box.
The result is staggering for something so simple to configure. Want to turn an old Pentium II box into a superserver? Running HTTP/HTTPS web traffic directly at the server, then offloading it via the NetScaler gave a best figure of a 3,000% increase in server traffic handling capability. Even with well-specified, Pentium 4 servers the test result showed a sixfold increase in performance. In practice, that meant that old Pentium II servers, with the assistance of the NetScaler, were performing five times faster than the Pentium 4 servers running without assistance.
Pure SSL termination tests - terminating traffic at the NetScaler, rather than the HP Pentium 4 server - resulted in 307,745 successful transactions out of an attempted 345,203, as opposed to just 11,090 successes out of 233,029 attempted, running the same test back to back, but terminating at the server.
A simulated 56kbps modem link was used to test low-bandwidth access to HTTP applications, with simulated attachments exercising the NetScaler's compression capabilities; this gave a 300% improvement. Cache-based testing, using the NetScaler's integrated caching, gave a 400% improvement.
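The reason compression pays off so handsomely on a 56kbps link is that HTML and text attachments compress heavily, so far fewer bytes have to cross the slow hop. A quick illustration using standard gzip compression (the NetScaler's own compression implementation may differ):

```python
# Why compression helps on a slow link: repetitive HTML markup
# compresses heavily, so far fewer bytes cross the 56kbps hop.
# This uses standard gzip; the NetScaler's implementation may differ.
import gzip

page = b"<tr><td>row data</td></tr>\n" * 500   # repetitive table markup
packed = gzip.compress(page)
ratio = len(page) / len(packed)
print(f"{len(page)} bytes -> {len(packed)} bytes ({ratio:.0f}x smaller)")
```

At 56kbps, every tenfold reduction in bytes sent is roughly a tenfold reduction in transfer time, which is where the measured improvement comes from.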
What is really impressive is that the 9800 is capable of supporting all these and more, such as SSL VPN services, simultaneously. Even when the device was pushed really hard, the NetScaler CPU levels rarely rose above 25%-35% utilisation, and memory usage was similarly low.
Steve Broadhead, Broadband-Testing
Steve Broadhead runs Broadband-Testing Labs, a spin-off from independent test organisation the NSS Group.
His IT and networking experience dates back to the early 1980s, when he worked deploying and managing PC networks for two insurance companies, after which he moved into computer journalism.
In 1991 he formed Comnet, which became the NSS Group, with Bob Walder, specialising in network product testing for suppliers and the publishing industry.
In 1998, Broadhead created the NSS labs and seminar centre in the Languedoc region of France, offering a wide range of test and media services to the IT industry. Now named Broadband-Testing, it focuses on network infrastructure product testing and related areas.
Author of DSL and Metro Ethernet reports, Broadhead is now involved in a number of projects in the broadband, mobile, network management and wireless Lan areas, from product testing to service design and implementation.
This was first published in May 2004