Are we entering the age of 10 Gigabit Ethernet necessity?

It is not long ago I was asking in various broadband-testing reports whether we really need widespread Gigabit Ethernet. But now the question is, do we need 10 Gigabit Ethernet?

There is a very obvious answer to that question. If you now have lots of gigabit connections to your desktops and laptops alike on your network, then simple arithmetic dictates that, in order to accommodate all of these gigabit connections, you need something larger than a gigabit core backbone and server connections.
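
As a back-of-the-envelope illustration of that arithmetic, the short Python sketch below works out how many 10 Gigabit Ethernet uplinks a gigabit access layer would need. The 4:1 oversubscription target is an illustrative assumption, not a figure from the tests; only the 384-port count is taken from the testing described here.

import math

def uplinks_needed(access_ports: int, access_speed_gbps: float,
                   uplink_speed_gbps: float, oversub_ratio: float) -> int:
    """Uplinks required to keep aggregate access bandwidth within the
    chosen oversubscription ratio."""
    aggregate = access_ports * access_speed_gbps   # worst-case offered load (Gbps)
    required_core = aggregate / oversub_ratio      # bandwidth the uplinks must carry
    return math.ceil(required_core / uplink_speed_gbps)

# 384 gigabit edge ports (the port count in the Force10 test) at an
# assumed 4:1 oversubscription target -> 10 x 10GbE uplinks
print(uplinks_needed(384, 1.0, 10.0, 4.0))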

We put this basic logic to the test in the US with Force10 Networks, and in the UK with Solarflare. With the former we focused on presenting the case of multiple - in this case 384 - Gigabit Ethernet connections; with the latter we concentrated on how this level of traffic would be handled at the server, courtesy of 10 Gigabit Ethernet controllers.

Real-world applications

We never test for the sake of it, so there had to be some real-world applications. Starting with the Force10 testing, on a general level we wanted to see if it was really possible to offer line-rate performance with a huge port count and achieve maximum uptime in a single product. After all, performance without reliability is nothing.

Another point we wanted to stress is that applications are changing. For years, IT designed the enterprise network architecture to emphasise different characteristics at each tier - high performance and high availability in the datacentre, scalability in the Lan core, and low cost and basic functionality in the wiring closet.

These design principles no longer apply. Why? Because the traffic mix is far more varied than it once was, with traditional data - databases, office applications - and new-generation traffic such as block graphics/bulk data, peer-to-peer and real-time voice and video applications all running concurrently, often for business-critical processes.

All these application types have to be catered for in terms of prioritisation, guaranteed bandwidth and absolute availability - not a trivial requirement, even in a core Lan environment. The first port of call, therefore, was to prove that a product such as Force10's C300 Ethernet switch chassis could support this range and breadth of application types at full line-rate, which also means low latency.

Zero latency

On full loads we achieved average latency figures of under 40 microseconds - microseconds, note - which is virtually zero latency. Of course, we did not stop there. We threw, courtesy of Ixia's XM12 mega test kit, a combination of HTTP, VoIP and video traffic - triple play - at the switch. We successfully sustained this traffic mix over 286 Ethernet ports, with 12,000 SIP-based VoIP sessions, almost 13,000 HTTP sessions and 30,000 live video sessions, meaning more than 50,000 users were concurrently supported in a triple-play environment.
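
To put those session counts in context, here is a rough Python estimate of the aggregate load such a triple-play mix represents. The per-session bitrates are purely hypothetical assumptions - the tests did not publish them - but the point stands with almost any plausible figures: the total dwarfs what a gigabit core could carry.

SESSIONS = {            # concurrent sessions sustained in the test
    "voip_sip": 12_000,
    "http": 13_000,
    "video": 30_000,
}

ASSUMED_MBPS = {        # illustrative per-session bitrates (assumptions, not test data)
    "voip_sip": 0.1,    # roughly a G.711 call with packet overhead
    "http": 0.5,        # light web browsing
    "video": 2.0,       # a standard-definition stream
}

total_gbps = sum(SESSIONS[k] * ASSUMED_MBPS[k] for k in SESSIONS) / 1000
print(f"Estimated aggregate load: {total_gbps:.1f} Gbps")   # ~67.7 Gbps
print(f"Concurrent users: {sum(SESSIONS.values()):,}")      # 55,000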

Impressive stuff but, in turn, it means there is a "hit" in other parts of the network - not simply in the core, but at the servers themselves. Hence the 10 gigabit argument, and the emergence of products such as Solarflare's Solarstorm 10 Gigabit Ethernet controller, which we also put to the test recently.

Affordability is another key accelerator of demand. Thanks to stupendously low costs per port for Gigabit Ethernet copper connections, people are simply throwing 10/100/1000 into their networks by default.

At one point this would have been irrelevant, because neither client machines nor even servers could generate a gigabit of traffic. But thanks to enormous recent advances in processor cores and serial I/O - PCI Express replacing PCI-X bus technology - relatively low-end servers are now fully capable of supporting very high bandwidths.

What were single-core, single-CPU systems are commonly now dual-CPU with dual- or even quad-core architectures. Perhaps more importantly, the operating systems and applications are now more able to take advantage of these multi-core, multi-processor architectures. And we have not even started to look at virtualised environments.

With top research companies such as Dell'Oro Group forecasting a major uptake of 10 Gigabit Ethernet switch ports in 2008, the requirement for 10 Gigabit Ethernet at the server will be magnified even more. This provides us with a valid case for widespread adoption of 10 Gigabit Ethernet, both in the core of the network and the server.

Whereas throwing bandwidth at the problem is never a sensible option on the Wan or internet, owing to the inherent latency issues that need to be resolved, in the core of a well-designed Lan, that delay problem is typically far less of an issue. However, the need to guarantee bandwidth availability is very much a requirement, given the increasing use of real-time applications such as VoIP and video.

Spare bandwidth

In these environments, it is essential that spare bandwidth is always available in the core and at the server, from where most of this traffic originates. Moreover, no network manager in their right mind would run a network at near capacity - and herein lies another potential problem at the server, namely saturating the CPU and hence effectively killing the server. We therefore need high-bandwidth support, but in a way that does not over-stress the server, giving sufficient CPU time to carry out the many and various tasks these servers are actually being used for.

Over the years in the broadband testing labs we have seen many times just how easy it is to overwhelm a server. Although the latest Intel and AMD technology has certainly moved the game on here, any way that CPU time can be saved is a huge bonus. For the Solarflare controller test, we wanted to see not just what levels of true performance it could achieve in a variety of different test scenarios, but also how efficient it was in its ability to provide significant levels of idle time at the CPU and hence free up the server for other duties.
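
By way of illustration, the following Python sketch shows the kind of measurement involved: sampling CPU idle time on the server while a traffic generator drives the link. It is a simplified stand-in for the tooling actually used in the tests, and assumes the third-party psutil library is installed.

import time
import psutil

def sample_cpu_idle(duration_s: int = 10, interval_s: float = 1.0) -> float:
    """Return the average CPU idle percentage over the sampling window."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        # cpu_times_percent blocks for the interval and reports per-state shares
        t = psutil.cpu_times_percent(interval=interval_s)
        samples.append(t.idle)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    # Run on the server under test while the traffic generator is active
    print(f"Average CPU idle: {sample_cpu_idle():.1f}%")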

Virtualised environments

These scenarios focused on performing at line-rate with uni- and bi-directional traffic flows, then moving that testing into virtualised environments with VMware and Xen, and finally seeing how it could optimise an iSCSI environment.

Using Ixia's Chariot v6.5 application to generate traffic at 10Gbps and 20Gbps (uni- and bi-directional), we ran a series of tests over a number of different traffic stream variants on different server platforms - Windows 2003, Windows 2008 and Red Hat Linux.

With a standard combination of 64Kbyte data files and 1,500-byte Ethernet frame size, we found that we were able to hit and sustain line-rate while still witnessing significant idle time on the server CPUs, meaning we were achieving line-rate without getting anywhere near maximising the server performance capacity.

We repeated these line-rate tests in two virtualised environments, using VMware and Xen v3.3 respectively as the virtual machine environments. Testing VMware ESX v3.5 we were able to achieve 9.3Gbps, considered line-rate given the maximum recordable figures at the 1,500-byte frame size (a purely arithmetic limitation) - an excellent result compared with any previously published performance figures we have seen in a virtualised environment.
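
The arithmetic behind that limitation can be sketched as follows: with 1,500-byte frames, per-frame Ethernet and TCP/IP overheads cap application goodput on a 10Gbps link well below 10Gbps, so a measured 9.3Gbps is effectively line-rate. The figures below come from the standard frame format, not from our measurements.

LINK_GBPS = 10.0
MTU = 1500                  # Ethernet payload (bytes)
ETH_OVERHEAD = 14 + 4       # Ethernet header + FCS
WIRE_OVERHEAD = 8 + 12      # preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20    # IPv4 + TCP headers, no options

bytes_on_wire = MTU + ETH_OVERHEAD + WIRE_OVERHEAD                 # 1,538 bytes per frame
l2_goodput = LINK_GBPS * MTU / bytes_on_wire                       # ~9.75 Gbps
tcp_goodput = LINK_GBPS * (MTU - IP_TCP_HEADERS) / bytes_on_wire   # ~9.49 Gbps

print(f"Max L2 payload rate:  {l2_goodput:.2f} Gbps")
print(f"Max TCP payload rate: {tcp_goodput:.2f} Gbps")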

In a Xen test we also achieved line rate, given the parameters of the test setup, which was arguably even more impressive, given Xen's relatively limited benchmarking to date.

Testing in the iSCSI environment focused on how the Solarflare Solarstorm controller could enhance performance with the iSCSI data digest feature enabled. This provides data corruption protection but also imparts a very significant performance overhead, and is usually disabled for this reason.

The ideal solution is to have the digest enabled to protect data, but also to remove that performance overhead as much as possible. The Solarstorm controller managed to improve digest-enabled performance by more than 300%, giving us near 10 gigabit line-rate over iSCSI, with full corruption protection.
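
To see why the digest is normally such a drag on performance, the short Python sketch below checksums data segments on the host CPU, as an iSCSI initiator or target must for every segment when the data digest is enabled. It uses plain CRC-32 from the standard library as a stand-in for the CRC32C the iSCSI specification actually requires, and the segment size and iteration count are arbitrary assumptions.

import os
import time
import zlib

SEGMENT = os.urandom(64 * 1024)    # a 64Kbyte iSCSI data segment
ITERATIONS = 10_000                # roughly 655MB of data in total

start = time.perf_counter()
for _ in range(ITERATIONS):
    digest = zlib.crc32(SEGMENT)   # per-segment digest computed on the host CPU
elapsed = time.perf_counter() - start

mb = len(SEGMENT) * ITERATIONS / 1e6
print(f"Checksummed {mb:.0f} MB in {elapsed:.2f} s "
      f"({mb / elapsed:.0f} MB/s on one core)")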

Now, isn't that sufficient application-based evidence that 10 Gigabit Ethernet is a truly valid, here-and-now technology? We certainly think so.

Survive the wait for faster Ethernet

➔ www.computerweekly.com/228363.htm