Column

Evaluating Fibre Channel, InfiniBand and Ethernet protocols for network virtualisation

Chris Mellor

Data centre networking is as primitive as early railways. It would be simpler if there were just one base networking architecture to carry all the traffic, just as there is a standard railway gauge allowing freight trains, express trains and stopping trains to all use the same tracks.

As it is, however, the PC-to-server, server-to-server, server-to-networked-storage and server-to-clustered-server networks in the same data centre can all be different, with different end-points, cables, controllers and software. Such complexity is costly. If we are to move to a single data centre networking platform, it has to meet the highest bandwidth and lowest latency requirements currently in place, as well as be divisible into channels that portion out networking capacity as it is needed.

Just like server and storage virtualisation, network virtualisation could deliver far better utilisation of network assets. But how can network virtualisation be done? There are three candidate protocols: Ethernet, Fibre Channel and InfiniBand.

The Ethernet protocol is used throughout the data centre; it is the LAN standard and can carry TCP/IP traffic. 1Gbps and 10Gbps Ethernet links are available, with 100Gbps in prospect. Although Ethernet has unpredictable latency, may lose data (packets) and is not point-to-point, it is more than good enough to carry PC-to-server traffic and so to handle file-based, server-to-NAS traffic. It is also being used for iSCSI block-level access to SANs.

However, SANs were originally developed around switched Fibre Channel (FC), a lossless network with predictable latency which currently operates at 4Gbps and is transitioning to 8Gbps. Indeed, Xyratex has just released the first storage array with a native 8Gbps Fibre Channel interface, enabling the first end-to-end 8Gbps FC SANs to be built.

Neither Ethernet nor FC is fast enough, and Ethernet is not dependable enough, for connecting clustered high-speed servers to each other and to the shared storage they access in high-performance computing (HPC) environments for supercomputing and cutting-edge life sciences applications. For this, a third network interconnect is used: InfiniBand, running at 20Gbps and with 40Gbps in development.
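
To put those link speeds in perspective, here is a rough back-of-the-envelope Python calculation of how long a 1TB data set would take to move across each interconnect at its nominal line rate. The figures ignore encoding and protocol overheads, so real-world transfers would be slower.

    # Rough comparison of nominal line rates only; encoding and protocol
    # overheads are ignored, so real-world figures would be worse.
    DATASET_BITS = 1e12 * 8  # a 1TB data set, expressed in bits

    links_gbps = {
        "1Gbps Ethernet": 1,
        "10Gbps Ethernet": 10,
        "4Gbps Fibre Channel": 4,
        "8Gbps Fibre Channel": 8,
        "20Gbps InfiniBand": 20,
        "40Gbps InfiniBand": 40,
    }

    for name, gbps in links_gbps.items():
        seconds = DATASET_BITS / (gbps * 1e9)
        print(f"{name:22s} {seconds / 60:6.1f} minutes")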

We can characterise Ethernet as the commuter railway, FC as the freight line and InfiniBand as the express service. With FC only being used for SANs and not for any other traffic, and its speed only now reaching 8Gbps end-to-end, its candidacy as a general data centre network platform is dead before it starts.

Ethernet, at 10Gbps, is nominally fast enough to carry FC storage traffic. But its latency is indeterminate and it can lose data, necessitating retransmissions that further slow access. This is not good news for high-speed SAN access and especially not for HPC applications.
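
How much does loss matter? The hypothetical Python sketch below uses a deliberately crude model (every lost packet is simply sent again) to show how even small loss rates eat into the usable share of a 10Gbps link. The loss rates are illustrative assumptions, not measurements, and in practice TCP's congestion response makes the penalty larger still.

    # Crude, illustrative model: each lost packet must be retransmitted,
    # so the useful throughput of a 10Gbps link shrinks as losses rise.
    # Loss rates are assumptions for illustration, not measurements.
    LINK_GBPS = 10.0

    for loss_rate in (0.0, 0.001, 0.01, 0.05):
        # On average each packet has to be sent 1 / (1 - loss_rate) times.
        transmissions_per_packet = 1.0 / (1.0 - loss_rate)
        goodput = LINK_GBPS / transmissions_per_packet
        print(f"loss {loss_rate:6.1%}: about {goodput:.2f}Gbps of useful throughput")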

A faster Ethernet at 100Gbps would be better. Instead, a new form of Ethernet is being worked up to become a standard product in a couple of years. Called Converged Enhanced Ethernet (CEE) or Data Centre Ethernet (DCE), this new form of Ethernet would be lossless and would have a predictable latency.

InfiniBand suppliers, such as Voltaire and Mellanox, think that InfiniBand has the bandwidth and manageability to be the single data centre fabric now. It can carry FC or Ethernet traffic and be readily subdivided into differently sized channels so that one link can carry traffic of different kinds. Supercomputers are not generally found in ordinary business data centres but HPC-style computing is beginning to spread out from its supercomputer and life sciences market niches into data warehousing and business intelligence environments, where vast databases need querying in parallel to produce answers to complicated questions in a reasonable time.
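
As an aside on that channel-carving point: subdividing one fat link is essentially a bandwidth-allocation exercise. The hypothetical Python sketch below illustrates the idea only; the channel names and shares are invented for illustration and this is not an InfiniBand API.

    # Hypothetical illustration of carving one 20Gbps link into virtual
    # channels of different sizes; names and shares are invented and this
    # is not an InfiniBand API.
    LINK_GBPS = 20.0

    channel_shares = {
        "storage (FC traffic)": 0.40,
        "LAN (Ethernet traffic)": 0.35,
        "cluster messaging": 0.25,
    }

    assert abs(sum(channel_shares.values()) - 1.0) < 1e-9  # shares must cover the link

    for name, share in channel_shares.items():
        print(f"{name:24s} {share * LINK_GBPS:5.1f}Gbps")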

HP's just-announced Exadata server uses InfiniBand links and is supplied to Oracle for its Database Machine, a packaged data warehouse and business intelligence system with around 20 servers connected by an internal InfiniBand network, making it in effect a dedicated HPC product for data warehousing. This shows InfiniBand use entering general business data centres.

None of the major networking suppliers is in favour of this. Both Cisco, the overall network market leader, and Brocade, the FC market leader, think that Ethernet can and should become the single data centre fabric. FC traffic can run over Ethernet using the Fibre Channel over Ethernet (FCoE) protocol, and FCoE end-points -- converged network adapters (CNAs) -- have been built and demonstrated, along with FCoE-capable switches, by Brocade, Cisco, Emulex and QLogic.
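
The FCoE idea itself is straightforward: take a Fibre Channel frame and wrap it, unchanged, inside an Ethernet frame. The simplified Python sketch below shows only that wrapping step, using FCoE's assigned EtherType; the real encapsulation also defines version bits, start- and end-of-frame delimiters, padding and a frame checksum, all omitted here, and the addresses and payload are made up.

    import struct

    FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

    def wrap_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
        # Simplified illustration only: prepend an Ethernet header to an
        # unmodified FC frame. The real FCoE encapsulation also carries
        # version bits, SOF/EOF delimiters, padding and a checksum.
        eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
        return eth_header + fc_frame

    # Toy example with made-up addresses and payload.
    frame = wrap_fc_frame(b"\x0e\xfc\x00\x00\x00\x01",
                          b"\x00\x1b\x21\x00\x00\x02",
                          b"FC frame bytes would go here")
    print(len(frame), "bytes on the wire, before padding and FCS")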

Voltaire would have us know that InfiniBand is here already, uses less energy on a per-port basis than an equivalent Ethernet setup, and can carry both FC and Ethernet traffic. Start converging data centre networking traffic now, says Voltaire: use InfiniBand as the wire and layer all the other data centre networks on top of it.

Could this work? It could. However, Cisco would say that there are so many millions of Ethernet ports in the world that the manufacturing cost of Ethernet kit will fall much faster than that of InfiniBand kit, through sheer volume. Yes, InfiniBand is here now and has a speed advantage. But Ethernet will catch up and surpass that and, over time, will be less expensive. Better to wait and move to the expected clear winner of this contest than spend time and money on InfiniBand, which is just going to be a diversion before the main event.

For anyone with Ethernet and without InfiniBand, this is an attractive message. For the InfiniBand suppliers, it presents a mountain to climb: they have to push their technology as strongly as they can to maintain their edge, while staying affordable enough to build on the HP Exadata data warehousing beachhead. What they need is adoption of high-end server clusters in general data centres, with InfiniBand as the interconnect.


 
