Ethernet is 31 years old this year and well-established as a networking technology. It is also capable of supporting all the major storage protocols—NFS, CIFS and iSCSI, as well as Fibre Channel via Fibre Channel over Ethernet (FCoE). But it is still not dominant as a carrier of block access data, where Fibre Channel reigns supreme. The question, then, is: can Ethernet storage become king of the data centre?
In this quest, Ethernet has the advantage of being well-understood. After three decades, it is ubiquitous as a Layer 2 communications protocol. This means that most companies already have in-house expertise and that the cost of acquisition is relatively low.
The technology is also scalable, in terms of both distance and bandwidth. Ethernet networks can stretch across whole continents when arranged with the right provider. And thanks to new IEEE standards for higher-speed Ethernet, the protocol can now run at speeds of up to 100 Gbps.
Existing storage protocols that run on Ethernet include the Network File System (NFS), which currently supports a third of all databases, according to the Storage Networking Industry Association (SNIA). The Common Internet File System (CIFS) is ubiquitous in the Windows world, while iSCSI provides a cheap hop from direct-attached storage (DAS) into SANs.
However, these Ethernet storage technologies have been around for some time. More recent is the development of Data Centre Bridging (DCB), sometimes called converged or data centre Ethernet. This provides a key advantage over conventional Ethernet: trusted delivery, thanks to lossless traffic management.
Trusted delivery enables customers to use Ethernet as a carrier protocol for multiple storage protocols. In particular, lossless traffic management is crucial for running FCoE, because Fibre Channel, a SAN-focused communications technology, has no tolerance for dropped frames.
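The contrast can be sketched with a toy queue model. This is purely illustrative, not any standard's or vendor's actual mechanism; the buffer depth and function names are invented. A conventional drop-tail queue silently discards frames once its buffer fills, while a lossless, PFC-style link pauses the sender instead, so no frame is ever lost.

```python
from collections import deque

QUEUE_DEPTH = 4  # toy buffer size, purely illustrative


def send_lossy(frames, depth=QUEUE_DEPTH):
    """Drop-tail queue: frames arriving at a full buffer are lost."""
    queue, dropped = deque(), 0
    for frame in frames:
        if len(queue) < depth:
            queue.append(frame)
        else:
            dropped += 1  # classic best-effort Ethernet: silently discard
    return dropped


def send_lossless(frames, depth=QUEUE_DEPTH):
    """PFC-style link: when the buffer fills, the sender is paused until
    the receiver drains the queue. Frames are delayed, never dropped."""
    queue, pauses = deque(), 0
    for frame in frames:
        while len(queue) >= depth:
            pauses += 1       # a pause signal goes back upstream
            queue.popleft()   # receiver drains, sender may resume
        queue.append(frame)
    return pauses


frames = list(range(10))
print("frames dropped (lossy):", send_lossy(frames))
print("pauses issued (lossless):", send_lossless(frames))
```

In the lossy case, frames vanish and an upper layer must detect and retransmit them, which Fibre Channel does not expect to do; in the lossless case, back-pressure trades latency for guaranteed delivery, which is the behaviour DCB adds to Ethernet.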
Ethernet storage in this context uses 10 Gbps Ethernet and higher, with 40 Gbps and 100 Gbps Ethernet now standardised.
The standards for trusted delivery are already in place. The 802.1Qbb standard (priority-based flow control) provides eight separate “pipes” over a single high-speed Ethernet link for different classes of service, while 802.1Qaz (enhanced transmission selection) allocates bandwidth dynamically to different virtual channels on a high-speed Ethernet link based on data volume. The 802.1Qau standard (congestion notification) moves congestion management out of the network core, signalling traffic sources to throttle back before congestion spreads across the network.
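The bandwidth-sharing idea behind 802.1Qaz can be sketched as follows. This is a heavy simplification of the standard's scheduling behaviour, not its actual algorithm; the class names, link speed and percentages are invented for illustration. Each traffic class is guaranteed a configured share of the link, and whatever an idle class leaves unused is lent to classes that still have traffic queued.

```python
def ets_allocate(link_gbps, shares, demand):
    """Toy ETS-style allocator (simplified illustration of 802.1Qaz).

    shares: configured bandwidth percentage per traffic class
    demand: offered load per class in Gbps
    Idle classes' unused bandwidth is lent to busy classes.
    """
    # Start from the configured guarantees, capped by actual demand.
    alloc = {tc: min(demand[tc], link_gbps * pct / 100)
             for tc, pct in shares.items()}
    spare = link_gbps - sum(alloc.values())

    # Hand spare capacity, in equal rounds, to classes that want more.
    hungry = {tc for tc in shares if demand[tc] > alloc[tc]}
    while spare > 1e-9 and hungry:
        per_class = spare / len(hungry)
        spare = 0.0
        for tc in list(hungry):
            extra = min(per_class, demand[tc] - alloc[tc])
            alloc[tc] += extra
            spare += per_class - extra  # leftover goes round again
            if demand[tc] - alloc[tc] < 1e-9:
                hungry.discard(tc)
    return alloc


# Hypothetical 10 GbE link: FCoE guaranteed 50%, LAN 30%, management 20%.
shares = {"fcoe": 50, "lan": 30, "mgmt": 20}
demand = {"fcoe": 8.0, "lan": 4.0, "mgmt": 0.0}  # management idle
print(ets_allocate(10, shares, demand))
```

With the management class idle, its 20% guarantee is not wasted: the FCoE and LAN classes absorb the spare capacity up to their offered load, which is the point of enhanced transmission selection over static bandwidth carve-ups.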
Statistics suggest that FCoE switches are beginning to take off, but the market is still relatively small. Dell'Oro Group said that growth in the SAN market for the second quarter of 2011 rested heavily on converged network technologies such as FCoE.
According to Dell’Oro, almost half of the 200,000 FCoE ports shipped in the quarter came from Cisco. Cisco’s approach to FCoE with the Nexus 5000 involves decoupling Fibre Channel from Ethernet, then using separate Fibre Channel functionality within the switch to make forwarding decisions.
Brocade claims to offer something different, collapsing Fibre Channel and Ethernet together into what Andre Kindness, a senior Forrester analyst, calls a “flattened” network. The VDX series switches Brocade launched last year do not decouple Fibre Channel frames from Ethernet but forward them natively.
Kindness identifies Brocade and Cisco as the two main players in the FCoE space. There are others, however. Juniper began selling FCoE switches earlier this year, although it faces credibility challenges, according to Kindness. “Juniper can do it, but it’s hard to see them as a major player. Their fabric was out just a month. Nodes and edge switches have been out for six months,” he said.
QLogic already has converged network adaptors (CNAs) that run FCoE without decoupling, and it has standing deals with both hardware and software vendors. NetApp is using its single-chip CNA solution in its products to provide FCoE functionality straight to the SAN, while QLogic also began revenue shipments of its CNA products to Oracle last year.
Should data centres adopt an all-Ethernet solution in the next couple of years? Much depends on their willingness, and the market’s ability, to support technologies like FCoE throughout the entire data centre infrastructure.
Cisco envisages a three-stage approach to Ethernet implementation, starting with standalone servers at the network edge. This evolves into “end of row” implementations, where multiple FCoE-enabled servers connect to an FCoE-enabled networking device or blade switch. Phase 3 sees FCoE adopted into storage arrays and tape libraries, enabling the protocol to be exchanged all the way between server and storage without decoupling.
“We are starting to see 10 Gigabit Ethernet on motherboards for high-end servers,” said SNIA’s European technology chair, Gilles Chekroun. “Some storage array vendors are enabling iSCSI and FCoE at 10 Gigabit Ethernet levels. So, today the technology enables end-to-end FCoE, but this is not yet being implemented at a big scale.”
Part of the reason for the relatively slow take-up could be that technology refreshes are less frequent as you move further into the core. But the other part of it could be that existing Fibre Channel simply works, and aggregating server-side connections onto a single CNA for reduced power consumption and cabling requirements provides enough of a benefit right now.
Chekroun believes that Fibre Channel is here to stay, especially as the 16 Gbps standard offers a viable upgrade path for storage and network planners.
“In my opinion, customers can go full Ethernet if they build a new data centre or for specific applications,” he said. “The investment is done once, and then unified fabric helps you to get LAN and SAN functionality on a 10 Gigabit Ethernet cable. It also means no more downtime from the server point of view to install host bus adaptors.”
For greenfield sites, then, end-to-end data centre Ethernet is becoming increasingly viable. But there are many existing data centres that will not merit the infrastructure investment. With a recession fast approaching, the far-reaching structural changes necessary to run Ethernet from rack to core SAN may be a way off.
This was first published in October 2011