A guide to Fibre Channel over Ethernet

When Joe Mathis worked at IBM, there was one problem the master inventor had never solved. He had spent his time designing ultrafast workstations that he liked to call supercomputer desktops, but had never been able to design an I/O mechanism that could feed them data fast enough. The high-speed communications systems available also tended to work only over very short distances, he says.

"Then, I heard about this thing called Fibre Channel," recalls Mathis, who now works at virtual infrastructure company Virtual Instruments. He realised that this could be a high-speed I/O system that could break the physical restraints associated with current high-speed connectivity and became one of the original architects of the standard.

He presented his paper to the Fibre Channel working committee 20 years ago. Now, Fibre Channel is evolving. The International Committee for Information Technology Standards (INCITS) has a group called T11 that works on Fibre Channel standards. In 2007, the T11 group began work on a standard that would enable Fibre Channel to be transported over Ethernet, and in June this year INCITS finalised that standard for submission as a draft to the American National Standards Institute (ANSI).

Converging networks

Why would anyone want to run a perfectly good high-speed storage networking standard over Ethernet? One of the biggest drivers is consolidation, explains Stuart Bridger, EMEA service and support manager for storage and SAN systems at value-added distributor Avnet Technology Solutions.

"The objective is to converge networks," he says. "It is about simplifying the multiple networks in today's datacentres. You have your SAN network, which is likely to be on Fibre Channel, separate LAN networks, and possibly even a separate high-performance computing network, each with their own physical networks, management platforms and isolated cabling systems."

Since the major chip suppliers turned their focus to multiple cores rather than simply bumping up clock speeds, computing power has grown at a runaway pace. With some vendors now offering 32 cores, the I/O requirements of the average server are rising. Running Fibre Channel and Ethernet separately becomes more problematic as a result, because customers find themselves increasingly bogged down with multiple network interface cards and host bus adapters (HBAs, the interface devices used to connect servers to SANs over traditional Fibre Channel links).

Consolidating Fibre Channel links onto high-speed Ethernet networks will reduce the complexity and management overhead of cabling, as well as cutting the number of interface devices needed in a server.

Analysts predict that moving from a higher-end, higher-margin protocol to a more commoditised one can reduce costs significantly. "The cost will be reduced by a factor of ten, because you are moving from proprietary to non-proprietary systems," says Keith Humphreys, managing consultant at networking analyst EuroLAN.

According to the Fibre Channel Industry Association, the average PCI Express adapter uses around 25 watts. Given that heat is the biggest barrier to server expansion in a rack, cutting the number of adapters in a server could enable a datacentre manager to use rack space more effectively. We should not forget, though, that a 10Gbit adapter is likely to use more energy and generate more heat than a 1Gbit NIC.
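
To see what that claim means at rack level, the sketch below runs through the adapter arithmetic for a hypothetical rack. Only the 25-watt average comes from the FCIA figure above; the server and adapter counts, and the assumption that a converged adapter draws about the same as the average PCIe adapter, are illustrative rather than measured.

```python
# Back-of-the-envelope adapter power arithmetic for one rack.
# Only the 25W average PCIe adapter figure comes from the FCIA;
# every other number here is an illustrative assumption.

SERVERS_PER_RACK = 30      # assumption
NICS_PER_SERVER = 2        # assumption: dual Ethernet NICs
HBAS_PER_SERVER = 2        # assumption: dual Fibre Channel HBAs
CNAS_PER_SERVER = 2        # assumption: dual converged network adapters

AVG_ADAPTER_WATTS = 25     # FCIA average for a PCI Express adapter
CNA_WATTS = 25             # assumption: a 10Gbit CNA draws roughly the same

before = SERVERS_PER_RACK * (NICS_PER_SERVER + HBAS_PER_SERVER) * AVG_ADAPTER_WATTS
after = SERVERS_PER_RACK * CNAS_PER_SERVER * CNA_WATTS

print(f"Adapter power before consolidation: {before} W")  # 3000 W
print(f"Adapter power after consolidation:  {after} W")   # 1500 W
print(f"Saving per rack: {before - after} W ({100 * (before - after) / before:.0f}%)")
```

Under these assumptions, halving the adapter count halves the adapter power draw, which is where the rack-density argument comes from.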

The Fibre Channel protocol itself has no idea that it is running over Ethernet at layer 2. All of the things we normally see in Fibre Channel, such as its low-latency characteristics, security and traffic management attributes, remain when it runs over Ethernet, which was not originally designed with such things in mind.

The traditional Fibre Channel protocol has five layers, known as FC-0 through FC-4. FC-0 and FC-1 represent the physical and data encoding layers respectively, and it is these that are replaced in a Fibre Channel over Ethernet (FCoE) implementation. Ethernet's physical and media access control (MAC) layers, which correspond to the physical and data link layers of the traditional OSI stack, take over here, leaving the upper three layers of Fibre Channel to run over the Ethernet link. These three layers handle framing, services and protocol mapping.
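
To make the layering concrete, here is a minimal Python sketch of the encapsulation: the Fibre Channel frame, from the framing layer upwards, is left untouched and simply wrapped in an Ethernet header. The 0x8906 EtherType is the value registered for FCoE; the MAC addresses and payload below are placeholders, and real frames carry additional FCoE header fields (a version field and start- and end-of-frame delimiters) that are elided here for brevity.

```python
import struct

# Minimal sketch of FCoE encapsulation: an unmodified Fibre Channel frame
# (FC-2 framing and above) carried inside an Ethernet layer-2 frame.
# MAC addresses and the FC payload are placeholders; real FCoE frames also
# carry version and SOF/EOF delimiter fields, elided here for brevity.

FCOE_ETHERTYPE = 0x8906  # EtherType registered for FCoE

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a Fibre Channel frame in an Ethernet header."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame  # the upper FC layers ride along untouched

# Placeholder values purely for illustration.
dst = bytes.fromhex("0efc00010203")  # hypothetical fabric-assigned MAC
src = bytes.fromhex("0efc00040506")
fc_payload = b"\x00" * 36            # stand-in for a real FC frame

frame = fcoe_frame(dst, src, fc_payload)
print(f"{len(frame)}-byte frame, EtherType 0x{FCOE_ETHERTYPE:04x}")
```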

Data flow

The big difference between Fibre Channel and Ethernet is that the former is lossless: all of the data sent is guaranteed to arrive at the other end. Ethernet, by contrast, is lossy, so packets can be dropped under congestion and must be resent by higher-layer protocols. That is not acceptable in a low-latency SAN environment, so the architects developing FCoE had to find a way around it.

Currently, the best way to prevent the problem is to use a feature of Ethernet that enables a receiving port to send a pause request to a sending port when it is too busy to receive traffic. However, a priority flow control enhancement is being introduced that merges Ethernet's pause capability with quality-of-service capabilities, allowing traffic to be paused selectively by priority rather than link-wide.
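
As a sketch of the two mechanisms, the Python below builds both frame types: the classic link-level pause described above, and the per-priority variant introduced by the priority flow control work (standardised as IEEE 802.1Qbb). The reserved destination address, control EtherType and opcodes come from those specifications; the source MAC and timer values are placeholders.

```python
import struct

# Ethernet flow control, sketched as frame builders. The destination MAC,
# EtherType 0x8808 and the opcodes come from IEEE 802.3x / 802.1Qbb; the
# source MAC and pause timers below are placeholders.

MAC_CONTROL_ETHERTYPE = 0x8808
PAUSE_DST = bytes.fromhex("0180c2000001")  # reserved multicast address

def pause_frame(src_mac: bytes, quanta: int) -> bytes:
    """Classic 802.3x PAUSE: halts ALL traffic on the link."""
    return (PAUSE_DST + src_mac
            + struct.pack("!HHH", MAC_CONTROL_ETHERTYPE, 0x0001, quanta))

def pfc_frame(src_mac: bytes, timers: list) -> bytes:
    """Priority flow control (802.1Qbb): pauses individual priorities, so
    storage traffic can be kept lossless without stalling everything else."""
    enable_vector = sum(1 << i for i, t in enumerate(timers) if t > 0)
    return (PAUSE_DST + src_mac
            + struct.pack("!HHH", MAC_CONTROL_ETHERTYPE, 0x0101, enable_vector)
            + struct.pack("!8H", *timers))

src = bytes.fromhex("000000000001")            # placeholder source MAC
pause_frame(src, 0xFFFF)                       # pauses the whole link
pfc_frame(src, [0, 0, 0, 0xFFFF, 0, 0, 0, 0])  # pauses one priority only
```

Pausing a single priority is what lets a converged link keep Fibre Channel traffic lossless while ordinary, loss-tolerant LAN traffic continues to flow.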

The industry has developed a name for the type of datacentre-optimised Ethernet needed to fully support FCoE: Converged Enhanced Ethernet (CEE). Cisco, characteristically, has adopted its own terminology, calling it Data Centre Ethernet (DCE).

Until these issues are thrashed out, Bridger says, FCoE will be a "top of rack" solution, limited to the first few feet of the datacentre network. But as flow control issues are resolved in datacentre-class Ethernet, we will begin to see end-to-end FCoE networks deployed, especially as storage arrays begin to include native FCoE capabilities.

Although some commentators view FCoE as a protocol that will operate over an entirely separate 10Gbit Ethernet network, Bridger sees its real value when it is used in conjunction with virtualised machines. Ideally, storage array traffic would run over a high-speed Ethernet network alongside virtual machine traffic from a single server, he says.

However it is implemented, FCoE is likely to have a significant effect on the datacentre network, and on the industry in general, over the next few years. It is definitely a technology to watch.


Major FCoE players

Datacentre networking specialist Brocade set out its FCoE roadmap just over a year ago, before the standard was officially ratified. The company has launched products that bring FCoE and Converged Enhanced Ethernet together, and is working with storage vendor NetApp to ensure their products are compatible.

Cisco believes that FCoE will drive adoption of 10Gbit Ethernet networks. The company set out its own FCoE stall with the Unified Computing System (UCS), which heralded its entry into the server market and uses the standard for internal communications. It has promised FCoE communication with external devices in a future software upgrade, and also offers FCoE switches in its Nexus family.

Emulex sells host bus adapters that connect servers to Fibre Channel switches and storage arrays. It has entered the FCoE race with converged network adapters designed to carry Fibre Channel connections over high-speed Ethernet networks. Emulex was the subject of a failed takeover bid by Broadcom, which is now suing the company over a patent dispute. Broadcom is a fabless designer of networking chips, but has little or no intellectual property in the FCoE space.

QLogic is a manufacturer of converged network adapters. It launched a second-generation FCoE-capable CNA earlier this year, featuring a single ASIC that handles multiple networking functions, including FCoE.


Slow FCoE adoption

Fibre Channel over Ethernet (FCoE) was only ratified by INCITS in June, so implementations are limited. Eric Sheppard, EMEA programme director for storage research at IDC, believes its adoption will be relatively flat because of the economic situation.

"We are seeing that some of the expected adoption of FCoE has been pushed out until beyond the recession," he says. He expects people to become more interested in the technology in the second half of next year.

Nevertheless, given the cost-saving benefits associated with the new protocol, it is not surprising that some brave customers have become early adopters. LA County, one of the biggest counties in the US, announced in January that it had taken the plunge.

The county, which was already a Cisco customer, replaced eight Catalyst 6500 Ethernet switches with two of the networking supplier's Nexus 7000 FCoE switches in the datacentre core.

LA County, which has virtualised its servers and is redesigning its datacentre, needed a new network design to accommodate those changes. It settled on FCoE running over a 10Gbit Ethernet fabric.

The county's IT department argued that the move would cut power consumption by half, with converged network adapters replacing Ethernet NICs and Fibre Channel host bus adapters.

But that was back in January. The IT department at LA County has since gone very quiet. When we called them to find out what was going on, the tight-lipped staff simply said that there were "issues" around the implementation.

What might those issues be? One hopes that interoperability is not the problem. When Fibre Channel was initially ratified, it suffered from interoperability issues as suppliers implemented the protocol differently and customers found it difficult to bolt their equipment together. But experts argue that interoperability should not be as big a problem with FCoE because it already uses two well-established standards.



This was first published in October 2009

 
