Network protocol wonks have probably never had it so good. Ever since the fibre to the node debate’s eruption into Australian public life, competing standards like WiMax, HSDPA and the various DSLs have almost become items fit for discussion at polite dinner parties.
While the chattering classes are having their debate over network protocol, a parallel debate is taking place inside the nation’s IT departments, although the protocols being discussed have nothing to do with telecommunications policy or consumer broadband. The discussions are no less important, however, because what is at stake is the future of the data centre.
The importance of that discussion derives from the fact that data centres are in no way immune to the general increase in network use. One reason that data centres need more bandwidth is virtualisation, which increases network traffic when it results in a proliferation of virtual machines, each of which creates its own traffic.
Hand in hand with virtualisation is the trend towards multi-core servers, which demand more bandwidth simply by being faster than their predecessors and using that power to place more requests for network resources. Faster servers are also more likely targets for virtualisation, again driving up demand for network resources.
Another factor driving up the desire for bandwidth is the kind of data that users demand these days. A decade ago, it was more or less unheard of for the data centre to be the source of hundreds of simultaneous voice streams, a job that VoIP brings to many corporate networks today. Another innovation is desktop video, which in the form of YouTube or internal training videos is another new load for the network.
Then there’s the web, which, thanks to web applications, can bring all sorts of traffic deep into the data centre as those applications expose the core network to users galore.
IP storage is another important driver of the need for speed. 2008 has been notable for dramatic falls in the price of iSCSI storage, with complete storage area networks now selling for less than $5000. This kind of SAN is expected to become increasingly prevalent, creating a desire for a roadmap promising scalability.
These circumstances and others have made it necessary to devise new networking protocols that bring more bandwidth to the servers and storage devices occupying the racks of data centres. Some are already here.
Fibre Channel has already accelerated from 2Gbps to 4Gbps. The 8Gbps Fibre Channel standard was signed off in 2006 and products will flow in 2008. Ethernet moved up to gigabit speed in 1999 and is already available at 10Gbps.
Horses for courses
These competing standards give the industry a good old-fashioned standards war, in which tradition dictates that one protocol emerges dominant and the losers are, like OS/2 or Token Ring, reduced to little more than answers to be dredged from memory on trivia nights.
But this time, it seems the competing standards could well co-exist for some time.
“People who need the upgrade to 8Gbps Fibre Channel will make the upgrade,” says Joe Skorupa, a Research Vice-President at Gartner. “There are $US50 billion of FC assets out there and the information they contain is more valuable than the storage assets, which are not going away any time in the near future.”
The need to protect that investment, Skorupa says, means Fibre Channel’s new, faster incarnation will remain a staple in data centres that have already adopted the technology.
“Fibre Channel is already there, it is well understood and organisations have the tools to make it work,” he says.
The rival Ethernet camp has other ideas and believes it deserves to become the one protocol to rule them all.
“Any time the network has enabled consolidation of networks it has delivered benefits,” says Doug Gourlay, Senior Director for Cisco’s Data Centre Business Unit, citing multiprotocol WANs or converged networks carrying voice, video and data (and apparently saving money along the way) as the sources of such benefits. Cisco’s argument is therefore, and unsurprisingly, that Ethernet everywhere is a very fine idea indeed that will make the data centre of the future fast and easy to operate.
It may also make it a little cheaper, with a single, consolidated, Ethernet network touted as prima facie simpler, cheaper and more manageable than twin Ethernet and Fibre Channel networks. Fibre Channel has also developed a reputation as an esoteric technology with a concomitantly small pool of skilled labour that therefore attracts high wages, compared to the large pool of workers with Ethernet expertise. Even the green argument has been pressed into service for the move to a single Ethernet network, with one Ethernet switch said to need less electricity than one Ethernet and one Fibre Channel switch.
But Ethernet still has some issues, notably its heritage as a “best effort” technology that is relatively unfussed by dropped packets, leaving recovery to higher-layer protocols such as TCP. That tolerance is one of Ethernet’s major strengths, but it is a weakness in the new world of voice and video, where a quarter of a second’s delay can ruin a call or spoil a videoconference. In the data centre, similar delays are unacceptable.
Various efforts are therefore under way to harden Ethernet so that it can deal with today’s traffic. Recent quality-of-service enhancements such as priority groups, per-priority pause and congestion notification all make the protocol less likely to damage data delivery in the ways that currently leave it less than optimal for data centre deployment.
Another effort, called Data Centre Ethernet, seeks to further harden Ethernet through a variety of initiatives.
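The idea behind per-priority pause can be illustrated with a simplified sketch: instead of a plain Ethernet PAUSE that halts an entire link, a congested receiver pauses just one traffic class while the others keep flowing. The class names and methods below are illustrative only, not the actual 802.1Qbb wire format.

```python
# Simplified sketch of per-priority pause: a congested receiver pauses one
# traffic class (say, storage) while voice and bulk data keep flowing.
# Names are illustrative; this is not the real 802.1Qbb frame format.
class PriorityLink:
    PRIORITIES = ("voice", "storage", "bulk")

    def __init__(self):
        # Per-priority pause state; a classic Ethernet PAUSE would be
        # the equivalent of setting every entry to True at once.
        self.paused = {p: False for p in self.PRIORITIES}
        self.delivered = []

    def pause(self, priority):
        self.paused[priority] = True    # congestion signal for one class only

    def resume(self, priority):
        self.paused[priority] = False

    def send(self, priority, frame):
        # A paused class holds its frames rather than dropping them --
        # the lossless behaviour that "hardened" Ethernet is after.
        if self.paused[priority]:
            return False
        self.delivered.append((priority, frame))
        return True

link = PriorityLink()
link.pause("storage")                           # storage congested downstream
assert link.send("voice", "rtp-1")              # voice is unaffected
assert not link.send("storage", "scsi-write")   # held, not dropped
link.resume("storage")
assert link.send("storage", "scsi-write")       # flows again after resume
```

The point of the sketch is the contrast with classic PAUSE: pausing everything to protect storage traffic would also stall the voice streams that are equally latency-sensitive.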
Meet the hybrid
Of course, Fibre Channel has had this kind of robustness for a considerable time, leading many to wonder why anyone would bother adopting Ethernet at all. The reality, however, is that Ethernet has penetration, momentum and, thanks to work already under way to speed it to 40Gbps and 100Gbps, a roadmap that users find attractive. Fibre Channel has nowhere near Ethernet’s adoption, but thanks to its installed base and ruggedness it has many fans.
The industry’s answer to this meeting of an irresistible force and an immovable object is a hybrid that uses the strengths of each protocol, dubbed Fibre Channel over Ethernet (FCoE).
FCoE would see the lowest, physical, layer of the Fibre Channel stack handed over to Ethernet. 10Gbps Ethernet is the preferred flavour, as it gives Fibre Channel a path to scalability.
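The layering idea is simple enough to sketch: a complete Fibre Channel frame rides as the payload of an Ethernet frame, so the FC upper layers are untouched while Ethernet takes over the wire. The sketch below simplifies heavily (the real encapsulation, defined by the T11 committee, adds a version field, start-of-frame and end-of-frame delimiters and padding), though the FCoE EtherType 0x8906 is genuine.

```python
# Illustrative sketch of FCoE layering: a Fibre Channel frame is carried
# as the payload of an Ethernet frame. Field layout is simplified; the
# real encapsulation is defined by the T11 FC-BB-5 specification.
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def encapsulate(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    # Ethernet header: destination MAC, source MAC, EtherType (big-endian).
    header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return header + fc_frame

def decapsulate(ether_frame: bytes) -> bytes:
    (ethertype,) = struct.unpack("!H", ether_frame[12:14])
    assert ethertype == FCOE_ETHERTYPE
    return ether_frame[14:]  # the original FC frame, intact

fc = b"\x22\x00FCpayload"                        # stand-in for a real FC frame
wire = encapsulate(fc, b"\x02" * 6, b"\x04" * 6)  # dummy MAC addresses
assert decapsulate(wire) == fc                    # FC layer sees its own frame
```

Because the Fibre Channel frame comes back out unchanged, the FC protocol stack above the physical layer neither knows nor cares that Ethernet carried it, which is precisely the investment-protection argument made for FCoE.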
Currently being drafted, FCoE has the support of a broad coalition, including the main Fibre Channel booster, Brocade.
“Brocade is actively developing FCoE to the extent that when customers want to integrate Data Centre Ethernet into their networks they have the ability to do so,” says Jason Lamb, a Brocade “Solutioneer.”
Others are keen because it gives them another reason to sell 10Gbps Ethernet or, among server vendors, because it reduces complexity: future servers may need only one interface card instead of separate Fibre Channel and Ethernet cards.
Overshadowing all of these arguments, meanwhile, is the fact that FCoE had not (at the time of writing) become even a draft specification. Nor are all of the enhancements to Ethernet complete.
Cisco laughs off the latter situation.
“If it is good enough for [the US emergency services hotline] 911, we have proven through market performance and tech that we have a leadership position in both and have the scale and ubiquity of Ethernet with the robustness of Fibre Channel,” says Cisco’s Gourlay.
Gartner’s Skorupa has different objections.
“There are six standards to be signed off before FCoE is complete,” he points out. “Anyone who buys in early runs the risk of being locked into a proprietary system.”
He also worries, however, that hardened Ethernet is no silver bullet. “There are 12 new ASICs and six million new lines of code in Cisco’s new router,” he says. “How happy would you be to install that?”
Skorupa adds that “It has been a bad bet to bet against the economics of Ethernet, but what we are looking at now is organisations where Ethernet is not the familiar thing.”
“Those orgs have strong political issues to be dealt with. And while they can make strong arguments that they can save a little money they will ask if it is worth it to put all their core assets at risk.”
One reason Skorupa suggests users are being asked to make that decision is vendor self-interest. Vendors, he suggests, are keen on Data Centre Ethernet because it gives them something new to sell and, in Cisco’s case, puts pressure on Brocade.
“They like it because it forces you to replace your entire data centre core,” he says.
Shareholders of companies operating large data centres may therefore have reason to expand their knowledge of networking protocols. Discussing the relative merits of Ethernet and FCoE may not yet be sexy for the masses. But if business is herded towards multimillion-dollar data centre transformations, they may yet become worthy of wider discussion!