100Gbps Ethernet – is it time to move?

Do you need 100Gbps Ethernet? The real questions seem to be whether you are willing to pay for it and, given the growth of traffic within the datacentre, whether it is unavoidable.

The 100Gbps Ethernet standard was ratified in 2010 as IEEE 802.3ba. Ethernet itself was invented by Bob Metcalfe and David Boggs at Xerox PARC in 1973, when it offered a then-massive 2.94Mbps.

Fast-forward 40 years and 1Gbps Ethernet (1GbE) is commonplace in the LAN, the chips that power Ethernet ports are efficient and almost free, and 1GbE is good enough for most users' needs.

Changing traffic patterns

In the datacentre, however, it is a different story. According to Cisco, global datacentre IP traffic will nearly quadruple over the next five years, growing at a compounded annual growth rate of 31%.
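A quick sanity check shows the two halves of Cisco's projection agree: compounding 31% annual growth over five years does land close to a fourfold increase (a minimal sketch; only the 31% CAGR figure comes from the report):

```python
# Compound growth: traffic after n years = start * (1 + rate) ** n
cagr = 0.31   # Cisco's projected compound annual growth rate
years = 5

multiplier = (1 + cagr) ** years
print(f"Traffic multiplier after {years} years: {multiplier:.2f}x")
# -> roughly 3.86x, i.e. "nearly quadruple"
```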

There are a number of reasons for this, but a huge growth in both the volumes and types of traffic flowing through the datacentre plays the biggest part. As data becomes richer and grows in size – due to the rise of always-on mobile devices, centralised enterprise desktops and the explosion in cloud computing, to name a few – more pressure is felt at the back end.

Virtualisation too has profoundly changed datacentre data patterns. Traditionally, traffic consisted of external sources making requests from a server inside the datacentre, followed by the server's response. This model led to three-tier networks, with access, aggregation and core switch layers becoming the default datacentre network topology, and traffic density being lowest at the access layer, becoming greater as you moved closer to the core.

This has all changed. Cisco's global cloud index report showed 76% of traffic now stays within the datacentre. Much of the traffic now relates to the separation of functions allocated to different servers inside the facility, such as applications, storage and databases. These then generate traffic for backup and replication and read/write traffic across the datacentre.

Technologies such as VMware vMotion move virtual machines around the datacentre. Storage virtualisation, which among other attributes makes multiple arrays appear as a single entity, also increases the volume of automated transfers of large chunks of data across the facility. Parallel processing then divides tasks into chunks and distributes them.

Today's web pages are also more complex than a simple request from source to server and back again, so serving a single page can trigger data accesses from multiple locations within the datacentre.

In addition, according to Shaun Walsh, senior vice-president of marketing at Emulex, applications such as high-speed financial trading, analytics and credit card transaction sorting are creating lots of small communications that stay within the rack and need high-speed connectivity.

Reshaping datacentre networks

Datacentre networks are starting to alter to accommodate these changing traffic patterns. Instead of an architecture designed to take traffic from servers to the edge of the network and vice versa – known as north-south traffic – they are becoming fabrics, with all nodes interconnected, enabling high-speed bandwidth for east-west connectivity as well.

These datacentre network fabrics are driving the need for huge amounts of additional bandwidth, according to Clive Longbottom, founder of analyst firm Quocirca.

"The need to deal with increasing east-west traffic will mean that 100GbE is the best way to deal with latency in the datacentre, provided the right tools are in place to ensure that it all works effectively," he says.

Ethernet speeds in use

"Over time, it will then become the core to replace 10GbE, with 10GbE becoming the replacement for 1GbE [standard cascade technology]," adds Longbottom.

So new traffic patterns and network fabrics are driving demand for bandwidth – but are network technologies ready to meet this challenge? When it comes to servers, the answer is no.

Servers remain unready

According to the IEEE standards body's Industry Connections Ethernet Bandwidth Assessment report of 2012, a big shift is underway in the datacentre from 1GbE to 10GbE. Its research showed that in 2011 10Gbps server ports accounted for about 17% of the total, but forecast this to grow to 59% in 2014 as more servers with PCIe 3.0 buses come online.

In other words, 10GbE is here now – and you can find plenty of servers with four 10GbE ports as standard.

However, each virtualisation host server needs a minimum of two Ethernet ports for redundancy, so for a move to 100GbE to deliver full bandwidth on both ports, another, higher-bandwidth generation of server bus will be needed.

The IEEE report read: "The next generation of PCIe will support 100Gbps in a x8 configuration, but the future of PCIe was unknown at the time of writing this assessment. It was suggested that PCIe 4.0 would enable dual 100Gbps Ethernet server ports starting in 2015."
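The lane arithmetic behind that claim can be sketched as follows. The raw transfer rates (8GT/s for PCIe 3.0, 16GT/s for PCIe 4.0) and the 128b/130b line encoding are standard PCIe figures, not from the article:

```python
# Usable PCIe bandwidth = transfer rate * encoding efficiency * lane count.
# PCIe 3.0 and later use 128b/130b encoding (128 data bits per 130 wire bits).
def pcie_x8_gbps(transfer_rate_gt):
    efficiency = 128 / 130
    return transfer_rate_gt * efficiency * 8  # eight lanes

print(f"PCIe 3.0 x8: {pcie_x8_gbps(8):.0f} Gbps")   # ~63 Gbps: short of 100GbE
print(f"PCIe 4.0 x8: {pcie_x8_gbps(16):.0f} Gbps")  # ~126 Gbps: one 100GbE port fits
```

On those numbers a PCIe 3.0 x8 slot cannot feed even one 100GbE port at line rate, while a PCIe 4.0 x8 slot can feed one but not two – which is why dual 100GbE ports point towards x16 configurations or a later bus generation.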

The PCI-SIG, the steering body for PCIe, supports this assessment. "The final PCIe 4.0 specifications, including form factor specification updates, are expected to be available in late 2015," it says.

High-end servers today include 10GbE ports, but the IEEE projects that 40GbE will supplant those in the next year or so, with 100GbE appearing around 2015. However, Emulex's Walsh says even PCIe 4.0 will not be adequate for the needs of 100GbE.

"Intel is talking about a new replacement for PCIe 4.0 that won't arrive before 2017-18," he says, which will better handle the requirements of higher-bandwidth network technologies.

But the IEEE concerns itself mainly with bandwidth requirements, less so with whether that need could be met by a single technology. Today, racks of servers routinely deliver and receive hundreds of gigabits per second using aggregation, usually of 10GbE or 40GbE links.

Supercomputer centre opts for 40GbE 

Aggregating 40GbE technology is fraught with "gotchas", according to Tim Boerner, senior network engineer at the University of Illinois's National Centre for Supercomputing Applications (NCSA).

The NCSA uses its compute resources for calculations such as simulating how galaxies collide and merge, how proteins fold and how molecules move through the wall of a cell.

The NCSA's Blue Waters project houses some 300TB and needs to move 300Gbps out of the server clusters across an Ethernet fabric to an archiving system or to an external destination. It uses Extreme Networks' 40GbE technology and the link aggregation protocol (LACP) to move its large volumes of data.
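Aggregation protocols such as LACP typically spread traffic by hashing each flow's identifiers to choose one member link, so a single flow is capped at one link's speed and filling 300Gbps requires many transfers in parallel. A minimal sketch of the idea (the hash function and flow tuples here are illustrative; real switches use vendor-specific hardware hash policies):

```python
# Illustrative sketch: link aggregation pins each flow to one member link
# by hashing its identifiers, so one flow never exceeds a single link's speed.
# The hash below is hypothetical; real switches hash in hardware.
def pick_link(src_ip, dst_ip, dst_port, num_links):
    return hash((src_ip, dst_ip, dst_port)) % num_links

NUM_LINKS = 8  # e.g. eight aggregated 40GbE links
for port in range(5000, 5004):
    flow = ("10.0.0.1", "10.0.0.2", port)
    # The same flow always maps to the same link within a run
    print(flow, "-> link", pick_link(*flow, NUM_LINKS))
```

Because the mapping is deterministic per flow, aggregate throughput scales only when there are enough concurrent flows to spread across the member links.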

Boerner says aggregation can deliver "good [performance] numbers with sufficient transfer", but "what you run into isn't a limit of the network but the difficulty of juggling how I/O is assigned to different CPUs and PCI slots".

Despite the difficulties associated with aggregation, the NCSA made a conscious decision not to go for 100GbE. "It would be a huge expense to go for 100GbE and to have all the peripherals on that system," says Boerner.

He says the NCSA's status as a research organisation is an advantage when setting up such networks. "We have a lot of skills at performance tuning and can take time to meet those goals in a way that an enterprise might not find cost-effective. Having come from private industry, I know they want it to just plug in and work," he says.

"These switch-to-switch links, access to aggregation and aggregation to core, are where we’ll see the bulk of 40GbE and 100GbE used in the near and medium term," says Brian Yoshinaka, product marketing engineer for Intel's LAN Access Division.

100GbE switches ready to go

Major network suppliers, such as Cisco, Juniper and Brocade, already sell 100GbE-capable switches and routers, mostly aimed at datacentre fabrics for users such as telecoms operators and cloud providers. Large enterprises now have the tools to create fabric networks, and as the technology exists, it is possible to build such a high-speed network.

But 40GbE switch ports already cost $2,500 each on average, which makes the still faster 100GbE far from a cheap option.

An alternative is link aggregation, the technique used by many, including the NCSA. One limitation of aggregation is that all the grouped physical ports must reside on the same switch, although there are ways around this. There is another, more fundamental problem, though.

"In the short term, fabric networks can hide the need [for 100GbE] by aggregating links to create any needed bandwidth," says Longbottom. "Those who aren't looking at fabric probably aren't the sort who will need 100Gbps any time soon.

"But if you are sticking with Cat6 Ethernet, there are only so many ports you can fit into a space, and if you are taking up 10 of these to create the equivalent of one port, it becomes expensive in space, cables and power to carry on that way," he says.

Enterprises are only now starting to buy 10Gbps ports in significant numbers, so prices of faster ports are likely to remain high in the near future. The cost of peripherals and supporting technologies, such as packet inspectors and management tools, which will also need to be made 100GbE-capable, will stay high too.

That makes 40GbE the more cost-effective route in the medium term. Datacentre network managers may also be expecting the enterprise to outsource significant volumes of traffic to a variety of cloud providers, relieving them of the need to plan and pay for expensive network upgrades.

Those opting for 100GbE are likely to have armed themselves with a very powerful financial case for investing in the technology at this stage.



This was first published in April 2013

 
