Feature

Survive the wait for faster Ethernet

Feeling the pinch with your Gigabit Ethernet network backbone? Having a few too many packet collisions? Network congestion causing you problems? If you are answering yes to these questions, you should spare a thought for Joe Lawrence. The principal architect at fibre optic wide area network provider Level 3 is dealing with traffic orders of magnitude greater, and when he feels the pinch, it really hurts.

Lawrence has to manage traffic volumes in his core network that would make your hair curl. Packets flow through the heart of his infrastructure at hundreds of gigabits per second (gbps).


"It is not a question of whether we will have a need for a solution by 2010, it is a question of when the solution we need right now will show up," he says.

The year 2010 is important for Lawrence because it is the likely ratification date for the 100 Gigabit Ethernet standard, which will let network managers deliver 100gbps of traffic through a single port. Currently, delivering that sort of traffic means bonding together links with a tenth of the capacity, and Lawrence has to be creative in his solutions.

He is not the only one trying to squeeze a swimming pool through a drinking straw. Internet exchanges, large telecoms companies, major content providers and others are all facing the same constraints, and either need higher speeds now, or they will do very soon.

For many of these organisations, the next two and a half years will be a long wait. In the meantime, the people responsible for developing the 100 Gigabit Ethernet standard are still in the early stages of thrashing it out, and they have not officially got a working group under way yet. What is going on?

The move to 100 Gigabit Ethernet

Moving to 100gbps is a big leap. The last big increase in Ethernet speeds happened in 2002, with the ratification of the 10 Gigabit standard. However, the internet never stands still, and traffic volumes are increasing substantially.

Thanks to everything from internet video to the massive increase in voice over IP communications, the people that keep the internet running need more capacity. As a result, a collection of suppliers and end-user organisations making up the Institute of Electrical and Electronics Engineers' (IEEE) 802.3 Higher-Speed Study Group (HSSG) began the journey down the long, red tape-ridden road to ratifying a new standard.

HSSG chair John D'Ambrosia says, "The HSSG call for interest, which is how you get the programme started in the IEEE, happened in July 2006. The focus at that time was on a networking perspective, particularly in terms of aggregation.

"We had people from the carrier networks, and people from datacentres participating. We had individuals from the Wall Street stock exchange, and we had the video people coming in too. We saw this recurring theme of traffic growth."

Participants were looking at ways to increase the throughput at the level where high-throughput switches handle large amounts of aggregated traffic from other networking equipment. People such as Lawrence rely on running lots of links in parallel.

Problems of parallel links

This presents challenges of its own, says Val Oliva, director of product management at Foundry Networks. "You will notice that there is a latency difference when traffic is serialised over a big link, as opposed to aggregating over multiple links," he says.
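
A back-of-the-envelope calculation makes the latency point concrete (a sketch only, using a full-size 1,500-byte Ethernet payload): clocking a frame onto a single 100gbps link takes a tenth of the time it takes on a 10gbps member link, and in an aggregated bundle each individual frame still moves at the member-link rate.

```python
FRAME_BITS = 1500 * 8  # a full-size Ethernet payload, in bits

def serialisation_delay_ns(link_gbps: float) -> float:
    """Time to clock one frame onto the wire, in nanoseconds."""
    return FRAME_BITS / link_gbps  # bits / (gigabits per second) = nanoseconds

print(serialisation_delay_ns(10))   # ~1,200 ns on a 10 Gigabit member link
print(serialisation_delay_ns(100))  # ~120 ns on a single 100 Gigabit link
```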

Sending traffic over multiple links also creates synchronisation problems, should packets arrive out of order. "What happens if it is out of order is that the other node will say 'I did not get the packet, please retransmit'."

Add to that the associated management problems and things begin to get even more problematic. Stringing multiple links together to get the bandwidth you need means that you must monitor each individual link for network management statistics, increasing your overhead.
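
As a rough sketch of that overhead (illustrative only: the interface names and the read_octet_counter helper stand in for whatever SNMP or CLI polling a given management system actually uses), an eight-link bundle means eight sets of counters to poll, store and alarm on rather than one:

```python
import random

def read_octet_counter(interface: str) -> int:
    """Stand-in for an SNMP GET of a per-interface byte counter (e.g. ifHCInOctets)."""
    return random.randrange(10**12)  # simulated reading, for illustration only

def poll_bundle(members: list[str]) -> dict[str, int]:
    """Poll every member of an aggregated bundle: one reading per link, every interval."""
    return {iface: read_octet_counter(iface) for iface in members}

# An 8 x 10 Gigabit bundle: eight interfaces to monitor instead of a single port.
bundle = [f"TenGigE0/0/{i}" for i in range(8)]  # illustrative interface names
for iface, octets in poll_bundle(bundle).items():
    print(iface, octets)
```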

"Running eight 10 Gigabit links in parallel is an operational issue in datacentres where you want to plug and forget," says Lawrence.

But companies that manage networks for a living are prepared to spend extra time tweaking their infrastructure for high performance.

Lawrence manages the packet synchronisation problem using flow-based load balancing equipment, but those load balancers have to look inside the TCP header to manage the traffic, which makes it more difficult to scale.
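
To see why flow-based balancing keeps a TCP stream in order, consider a minimal sketch (the packet fields and hashing scheme here are illustrative, not Level 3's actual equipment): the balancer hashes each packet's 5-tuple, so every packet belonging to one flow is pinned to the same member link and cannot overtake its neighbours on a different link.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    """Just the header fields a flow-based balancer needs to read (illustrative)."""
    src_ip: str
    dst_ip: str
    src_port: int   # taken from the TCP header
    dst_port: int   # taken from the TCP header
    protocol: str

def pick_link(packet: Packet, num_links: int) -> int:
    """Hash the 5-tuple so every packet of a given flow lands on the same member link."""
    key = f"{packet.src_ip}|{packet.dst_ip}|{packet.src_port}|{packet.dst_port}|{packet.protocol}"
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# Every packet of this flow maps to the same member of an 8-link bundle,
# so its packets cannot be reordered relative to one another.
flow = Packet("192.0.2.1", "198.51.100.7", 49152, 80, "tcp")
print(pick_link(flow, 8))
```

The cost described above falls out directly: the device has to dig into the IP and TCP headers of every packet to build that key, and no single flow can ever use more than one 10 Gigabit member link's worth of capacity.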

These are not insurmountable problems - Lawrence is aggregating dozens of 10 Gigabit Ethernet links at a time, and the company has capacity for growth - but they make his job harder.

So, with unprecedented pressure from the market, why the hold up? The HSSG's job at the outset was relatively easy. It fixed on 100gbps as its target speed and was then able to lay out a core set of technical objectives that would form the basis of its goals. All was going well at the HSSG until another contingent raised issues that complicated the situation.

D'Ambrosia says, "Individuals from Sun said that it did not service the server requirements. Here, we are not talking about network aggregation. We are talking about the servers themselves."

Accommodating two speeds

Servers have to connect to the network, and some people did not want them to do that at 100gbps. This is because the requirements for core network speeds and for direct communications between servers and networking equipment are growing at different rates.

The HSSG eventually decided to accommodate both parties by including two speeds within the single standard. None of this surprised former physicist Stephen Garrison, vice-president of marketing at Force 10 Networks.

"The PCI bus in the server is moving from PCIe to PCI2e early next year," he says, adding that this creates the need for a 40gbps connection.

Experts say that whereas many networking firms are asking for 100gbps speeds now, people connecting servers to switches will not need those speeds until about 2015 at the earliest.

So why not simply work on a 100gbps standard and let it cover slower connections by default? "You have to realise that these markets have different cost targets and power requirements.

"If you design for 100gbps and then use it at 40gbps, it does not get rid of the cost and power implications, so we are really optimising for the two rates, rather than simply using the one rate at the slower speed," says D'Ambrosia.

This led to the development of a set of HSSG objectives designed to support both parties. Both speeds have some things in common. For example, they will support full duplex operation only, which, like the 10 Gigabit Ethernet standard, gets rid of the Carrier Sense Multiple Access With Collision Detection (CSMA/CD) packet collision issues that all previous versions of Ethernet suffered from under half-duplex operation.

Unsurprisingly, the traditional Ethernet frame format and frame sizes will be preserved, ensuring backwards-compatibility with existing Ethernet standards.

Where things differ is at the physical layer. Operation at 40gbps should support distances of at least 100 metres on multimode fibre connections, and at least 10 metres over a copper cable assembly. The backplane connections in high-speed server chassis will support 40gbps over a distance of at least one metre. For 100gbps operation, planners are aiming for distances of at least 40km on single-mode fibre, at least 100 metres on multimode fibre, and at least 10 metres over a copper cable assembly.

These objectives clearly categorise 40gbps and 100gbps inside and outside the datacentre, respectively, says John Jaeger, director of business development at Infinera, which sells equipment for high-speed network links.

Jaeger, like others that Computer Weekly spoke to, has concerns over the inclusion of the 40gbps standard within the HSSG's remit.

"Our customers have made it clear that their business today is being impacted by an inability to aggregate 10 Gigabit Ethernet connections in the network. Our first concern was that we needed to take care of our customers and did not want to delay that activity," he says.

Jaeger worries that accommodating the slower speed will delay the standards process, and others have said that the debate between those focused on server connections and those concerned with core network aggregation delayed the process by between six and 12 months.

Including server-side stakeholders in the standard also raises the question of what will happen to other connectivity technologies in that area, such as Infiniband.

"There is a strong focus on using Ethernet in more of these applications. A lot of guys were building high-performance clusters and started using Gigabit Ethernet," says Brad Booth, president of the Ethernet Alliance. He adds that many high-performance network firms began writing optimised TCP stacks for their own environment.

"There is a strong trend towards a common networking technology," says Booth. It makes for cheaper infrastructure, which is why, for example, the incumbent 40gbps fibre optic networking standard, OC768, will not cut it for many datacentre operators.

"It is more cost-effective to buy four 10 Gigabit Ethernet ports than a single OC768 port," says Lawrence.

What it means for IT managers

In the short term, all this will have little impact on network and IT managers. All that is really known are the objectives for a standard.

The HSSG has issued a project authorisation request - a contract with the IEEE agreeing the scope of work on the project. Once that is accepted, the task force can be created and proposals for the draft standard can then be considered and reviewed.

The drafts will gradually be opened up to a broader base of reviewers until the standard eventually makes it to ratification. That will probably take the rest of the decade and, unlike Ethernet itself, the process is one thing that is very hard to speed up.




 


This was first published in November 2007

 
