Is the shine rubbing off Gigabit Ethernet?

The idea of 10 Gigabit Ethernet was seen as the glittering solution to network problems: a bigger, faster, even fatter pipe that maximises a network's performance. But does the average IT department really need such capacity, asks Philip Hunter

If someone had scripted the Ethernet story 16 years ago when the technology first hit the commercial IT world, no one would have believed it. Time and again it has withstood challenges from apparently more efficient, better performing, network transmission technologies, such as Token Ring, fibre distributed data interface and asynchronous transfer mode (ATM). And in all cases Ethernet has seen them off by reinventing itself.

Some would say the success is illusory, arguing that the original 10 megabits per second (mbps) shared Ethernet running over coaxial cable and the emerging 10 gigabits per second (gbps) fibre transmission, which is 1,000 times faster, resemble each other in little more than name. The underlying physical transmission has been transformed to meet far more stringent timing requirements, and it was once argued that Gigabit Ethernet had less in common with the original Ethernet than ATM did.

But the question is, should users care? It is becoming clear that the answer is no, given that the industry has solved most of the technical problems, and indeed ensured that there is at least some continuity in management between the various iterations of Ethernet.

For IT and network managers, the onus now is on following developments in the emerging 10 Gigabit Ethernet standard. While they may not need these speeds now, it is going to have a big role in emerging broadband networks, at least in the immediate to mid-term future. It may, in fact, turn out that 10 Gigabit Ethernet is the last in the long line of Ethernet improvements and that further increases will be achieved by developments at the physical optical layer, notably with dense wave division multiplexing.

But, meanwhile, Ethernet has been lucky, benefiting from parallel improvements in transmission technologies, plus commercial developments that mean it is now poised to take over in the wide area network (Wan) as well, having already won the battle in the local area network (Lan). In this way Ethernet at last promises to provide an end-to-end transmission protocol carrying data in the form of IP packets. ATM emerged as a likely candidate to provide this ubiquity during the early 1990s, but technical and commercial developments since then have worked against it.

As Fred Engel, chief technical officer of the US networking system supplier Concord Communications points out, ATM was developed for an era when the Wan was a serious bottleneck with limited bandwidth, and there was a need for a protocol that could optimise traffic efficiently, and guarantee quality of service for those real-time applications such as voice and video that could not tolerate much delay. To provide this optimisation, ATM had to be complex and difficult to manage. And now, with Wan bandwidth becoming plentiful because of all the fibre being laid, this is a price no longer worth paying, says Engel.

This complexity made ATM switches, and the network interface cards for attaching systems to the network, expensive. Cost also hindered ATM's penetration into the Wan, according to Roger Hockaday, vice-president of strategic marketing for Alcatel. Hockaday's view is that ATM could only have prevailed if suppliers of ATM systems had adopted the same sort of enlightened approach that the mobile phone industry has taken with handsets, subsidising them to make them affordable and create the critical mass that would have led to subsequent economies of scale.

"The business model was wrong, with ATM costing five times as much as Ethernet for comparable performance," says Hockaday. "The general rule is that people will pay for added value, but only at a premium of 20% or so."

Similar considerations are now giving Ethernet a leg-up into the Wan, and at 10 gbps the next variant looks likely to be adopted by carriers and service providers that currently use ATM widely within their wide area switching infrastructures. This will enable them to change the shape of networking by providing high-speed Ethernet interfaces allowing customers to extend their existing Lans over long distances with no bottleneck and no sacrifice in performance.

At first sight the only difference this appears to make is one of speed. After all, Lans have been interconnected for well over 12 years, usually with routers from suppliers such as Cisco. But the routers have always provided a point of discontinuity, with different protocols such as frame relay and ATM often providing the wide area transmission. The physical Lans at each site had to be managed as separate entities, and although it was possible for a user on one Lan to access, say, a server on another at a remote site, this required a layer of processing and management that both increased complexity and reduced performance.

With the growing dispersion of teams and workgroups over multiple sites, this increased the number of complaints from users about the network being slow. The idea of the virtual Lan emerged to describe a fluid workgroup that could be dispersed in this way, but this comprised little more than a set of tools to ease administration. Nothing changed beneath the covers.

But if it is possible to extend Ethernet Lans with no break either in protocol or bandwidth over long distances, then the word Lan really does become a misnomer. And the long-awaited dream of the unified end-to-end network will have become a reality. This is a little way off, but progress is rapid and, to judge by analysts' predictions, will come to pass within two to three years. According to Gartner, the worldwide market for top-end 10 Gigabit Ethernet switches, which has barely started as the standard is only just firming up, will kick off properly in 2001 with an estimated $71m (£44m) in sales. The market is then projected to increase roughly tenfold to be worth $700m in 2002 and reach $3.6bn in 2004.

Assuming that such projections are right, and that 10 Gigabit Ethernet does take over the networking world, providing end-to-end data transmission, a number of issues and questions will arise for IT managers.

An obvious question is how to migrate from existing enterprise networks, and whether equipment such as routers will need replacing. At first sight routers, whose role has been to connect multiple Lans together over either short or long distances, will become redundant as networks flatten out over what in effect would be one huge geographically dispersed Lan.

But at second sight, the role of routers, while changing in some respects, will be as important as ever. There will still be a need for a wide range of control, management and security functions, many provided by routers which, after all, have expanded enormously in scope since the early days when they just ensured that data got to the right destination Lan.

Routers of some form will also need to be retained for another reason, which is to provide buffering capacity at each site to hold data about to be transmitted in case it needs to be resent. The point here is that, occasionally, packets are dropped during transmission and need to be resent. For this to be possible copies of the packets need to be held temporarily within a buffer until the destination confirms that they have been received. The required size for such buffers increases in proportion to both the transmission speed and the time taken to reach the destination.

The latter also increases over longer distances and so will be greater with the new wave of wide area Gigabit Ethernet services. And 10 gbps is very fast. Therefore, according to Bill St. Arnaud, senior director of network projects for Canada's advanced Internet development organisation, CANARIE, current router/switches, which already have large buffer memories, will predominate in the 10 Gigabit Ethernet market for customer premises equipment.
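To put rough numbers on this buffering point, here is a minimal sketch of how the required buffer grows with both speed and distance. The 10 gbps rate and the round-trip times used are illustrative assumptions, not figures from the suppliers quoted here.

```python
# Minimal sketch: a sender must hold copies of unacknowledged packets for at
# least one round trip, so the buffer needed is roughly bandwidth multiplied by
# round-trip time (the bandwidth-delay product). The 10 gbps rate and the
# round-trip times below are illustrative assumptions, not figures from the article.

def min_buffer_bytes(bandwidth_bps: float, round_trip_s: float) -> float:
    """Minimum in-flight data a sender must be able to buffer, in bytes."""
    return bandwidth_bps * round_trip_s / 8  # divide by 8 to convert bits to bytes

for label, rtt_s in [("campus link, 2 ms round trip", 0.002),
                     ("long-haul link, 50 ms round trip", 0.050)]:
    megabytes = min_buffer_bytes(10e9, rtt_s) / 1e6
    print(f"{label}: roughly {megabytes:.1f} MB of buffer per 10 gbps stream")
```

The jump in delay over long distances is what pushes the figure from a few megabytes on a campus to tens of megabytes over the wide area, which is why router/switches with large buffer memories are expected to predominate.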

There will continue to be some complexity to deal with in managing end-to-end 10 Gigabit Ethernet networks, and partly for this reason there is likely to be a shift towards greater outsourcing, often to the carriers providing the long-distance Gigabit Ethernet services.

This begs the question of how soon most enterprises will need 10 Gigabit Ethernet. To an extent this projected surge in network transmission capacity rekindles the old debate over whether the best response to that perennial moan from users, "the network's too slow", is to throw bandwidth at it or to improve efficiency. As Marc Droulez, chief technical officer of Avaya, formerly the enterprise network group of Lucent Technologies, points out, there is a tendency to fill whatever bandwidth is available.

But it will take time for that tendency to consume even the single gigabit backbone capacity already in place within many enterprises, and for this reason Droulez expects a relatively small niche for 10 Gigabit Ethernet in the next year or so, very much in line with Gartner's initial projections.

"There's always been a subset of customers throwing bandwidth at the problem," says Droulez.

A somewhat different view is taken by Kevin Johnson, divisional director for IP business solutions at network system supplier Getronics. Johnson argues that precisely because applications tend to fill the available capacity in a poorly managed network, it is important to thwart that tendency rather than continually throw money at the problem.

"Today's 100 mbps or 1 gbps Ethernet networks will probably be pressurised with voice, data and video, and we will need to look at how you manage that data," says Johnson. "Yes, if we had 10 Gigabit Ethernet, we'd be able to back off that for short time, but the way the trends are going, the focus needs to be on management.

Concord's Engel vehemently disagrees with this line, but irrespective of how that debate goes, there is one group of enterprises that will definitely soon need 10 Gigabit Ethernet: the Internet service provider (ISP) sector, and particularly the Internet exchanges that act as interconnection points between different ISPs. For example, the London Internet Exchange (Linx) will soon be implementing 10 Gigabit Ethernet as a matter of urgency, according to its executive chairman, Keith Mitchell.

"Currently, we run at 3.2 gbps within our network infrastructure, and with demand doubling every 100 hours, it doesn't take a mathematical genius to see that we will quickly run out of capacity," says Mitchell.

ISPs whose networks are interconnected by Linx do not yet need 10 Gigabit Ethernet, but it will only be a matter of time before many do, Mitchell reckons.

One point worth clarifying is the fact that 10 Gigabit Ethernet will not immediately provide the full 10 gbps capacity over a single channel. Initially, 10 gigabit backbones will in fact provide multiple lower speed channels whose sum total is 10 gbps, just as happened at lower speeds with single Gigabit Ethernet. At first, therefore, the full 10 gbps will only be available as an aggregate capacity rather than to any single system. Not that anyone would need 10 gbps to their desktop anyway, but it will take a year or two before it will be possible to pump data at 10 gbps down a single channel.
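The practical consequence can be sketched in a few lines. The split of a 10 gbps trunk into four 2.5 gbps channels below is an assumed example, not a figure from the standard as described here; the point is simply that any one stream is limited to a single channel's rate even though the trunk as a whole carries 10 gbps.

```python
# Assumed example: a 10 gbps backbone delivered as four aggregated 2.5 gbps
# channels. The aggregate is available to the network as a whole, but any single
# stream is confined to one channel and so tops out at that channel's rate.
channels_gbps = [2.5, 2.5, 2.5, 2.5]

aggregate_gbps = sum(channels_gbps)          # capacity across all traffic combined
single_stream_cap_gbps = max(channels_gbps)  # best case for any one stream

print(f"Aggregate backbone capacity: {aggregate_gbps} gbps")
print(f"Ceiling for a single stream: {single_stream_cap_gbps} gbps")
```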

It then remains to be seen whether there will be yet another extension of Ethernet, to 100 gbps. It may well be that further increases in bandwidth will be provided by optical technologies, transporting the IP packets directly and bypassing the Ethernet layer.

But it is just as likely that there will be yet another twist to the Ethernet tale.

Into the blue

How soon will you need 10 Gigabit Ethernet?

The case for 10 Gigabit Ethernet is undeniable for Internet exchanges, and increasingly for other service providers as an alternative to ATM. But for most enterprises, the need for it can be delayed by making more efficient use of the bandwidth already in place. According to Matthew Bell, product manager at Fluke, one of the world's largest suppliers of network test equipment, many IT managers respond to the familiar call of users, "the network's too slow", by throwing more bandwidth at the problem, when in reality the bottleneck lies elsewhere.

The problem is, they lack the tools to prove it.

"I think there is a real argument to be made that typical current bandwidth levels are more than adequate for the vast majority of situations," said Bell. "What seems to be lacking in some situations is that the network manager doesn't have the time or tools available to get a good idea about how well the network is actually running.

Bell admits that hot spots do occur within networks, where extra bandwidth is needed. But even these hot spots will not need 10 Gigabit Ethernet; at most they will require single Gigabit Ethernet at the workgroup level.

So really at the enterprise level, 10 Gigabit Ethernet will only figure as a backbone technology interconnecting a significant number of power users with top-end Unix workstations.
