Centred on speed: The Road to 10G, Part 3

In the third part of our series about 10G Ethernet, we explore its impact on the data centre.

Although data centres have come back into vogue, they don't look much like the original glass-walled rooms where mainframes and minicomputers used to live. In those early days of computing, keeping all your processing toys in one room made sense from a physical infrastructure perspective - providing the necessary power and cooling was affordable only on a per-room basis, not across a whole building. Present-day data centres, while still consuming plenty of power and cool air, also provide the close proximity that interconnect technologies require.

Application servers today are predominantly connected by 1Gb Ethernet over low-cost copper cabling, but their need for high-speed access to data stored on SANs over Fibre Channel connections makes a compelling case for co-location. There may also be a server cluster in the data centre providing high-performance computing, typically running over InfiniBand or another specialised high-speed interconnect. The growth in blade-style servers to satisfy the demand for increased virtualisation provides yet another imperative for locating all these servers inside the data centre, but the multiple interconnect types are a constant source of frustration for systems administrators in terms of both cost and complexity.

10G Ethernet holds the promise of a converged interconnect for the data centre, but there are challenges for the designers of server backplanes and of the network interface cards (NICs) needed to make the dream a reality. Buffering a single second of data at 10G speeds requires more than 1GB of memory, and few systems engineers will be happy with the prospect of loading more RAM into their NICs than their servers use for main memory. For this reason the market eagerly awaits high-speed ASICs that offload the protocol processing from the servers. What were once powerful servers can quickly be swamped when fed data at 10G speeds, and although backplanes are being ramped up to cope, intelligent NICs will be the order of the day in the emerging world of 10G Ethernet, be it copper- or fibre-based.
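
To put numbers on that buffering claim, here is a minimal Python sketch (purely illustrative; the 50ms window is our assumed worst-case round-trip delay, not a figure from this article) of how buffer memory scales with the traffic window held at line rate:

    # Back-of-the-envelope buffer sizing for a 10G link.
    LINE_RATE_BPS = 10 * 10**9  # 10 Gb/s, in bits per second

    def buffer_bytes(window_seconds, rate_bps=LINE_RATE_BPS):
        """Bytes of memory needed to hold window_seconds of traffic at rate_bps."""
        return rate_bps * window_seconds / 8  # 8 bits per byte

    # A full second at 10G is 1.25 billion bytes (about 1.16 GiB),
    # comfortably more than the 1GB quoted above.
    print(f"1 s at 10G:   {buffer_bytes(1.0) / 2**30:.2f} GiB")

    # Even a 50ms window (an assumed worst-case round trip) still
    # demands roughly 60 MiB of fast memory on the NIC.
    print(f"50 ms at 10G: {buffer_bytes(0.05) / 2**20:.1f} MiB")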

The allure of lower-cost copper cabling for 10G Ethernet may well prove short-lived once optical equipment manufacturers achieve their goal of $100 per port, particularly when the promise of re-using the optical cable plant for even higher speeds in the near future is factored into the equation. Copper may win the war in the wiring closet for vertical and campus interconnects, but advances in optical interconnects are likely to persuade many systems engineers to adopt a wait-and-see approach before switching to 10GBaseT in the data centre itself.

< Back to Part 2
