Grid computing and server virtualisation could be the answer to network bottlenecks

How many people do you know who are deploying 10 Gigabit Ethernet widely?

Although the carriers have requirements for 100 Gigabit and beyond, the other 99.99% of IT users are just settling into the idea of Gigabit Ethernet being something of a luxury. Where 10 Gigabit makes sense is in a set-up with several gigabit feeds coming from servers and other networking devices.

In contemporary networks, throwing more bandwidth at a bottleneck does not solve the problem, because the real bottlenecks sit where bandwidth cannot help - at the server and at the network edge.

At a recent Netevents industry symposium, Gartner research vice-president Ian Keene said, "We have got to the point where network managers cannot just throw bandwidth at the problem any more. It is not going to work and they are going to have to put some more thought into the problems."

So what alternatives are there? After many years of development and false promises, we have a real alternative to the way data is handled at one of those "pain points" - the server - in the form of both grid computing and virtualisation of servers. False promises because it is over 15 years since I recall seeing the first attempt at grid computing, and we have all been waiting ever since. But now it is here and it appears to work.

Oracle has got heavily into "the grid" since its 10g software release. It sees this method of allowing combined server resources to be made available in a grid format, akin to the electricity power grid, as a big step forward for server optimisation.

At the same time, the virtualisation of servers, with EMC-owned VMware leading the way, is allowing for the breakdown of the classical 1:1:1 relationship between server front-end, operating system and storage. This breaks down the geographical requirements of that hardware too; the physical bits of tin need not reside in the same machine room or operations centre, or even, theoretically, the same continent. Not only does this give the IT manager huge flexibility in how applications are resourced and server time is allocated, but it also, in theory, pushes server utilisation towards 100%.
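
To make the consolidation argument concrete, here is a minimal sketch, in Python, of the sort of packing logic a virtualisation layer effectively applies. The workload figures and host capacity are invented for illustration, and this first-fit approach is a generic stand-in, not any vendor's actual placement algorithm.

```python
# Consolidation sketch: first-fit decreasing bin packing.
# Workload demands and host capacity are illustrative assumptions.

HOST_CAPACITY = 100  # arbitrary units of CPU per physical host

def consolidate(demands, capacity=HOST_CAPACITY):
    """Pack workloads onto the fewest hosts, largest first."""
    hosts = []  # load already placed on each powered-on host
    for demand in sorted(demands, reverse=True):
        for i, load in enumerate(hosts):
            if load + demand <= capacity:
                hosts[i] += demand  # fits on an existing host
                break
        else:
            hosts.append(demand)  # nothing fits: power on another host
    return hosts

# Ten workloads that would traditionally mean ten physical servers.
demands = [35, 20, 15, 40, 10, 25, 30, 5, 45, 15]
hosts = consolidate(demands)
print(f"{len(demands)} workloads on {len(hosts)} hosts")
print("load per host:", hosts)  # -> [100, 90, 50]
```

Run as-is, the ten workloads land on three hosts, each running at or close to capacity - the utilisation gain the vendors are selling.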

But what is the impact on the network as a result of this? Andy Cleverly, director of Oracle technology marketing EMEA, said grid computing, as it has been deployed to date, has required neither significant topology changes to the network nor any extra bandwidth. The focus so far has been on server consolidation and lowering operating costs.

Site failover or disaster recovery is another emerging application for geographically dispersed grids, indicating that point-to-point wide area network connections, rather than Lan connections, are where the key bandwidth requirements are set to be.

It is much the same story with virtual servers. According to Richard Garsthagen, technical marketing manager, VMware EMEA, there is no reason why virtual servers could not be spread geographically across continents, especially where an application can support asynchronous rather than synchronous replication - ie, the virtual servers do not all have to be identically up to date at all times. Otherwise, bandwidth does become the limiting factor, again at the Wan or internet level.
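
As a rough illustration of why that distinction matters over a Wan, the sketch below contrasts the two modes. The 80 millisecond round-trip time and one millisecond local write latency are assumptions chosen for illustration, not measured figures.

```python
# Synchronous vs asynchronous replication over a Wan link.
# Both latency figures are illustrative assumptions.

WAN_RTT = 0.080       # seconds to get a remote acknowledgement
LOCAL_WRITE = 0.001   # seconds for a local write to complete

def synchronous_time(n_writes):
    """Every write blocks until the remote replica acknowledges."""
    return n_writes * (LOCAL_WRITE + WAN_RTT)

def asynchronous_time(n_writes):
    """Writes complete locally; the replica catches up later."""
    return n_writes * LOCAL_WRITE

n = 1000
print(f"sync:  {synchronous_time(n):.1f}s for {n} writes")
print(f"async: {asynchronous_time(n):.1f}s for {n} writes (replica may lag)")
```

With these figures, 1,000 synchronous writes take 81 seconds against one second asynchronously, which is why applications that tolerate a lagging replica are the natural fit for intercontinental virtual servers.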

Optimisation also comes in other forms. For example, with VMware's VMotion technology, it is possible to move a user-intensive virtual server from one physical server to another without any downtime. Another benefit is in continuously balancing workloads across the data centre to make the most effective use of resources in response to changing business demands.
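
The balancing idea can be sketched as a simple greedy loop: migrate a virtual machine from the busiest host to the least busy one for as long as a move narrows the gap. This is an illustrative stand-in, not VMware's actual scheduling logic, and the host names and loads are invented.

```python
# Greedy rebalancing sketch: migrate one VM at a time from the busiest
# host to the idlest. Not VMware's algorithm; purely illustrative.

def rebalance(hosts):
    """hosts maps a host name to a list of per-VM loads."""
    moved = True
    while moved:
        moved = False
        busiest = max(hosts, key=lambda h: sum(hosts[h]))
        idlest = min(hosts, key=lambda h: sum(hosts[h]))
        gap = sum(hosts[busiest]) - sum(hosts[idlest])
        # Move the smallest VM that still narrows the imbalance.
        for vm in sorted(hosts[busiest]):
            if vm < gap:
                hosts[busiest].remove(vm)  # "migrate" the VM
                hosts[idlest].append(vm)
                moved = True
                break
    return hosts

hosts = {"esx1": [40, 30, 20], "esx2": [10], "esx3": [15, 5]}
print(rebalance(hosts))  # loads end up roughly even across the three hosts
```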

Although a physical server would typically require one or two Ethernet interfaces, a virtualisation host would usually require a four-port adapter. The VMware approach means fewer physical servers are required, so the overall port count is more likely to go down, not up. And the bandwidth requirement is still the same.
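
The port arithmetic works out roughly as below. The 20 workloads, 10:1 consolidation ratio and adapter sizes are assumptions chosen for illustration.

```python
# Back-of-envelope port count, before and after consolidation.
# All figures are illustrative assumptions.

workloads = 20
ports_per_physical_server = 2   # one or two interfaces, per the text
consolidation_ratio = 10        # VMs per virtualisation host (assumed)
ports_per_virtual_host = 4      # the four-port adapter mentioned above

before = workloads * ports_per_physical_server
hosts_needed = -(-workloads // consolidation_ratio)  # ceiling division
after = hosts_needed * ports_per_virtual_host

print(f"before: {before} ports, after: {after} ports")
# -> before: 40 ports, after: 8 ports: the count goes down, not up
```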

The fundamental problem is that there are not that many "super clusters" around that need 10 Gigabit pipes between them. Most servers do not put out that much traffic, so Gigabit Ethernet pipes are fine.

It is far more effective to optimise what is already there than to put fatter pipes in place. And as traffic becomes more, say, XML-oriented, the ability to intelligently route and accelerate that traffic (eg, strip off unnecessary bits) outweighs the benefits of simply providing more bandwidth. And, put simply, the backbone always needs significantly more bandwidth than any single desktop or server.
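
As a toy example of that kind of "strip off unnecessary bits" step, the sketch below removes insignificant whitespace from an XML payload before it crosses the wire. The sample document is invented, and real XML acceleration devices do considerably more than this.

```python
# Toy XML slimming: drop whitespace-only text left over from
# pretty-printing so fewer bytes cross the backbone.
import xml.etree.ElementTree as ET

raw = b"""<order>
    <item sku="A100">
        <qty>3</qty>
    </item>
</order>"""

root = ET.fromstring(raw)
for elem in root.iter():
    if elem.text and not elem.text.strip():
        elem.text = None   # whitespace-only content, safe to drop
    if elem.tail and not elem.tail.strip():
        elem.tail = None

compact = ET.tostring(root)
print(f"{len(raw)} bytes -> {len(compact)} bytes")
```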

So 10/100/1,000 feeding across the Lan into a 10 Gigabit backbone makes sense. At some point in the future, so will 100/1,000/10,000 feeding into a 100 Gigabit backbone, but not yet.

Broadband-Testing Labs

Steve Broadhead runs Broadband-Testing Labs, a spin-off from independent test organisation the NSS Group.

Author of DSL and Metro Ethernet reports, Broadhead is now involved in several projects in the broadband, mobile, network management and wireless Lan areas, from product testing to service design and implementation.

www.broadband-testing.co.uk
