The approach to managing physical data centres has had to change continually as the IT industry has moved from mainframes through midi computers and tower servers to rack-mounted servers and blades. Even so, the old way of doing things -- one or two applications per server -- kept power and heat stress in data centres relatively low, even as overall equipment densities increased.
Virtualisation, however, changed all of this. Because virtualisation projects are used as a review point for rationalising and consolidating software as well as hardware assets, many organisations find themselves not just with higher hardware densities but also with individual servers running at far higher utilisation. This can seriously stress power and cooling capabilities if the virtual data centre is designed badly.
How not to design a virtual data centre
In many early implementations of virtualisation, Quocirca saw how simple approaches of trying to stack similar equipment in specific areas led to disastrous results. For example, placing all power converters in one stack, all CPU capability in another and all storage in yet another created severe problems at the power converter level: removing heat effectively from such a concentrated source would require highly specialised cooling systems to prevent overheating.
The majority of vendors and data centre implementation partners will have built up their own "best practice" capability in how to architect a platform for a highly virtual environment. Even these, however, may not take everything into account when it comes to the overall best data centre design -- one that looks not only at the main assets, but also at the interconnects, the incoming power distribution and the need for a dynamic capability to grow and shrink resources to meet the workloads.
Power distribution as part of your virtual data centre design
First, power distribution has to be considered. In any virtual data centre, power access has to allow for continuity, so a single source feeding a single distribution board should be avoided. Multiple sources distributed around the data centre not only provide greater resilience against any single point of failure, but also help in another important area: structured cabling.
For a high-density environment, cable management can be a major issue. Quocirca recommends that underfloor cabling in a virtual environment be avoided and that power and data cables be kept away from each other. By doing this, cabling can be more easily maintained, and crosstalk between power and data cables is minimised.
Rack engineering also has to be taken into account. A mix of power, CPU, storage and networking units in one rack is not a problem. It enables high-heat, dense units such as power converters to be mixed with less dense and cooler units, such as a router or switch. This, then, enables cooling systems to be better engineered: Approaches such as cold/hot aisles and forced, ducted cooling will minimise costs while maximising cooling efficiency. Indeed, a well-engineered cooling system, as part of your virtual data centre design, can provide outlet heat that can be used elsewhere in a building for space heating or, via a heat pump, hot water.
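As a rough sketch of the mixing approach above, the snippet below greedily spreads units across racks so that per-rack heat load stays roughly even, rather than stacking similar equipment together. The unit names and wattages are purely illustrative assumptions, not figures from any real deployment.

```python
# Illustrative sketch: balance heat load across racks by mixing unit types.
# All wattage figures below are invented for the example.

def balance_racks(units, rack_count):
    """Greedily assign (name, watts) units to the currently coolest rack."""
    racks = [{"units": [], "watts": 0} for _ in range(rack_count)]
    # Place the hottest units first, so they land in different racks.
    for name, watts in sorted(units, key=lambda u: u[1], reverse=True):
        coolest = min(racks, key=lambda r: r["watts"])
        coolest["units"].append(name)
        coolest["watts"] += watts
    return racks

units = [("power-converter-1", 4000), ("power-converter-2", 4000),
         ("blade-chassis-1", 3000), ("blade-chassis-2", 3000),
         ("storage-array", 1500), ("switch", 400), ("router", 300)]

for i, rack in enumerate(balance_racks(units, 3), start=1):
    print(f"Rack {i}: {rack['units']} ({rack['watts']} W)")
```

Stacking all the power converters together would concentrate 8,000 W in one rack; the mixed layout keeps every rack within a much narrower band, which is what makes cold/hot-aisle cooling easier to engineer.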
Although a well-architected virtual environment can increase system availability, a poorly thought-out one can reduce it. A move from one application per server to a "many virtual applications per physical server" model means that the failure of a single physical server can bring down many applications or services, essentially bringing an organisation's IT services to its knees. Virtualisation should therefore be used to create "n+fractional" resilience.
In a physical setup, availability is generally provided through clustering and/or mirroring. At the simplest level, this involves an "n+1" approach -- the servers (and other assets) in use require at least one extra asset to be in place to ensure a degree of availability. In fact, many organisations use an "n+m" approach, with multiple backup systems in place to provide higher levels of availability. Bearing in mind that in pre-virtualisation environments the majority of Windows-based servers run at less than 10% utilisation, an "n+1" approach halves that to 5% (where n=1), and "n+m" drives it down even further.
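The utilisation arithmetic can be sketched quickly: with n production servers each doing roughly 10% of a server's worth of work, and m standby machines, that work is spread across n + m physical boxes.

```python
# Back-of-envelope effective utilisation under an "n+m" cluster,
# assuming ~10% per-server utilisation as cited in the text.

def effective_utilisation(n, m, per_server=0.10):
    """Work done by n production servers spread across n + m machines."""
    return n * per_server / (n + m)

print(effective_utilisation(1, 1))  # "n+1" with n=1: 10% halves to 5%
print(effective_utilisation(2, 3))  # "n+m" with more standbys: lower still
```

The same arithmetic explains the appeal of "n+fractional" resilience: a virtual standby consumes only part of a physical server, so the denominator grows by a fraction rather than by whole machines.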
Active and passive images
In a virtual environment, active or passive hot-standby images can be used. An active image is a fully functional, working image ready to take over, complete with mirrored data and other resources. This is similar to an "n+1" approach, but because the image is virtual, it uses only a fraction of a (separate) physical server. A passive image can be very "thin": The application is available, but no other resource is attached. On failure of the main image, resources are rapidly provisioned to the backup image, and it takes over in a short period of time -- typically a few seconds.
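As a toy sketch of the passive-image model described above, the orchestrator below attaches resources to a "thin" standby only at failover time and then promotes it. The class and method names are hypothetical, not any real hypervisor API.

```python
# Hypothetical failover sketch for a passive ("thin") standby image.
# Image and fail_over are invented names, used only for illustration.

class Image:
    def __init__(self, name, cpus=0, memory_gb=0):
        self.name = name
        self.cpus = cpus
        self.memory_gb = memory_gb
        self.active = False

def fail_over(active, standby, cpus, memory_gb):
    """On failure of the active image, provision resources to the thin
    standby and promote it -- the passive-image model from the text."""
    active.active = False
    standby.cpus = cpus            # resources attached only at failover time
    standby.memory_gb = memory_gb
    standby.active = True
    return standby

primary = Image("app-primary", cpus=8, memory_gb=32)
primary.active = True
standby = Image("app-standby")     # thin: application present, no resources

promoted = fail_over(primary, standby, cpus=8, memory_gb=32)
print(promoted.name, promoted.active, promoted.cpus)
```

The point of the design is visible in the numbers: until failover, the standby consumes essentially nothing, which is what makes "n+fractional" resilience cheaper than a full "n+1" mirror.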
Either image type will optimise utilisation and lower the cost of providing a highly available single data centre. For ultimate availability, an "n+fractional" multi-data-centre approach may also be required. This can be provisioned by using an on-demand IT infrastructure or a third-party colocation provider's data centre facility as a primary failover site.
Virtualisation gives us an opportunity to review a data centre's architecture and to ensure that a highly effective, dynamic and responsive infrastructure is put in place. As in architecting a building, when you design a virtual data centre it is function that has to win over form. For a data centre, bear in mind that the function is variable: The form must allow for this.
Clive Longbottom is a service director at UK analyst Quocirca Ltd and a contributor to SearchVirtualDataCentre.co.uk.