We know that organisations are becoming more focused on power usage across the various aspects of their business. For companies with significant datacentre operations, the power consumed by IT is getting especially close attention. Indeed, recent rises in energy costs, and the expectation that costs will remain erratic but continue to move upwards, mean that a CIO who is not looking at power usage within the datacentre is heading for problems.
The first challenge, as research shows, is that few CIOs have any idea of the power that a given datacentre uses in total, never mind to any level of granularity. If a datacentre is housed in the same building as other business functions and does not have its own power meter, it is very difficult to identify what power is used by the datacentre and what is not. The second challenge is figuring out how much power is consumed by individual pieces of IT equipment, and how much by the ancillary equipment needed to maintain the datacentre environment. The major ancillary power hogs are air conditioning and uninterruptible power supplies (UPS), which often take two or three times as much power as the servers, storage and networking equipment.
The advent of two related datacentre efficiency measurements - Power Usage Effectiveness (PUE) and Datacentre infrastructure Efficiency (DCiE) - means that any organisation wanting to see where it stands against others has a means of apportioning power usage between IT equipment and ancillary equipment. Datacentre managers need only know the total amount of power used by the datacentre, how much goes to IT equipment, and how much to support equipment. Job done!
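The arithmetic behind the two metrics is simple: PUE is total facility power divided by IT equipment power, and DCiE is the inverse expressed as a percentage. A quick sketch with illustrative figures (the kilowatt numbers below are invented, not drawn from any real facility):

```python
# Hypothetical figures for illustration: a facility drawing 1,200 kW in
# total, of which 500 kW reaches the IT equipment itself.
total_facility_kw = 1200.0
it_equipment_kw = 500.0

pue = total_facility_kw / it_equipment_kw           # Power Usage Effectiveness
dcie = (it_equipment_kw / total_facility_kw) * 100  # DCiE, as a percentage

print(f"PUE:  {pue:.2f}")    # 2.40 - every watt of IT load costs 2.4 watts overall
print(f"DCiE: {dcie:.1f}%")  # 41.7% - the share of power doing useful IT work
```

A PUE of 2.4 matches the article's observation that ancillary equipment often draws two or three times as much power as the IT kit: the closer PUE gets to 1.0 (DCiE to 100%), the less power is being lost to cooling and power conversion.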
Unfortunately, this brings in several problems, not least of which is that two datacentre "managers" are now generally involved: the datacentre IT manager and the datacentre facilities manager. We also need to look at the granularity of the power figures to be provided. Do you want something high level, such as the power drawn by a rack? In that case, send in the facilities engineers with clamp meters and they will provide direct readings for a given rack or chassis. Unfortunately, such a measurement is only a point-in-time snapshot and will not capture how power usage changes as workload levels change. More effective is continuous measurement of usage per IT asset, and for this you will need specialist skills.
One company that provides such skills is UK-based End2End. Set up by a couple of ex-Hewlett-Packard and IBM veterans, the company takes the design and build of datacentres very seriously, bringing together the viewpoints from the facilities and IT sides of the fence to provide the best solution for the customer. For example, when it comes to power within the datacentre, End2End's view is that if you cannot measure it, you cannot do anything about it; and if you can measure it, you need to be able to understand it, otherwise it is just more meaningless data.
Most providers of power measurement systems offer direct readout of power utilisation at the power distribution strip. Some of these are evolving to provide highly granular levels of detail, either via optical readouts or via facilities-based remote aggregation and display of the information. End2End, however, took a very IT-centric view of the datacentre manager's real needs, and built on technology that has been around for a long time.
Power usage is still measured at a granular level at the distribution strip, but is then converted into a standard Management Information Base (MIB) data stream and sent as a Simple Network Management Protocol (SNMP) trap. The data can therefore be fed into any SNMP-compliant systems management product, such as IBM Tivoli, CA Unicenter, BMC Patrol or HP OpenView. The scripting languages within these systems can then be used to present anything untoward in a manner that enables a datacentre manager to deal with it directly, or at the very worst to point a facilities person more precisely to where the problem lies, and to understand the possible ramifications should the engineer decide to swap out the offending electrical component on a live system.
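The processing that a management platform's scripting layer might apply to such a trap can be sketched in a few lines. This is a minimal illustration of the idea only: the OIDs, strip names and wattage thresholds below are invented for the example, not End2End's actual MIB or any vendor's real object tree.

```python
# Per-strip power limits, in watts (invented figures for illustration).
STRIP_LIMITS_W = {"rack04-stripA": 3500, "rack04-stripB": 3500}

# Map hypothetical enterprise OIDs to the distribution strips they report on.
OID_TO_STRIP = {
    "1.3.6.1.4.1.99999.1.1": "rack04-stripA",
    "1.3.6.1.4.1.99999.1.2": "rack04-stripB",
}

def check_trap(varbinds):
    """Given a trap's (OID, watts) variable bindings, return alerts for
    any strip whose reported draw exceeds its configured limit."""
    alerts = []
    for oid, watts in varbinds:
        strip = OID_TO_STRIP.get(oid)
        if strip and watts > STRIP_LIMITS_W[strip]:
            alerts.append(f"{strip}: {watts} W exceeds {STRIP_LIMITS_W[strip]} W limit")
    return alerts

# Example trap payload: stripA is within limits, stripB is over.
print(check_trap([("1.3.6.1.4.1.99999.1.1", 2100),
                  ("1.3.6.1.4.1.99999.1.2", 3900)]))
```

In a real deployment the variable bindings would arrive via the management product's trap receiver rather than a hard-coded list, and the alert would be routed through its normal event console.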
So, by taking a more hybrid approach to the problem, End2End not only provides the data you need to calculate your PUE/DCiE numbers, but also a means of identifying the root cause of power issues and of assessing the risk of dealing with them.
Having such information to hand can help as architectures change. For example, the move to blade architectures means that power densities become higher. Layering virtualisation on top of this makes heat problems dynamic: a chassis running four virtual images may (or may not) need more cooling than one running eight, depending on the workloads those images are dealing with and how they have been provisioned. By measuring the power usage of the underlying chassis, individual blades and other components, intelligent decisions can be made as to where a new virtual server should be spun up, minimising heat issues and therefore cooling costs.
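The placement decision described above can be sketched as a simple greedy rule: start the new virtual server on the chassis with the most power (and hence cooling) headroom. The chassis names, budgets and measured figures here are invented for illustration; a real placement engine would also weigh CPU, memory and failover constraints.

```python
# Measured draw versus power budget per chassis, in watts (invented figures).
chassis = {
    "chassis-1": {"measured_w": 4200, "budget_w": 5000},
    "chassis-2": {"measured_w": 2800, "budget_w": 5000},
    "chassis-3": {"measured_w": 4700, "budget_w": 5000},
}

def best_target(chassis):
    """Pick the chassis with the largest power headroom, as a proxy for
    the placement least likely to create a hot spot."""
    return max(chassis, key=lambda c: chassis[c]["budget_w"] - chassis[c]["measured_w"])

print(best_target(chassis))  # chassis-2, with 2,200 W of headroom
```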
Solving the problem completely requires more than a good knowledge of how best to use power within the facility and across the IT assets. End2End brings together tools and systems from a range of suppliers which, combined with its own skills, should provide the optimum datacentre design for the customer. Although a small company, End2End can call on external resources to meet variations in its project load. With an impressive list of reference customers, including Thames Valley Police, the NHS and Purple Parking, it is a company worth talking to if you are planning a new datacentre or a major change to an existing one. Just make sure that both your IT and facilities managers know and understand who will have the final say on how energy is managed from there on.
This was first published in October 2008