Managing data centres: People and hardware don't mix

Column

Clive Longbottom

As the history of the data centre has unfolded, the dynamics of who works in it have changed along the way. In the beginning, the mainframe-centric data centre was a hallowed place, entered only by the acolytes of the new technology world. Air quality had to be maintained, temperatures were closely monitored and vibrations minimised to protect the fragile components of this very expensive piece of kit. "Users" -- including coders, testers and sysadmins, as well as those actually needing the capabilities of the mainframe -- accessed the computer from afar through the 3270 "green screen" terminal.

Then came the era of the minicomputer, with the growth of companies such as DEC and Bull. These new machines needed a higher degree of hands-on management, and parts of the data centre suddenly found themselves packed with people coming in to apply operating system patches, oversee reboots and add extra pieces of hardware to the systems.

The final part of this evolution was the introduction of the PC-based server: the explosion of kit from companies such as Compaq, HP and IBM, and the move to standard high-volume (SHV) Intel-architecture servers costing a few thousand dollars each. The data centre moved from hosting a few items of large kit to having to house hundreds, possibly thousands, of smaller devices. The explosion of compute devices also meant extra kit to handle the power and data cabling required, and it drove the need for distributed cooling rather than spot cooling.

The data centre became more like Grand Central Station, with facilities management, telecoms, network, development, testing and other staff wandering in and out as they saw fit. Trying to control this has become a major issue, as each person is certain that they need to be next to the hardware to carry out their particular task. The impact on corporate security is high, but there are also other impacts that need to be kept in mind.

Keep away from the hardware
The data centre for people needs to be fully lit and kept at a temperature that is conducive to work. The data centre for hardware, however, can be in complete darkness and run at a hardware-appropriate temperature. Basically, people and data centres don't mix. Factor in that the vast majority of problems in the compute world are caused by human error, and it becomes clear that the more you can keep people away from the hardware, the better.

Can it be done? Quocirca believes that the vast majority of data centres could be moved to "lights-out" operation for over 95% of the time, and the remaining 5% should have minimal impact on how the data centre is run.

Systems management should be carried out as it was back in the mainframe era: from afar. Today's systems management packages can manage both physical and logical platforms from a single pane of glass, and they are either agentless or able to push agents onto the required assets remotely.
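As an illustration only, the short Python sketch below shows the kind of agentless check a management station can run over SSH without anyone setting foot in the data centre. It uses the paramiko SSH library; the host names, account and commands are hypothetical placeholders, not a reference to any particular management product.

```python
# Illustrative sketch only: an agentless health check run from a remote
# management station over SSH. Host names, the account and the commands
# executed are hypothetical placeholders.
import paramiko

HOSTS = ["rack01-node01.example.internal", "rack01-node02.example.internal"]

def remote_health_check(host: str, user: str = "opsadmin") -> str:
    """Run a simple uptime and disk check on a host without installing an agent."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user)  # key-based authentication assumed
    try:
        _, stdout, _ = client.exec_command("uptime && df -h /")
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    for host in HOSTS:
        print(f"--- {host} ---")
        print(remote_health_check(host))
```

The same idea scales up in commercial systems management suites; the point is simply that the check originates outside the data centre.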

Applications should be able to be provisioned and deprovisioned without the need to go into the data centre. Advanced sensors and monitoring systems should pick up issues as they happen, and automated systems should stop those issues from becoming problems.

In a virtualised environment, for example, quickly spotting a hot spot in the estate allows the workloads dependent on that physical asset to be moved to other hardware through live virtual-to-virtual migration. The affected physical resource is then turned off, removing the hot spot.
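A minimal sketch of how such an automated response might be wired together is shown below. The monitoring feed, migration call and power-off call are hypothetical stand-ins for whatever monitoring and hypervisor APIs are actually in place, and the temperature threshold is an assumed figure, not a vendor recommendation.

```python
# Illustrative sketch of an automated hot-spot response loop.
# The monitoring and hypervisor objects, and all of their methods, are
# hypothetical wrappers around whatever management APIs are in use.
import time

HOT_SPOT_THRESHOLD_C = 35.0   # assumed alert threshold for this sketch
CHECK_INTERVAL_S = 60

def handle_hot_spots(hosts, monitoring, hypervisor):
    for host in hosts:
        temp = monitoring.get_inlet_temperature(host)
        if temp < HOT_SPOT_THRESHOLD_C:
            continue
        # Evacuate workloads before anything fails, then shut the host down.
        for vm in hypervisor.list_vms(host):
            target = hypervisor.pick_least_loaded_host(exclude=host)
            hypervisor.live_migrate(vm, target)
        hypervisor.power_off(host)
        monitoring.raise_ticket(host, reason=f"hot spot at {temp:.1f}C")

def run(hosts, monitoring, hypervisor):
    while True:
        handle_hot_spots(hosts, monitoring, hypervisor)
        time.sleep(CHECK_INTERVAL_S)
```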

This does mean, however, that there is now a non-working asset in the data centre, and a person will need to go in to deal with it. This should be the exception rather than the rule, though, and the engineer should be in there only long enough to replace the affected system.

These days, it's not worth trying to fix the item in situ. Just pull it out, replace it and deal with the affected item outside of the data centre. Low-intensity, cold lights need to be on only while the engineer is inside. Indeed, only the area where the affected item is located needs lighting, just enough for the engineer to carry out the task.

The same approach should apply to those responsible for patching and updates: the data centre is out of bounds unless there is a solid need to go in there, and the person going in will have to deal with the 30°C/86°F temperature.

Networking in a virtual data centre should be virtualised. New links can be set up in the same manner as new applications or functions are provisioned: safely and from outside, using automated processes. Facilities staff should ensure a zoned layout is used, so that day-to-day facilities work does not have to happen in the main data centre areas, and so that cooling equipment (where used) and power distribution equipment are housed in accessible areas outside them.
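As a sketch of what "provisioned safely and from outside" can look like for networking, the snippet below describes a new virtual link as data and applies it through an automated process rather than by cabling. The link schema, the controller object and the apply_link() helper are hypothetical examples, not the interface of any particular SDN platform.

```python
# Illustrative sketch: a new virtual network link described as data and
# applied by an automated process instead of hands-on work in the data centre.
# The schema, controller interface and apply_link() helper are hypothetical.
from dataclasses import dataclass

@dataclass
class VirtualLink:
    name: str
    vlan_id: int
    source_switch: str
    dest_switch: str
    bandwidth_mbps: int

def apply_link(link: VirtualLink, controller) -> None:
    """Push the link definition to a network controller (stand-in interface)."""
    controller.create_vlan(link.vlan_id, name=link.name)
    controller.trunk(link.source_switch, link.dest_switch, vlan=link.vlan_id)
    controller.set_rate_limit(link.name, link.bandwidth_mbps)

new_link = VirtualLink(
    name="test-env-uplink",
    vlan_id=240,
    source_switch="vswitch-a",
    dest_switch="vswitch-b",
    bandwidth_mbps=1000,
)
# apply_link(new_link, controller)  # controller supplied by the network platform
```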

A well-managed, lights-out virtualised data centre not only runs better, but also saves money by doing away with lighting and by running at its optimum temperature. It also drives the requirement for fully automated processes and adds security by minimising the number of people allowed anywhere near the hardware assets.

Clive Longbottom is a service director at UK analyst Quocirca Ltd. and a contributor to SearchVirtualDataCentre.co.UK.

