However, it often seems that there is a conspiracy to counter this. The majority of data centres still work on a raised-floor basis, with the space between the solid and raised floors used to carry cooling air to the IT equipment. This "void" also carries the various cables required for transferring data between the pieces of IT equipment, as well as the electricity required to power them. While this may all have been well thought out to start with, as time goes on the cabling in this out-of-sight environment tends to become less structured -- and introduces a few problems.
The first problem is that the sheer volume of cables can impede cooling air flows, stressing the hardware and leading to premature failures. The second is that mixing data and power cables can degrade the data signal through interference ("crosstalk"), causing more errors and forcing the data centre to retransmit data packets on a regular basis, which reduces the overall bandwidth available.
Another side effect that Quocirca has seen in some circumstances is where standard-length power cables are used. For example, if the actual length of cable required is seven feet, a 12-foot cable may be used, just because it is available. What happens is that the spare feet are curled up and clipped with a cable tie to keep them neat under the raised floor.
Unfortunately, any coiled power cable becomes an effective heater, and if it is under the raised floor, it is continuously heating the very air that the organisation is paying so much to cool in order to keep the IT equipment within thermal limits. More cool air or a colder inlet temperature is therefore required, and a downward spiral of cost-effectiveness begins.
How should an organisation go about providing an efficient and effective cabling approach?
The issue with raised floors
Firstly, Quocirca firmly advises against using raised floors. Increasing equipment densities are leading to higher floor loadings, and anything that rests on feet at the bottom of the racks will apply massive effective loads (often in the tons per square inch) to the raised floor. Moving a rack horizontally by less than an inch can cause a floor tile to fail, tipping a full rack of equipment over and possibly pulling other equipment with it. Far better to use the solid floor as the real floor. But what can then be done with all the cabling?
A fully structured cabling approach, where the cables are carried in trays above the IT equipment, gives much better accessibility to the cabling. It also tends to enforce better standards, as all the cabling is in permanent view. Power cables should be kept separate from data cables, using different trunking and trays, to prevent crosstalk.
Data and power cabling can be colour-coded to designate, for example, high-speed data cables, SAN fibre cables, WAN cabling, AC and DC power and so on, so that everyone understands the purpose of each cable. Also, each cable end -- whether it is for power or data -- must be fully labelled so that there is no need for tracing cable runs between equipment and plug boards or power distribution boards. This should also be done at intervals along the cable, so that tracing a cable is easier.
Any changes or additions to cables must be carried out under a full change-management system, ensuring that if an engineer moves a cable from one point to another, it is fully logged and all notations on the cable are consistently maintained. For example, taking an existing cable, moving one end to a new piece of equipment and labelling that end is not enough -- both ends need relabelling, and the new connections need to be fully logged within the data centre plans and in the systems management software.
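The discipline described above -- every cable identified at both ends, every move logged -- is essentially a small asset register. As a purely hypothetical sketch (the class, field and label names below are invented for illustration, not part of any standard or product), it might look like this, with the key rule being that moving one end of a cable always triggers relabelling of both ends and a log entry:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Cable:
    cable_id: str   # printed on the cable at both ends and at intervals along it
    end_a: str      # current termination of end A, e.g. "PDU-03/outlet 12"
    end_b: str      # current termination of end B, e.g. "Rack 14/server 7 PSU-1"

@dataclass
class CableRegister:
    """Hypothetical change-management log: every move relabels BOTH ends."""
    cables: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

    def add(self, cable: Cable) -> None:
        self.cables[cable.cable_id] = cable
        self._record(cable.cable_id, "installed",
                     f"{cable.end_a} <-> {cable.end_b}")

    def move_end(self, cable_id: str, end: str,
                 new_termination: str, engineer: str) -> None:
        cable = self.cables[cable_id]
        old = getattr(cable, end)
        setattr(cable, end, new_termination)
        # Moving one end changes the whole run, so labels at both ends
        # must be reissued and the change logged against the engineer.
        self._record(cable_id, f"moved {end}",
                     f"{old} -> {new_termination} (by {engineer}); relabel both ends")

    def _record(self, cable_id: str, action: str, detail: str) -> None:
        self.log.append((datetime.now(timezone.utc).isoformat(),
                         cable_id, action, detail))

# Example: an engineer re-patches one end of a power cable
reg = CableRegister()
reg.add(Cable("PWR-0042", "PDU-03/outlet 12", "Rack 14/server 7 PSU-1"))
reg.move_end("PWR-0042", "end_b", "Rack 15/server 2 PSU-1", engineer="J. Smith")
```

Real installations would use a DCIM tool rather than ad-hoc code, but the invariant is the same: the register, the plans and the physical labels must never disagree.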
To give an industry best-practice guide to data centre cabling, the Telecommunications Industry Association has issued a structured cabling standard (TIA-942) which provides data centre designers, builders and managers with a set of guidelines to ensure consistent, effective cabling across a data centre.
The standard takes a zoned cabling model, with the data centre broken down into an entrance room, a main distribution area (MDA), a horizontal distribution area (HDA), zone distribution areas (ZDAs) and equipment distribution areas (EDAs). The MDA is the main distribution bus for data cabling, and should be a one-time installation. The HDA provides the capability for horizontal cabling interconnects to other areas. Each EDA is where the main computing equipment is deployed, and feeds off the MDA via the HDA. ZDAs can be deployed, if required, to allow for frequent reconfiguration of cabling within a data centre -- essentially consisting entirely of patch panels and cabling racks.
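The zoned model above is a simple tree: cabling flows from the entrance room through the MDA and HDA out to the equipment areas, with ZDAs as optional patch points in between. As an illustrative sketch (the area names and layout below are an invented example, not taken from the standard itself):

```python
# Hypothetical example layout following the TIA-942 zoned topology:
# each area lists the areas it feeds cabling into downstream.
topology = {
    "Entrance Room": ["MDA"],     # carrier hand-off into the data centre
    "MDA": ["HDA"],               # main distribution area: one-time installation
    "HDA": ["ZDA-1", "EDA-2"],    # horizontal distribution out to zones/equipment
    "ZDA-1": ["EDA-1"],           # optional zone area: patch panels only
    "EDA-1": [],                  # equipment distribution areas (the racks)
    "EDA-2": [],
}

def path_to(area, topo, start="Entrance Room"):
    """Return the cabling path from the entrance room to a given area."""
    def walk(node, trail):
        if node == area:
            return trail + [node]
        for child in topo.get(node, []):
            found = walk(child, trail + [node])
            if found:
                return found
        return None
    return walk(start, [])

# An EDA fed via a ZDA traces back through the HDA and MDA:
print(path_to("EDA-1", topology))
```

The point of the hierarchy is exactly this traceability: any rack's cabling can be followed back through a known, fixed chain of distribution areas rather than an arbitrary under-floor run.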
By following a well-designed and implemented structured cabling architecture, not only will the data centre be better maintained and significantly enhanced in appearance, but data throughput will be optimised. Cooling can also be better managed through a structured cooling architecture, using hot and cold aisles along with highly targeted spot cooling, massively reducing energy needs across the whole data centre.
The age of just connecting equipment A to distribution board B via an under-floor void has to be brought to an end. Structured cabling has to be seen as a necessity in today's data centre, not a "nice to have." Even for older data centres, moving to a structured cabling system can bring great benefits in traceability and in managing adds, changes and deletions of equipment in the data centre, as well as energy savings and better cooling.
Clive Longbottom is a service director at UK analyst Quocirca Ltd. and a contributor to SearchVirtualDataCentre.co.UK.