
Data center power and cooling revamp success story at CRIS

Centre for Railway Information Systems (CRIS) overhauls its malfunctioning data center power and cooling setup, turning the facility into a tier 3 data center in two years.

The Centre for Railway Information Systems (CRIS) has come a long way since its inception in 1986, when the Ministry of Railways established CRIS as its umbrella organization for all IT-related activities of the Indian Railways. Today, CRIS is a project-oriented organization engaged in the development of major computer systems for the railways. With over 200 employees nationwide, CRIS uses a low-risk delivery model to accelerate schedules with a high degree of time and cost predictability.

Tips for data center power savings
Based on the experience at CRIS, Sahu offers a few tips for data center power savings.

1. Switch off unused servers (a minimal detection sketch follows this list).

2. Avoid air leakages and holes in the raised floor. If damaged tiles cannot be replaced immediately, plug the gaps with thermocol (expanded polystyrene).

3. Align servers properly, keeping the airflow in mind.
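As a minimal illustration of the first tip, the sketch below flags servers whose average CPU utilization stays under a threshold, marking them as candidates for switch-off. This is a hedged Python example; the utilization figures, server names and 5% cutoff are assumptions for illustration, not CRIS' actual tooling.

```python
# Illustrative sketch: flag candidate servers for switch-off based on
# sustained low CPU utilization. All figures below are sample data; in
# practice they would come from your monitoring system.

IDLE_THRESHOLD_PCT = 5.0  # assumed cutoff for "unused"

# Hypothetical 7-day average CPU utilization per server (percent).
avg_cpu_utilization = {
    "rack01-srv01": 2.1,
    "rack01-srv02": 47.8,
    "rack02-srv01": 0.4,
}

idle_servers = [
    name for name, pct in avg_cpu_utilization.items()
    if pct < IDLE_THRESHOLD_PCT
]

print("Candidates to switch off:", sorted(idle_servers))
```

In practice, a decision to power down would also weigh application dependencies and standby requirements, not CPU utilization alone.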

But things didn't always go so smoothly at CRIS. A few years ago, the organization ran into trouble when its data center, situated at Chanakyapuri in Delhi, came to the verge of a breakdown due to dysfunctional power and cooling arrangements. CRIS learned from this crisis, and what followed was a total transformation, as it turned the facility into a tier 3 data center.

The organization's old data center faced several problems, and the lack of options on any front led the IT team at CRIS to start on a journey of transformation. Surekha Sahu, the chief manager for electrical at CRIS, recalls the niggling issues: "Project managers would just come up to us and give us their hardware requirements. Providing sufficient power to all their hardware was becoming a difficult task." Supplying power was a major issue, since the fuse would trip whenever equipment was switched on or off. Also, because the facility load shared the old UPS system with the data center, plugging in something as simple as a tea machine or a printer could affect the data center's functioning.

CRIS' data center cooling infrastructure was also failing and could not maintain the required temperature, so servers were automatically tripping and shutting down. The required data center temperature was 22 degrees Celsius, but it sometimes touched 35 degrees Celsius. Even the air handling unit (AHU) failed as the data center cooling load kept increasing. "At that point of time, we had not thought of changing the entire data center's cooling setup. The change came about in two years," says Sahu. Before the data center power and cooling implementation took place, the total energy consumption was 11,00,000 (1.1 million) units; it has now come down to 58,000 units, a drop of roughly 95%.
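Excursions like the jump from a 22 degree Celsius setpoint to 35 degrees Celsius are exactly what continuous threshold monitoring is meant to catch. Below is a minimal, hedged Python sketch; the sensor function, polling interval and 3-degree alert margin are assumptions for illustration, not part of CRIS' setup.

```python
import random
import time

SETPOINT_C = 22.0      # target temperature cited in the article
ALERT_MARGIN_C = 3.0   # assumed tolerance before raising an alert

def read_temperature_celsius() -> float:
    # Hypothetical stand-in for a real sensor/BMS query; returns sample data.
    return random.uniform(20.0, 36.0)

def monitor(polls: int = 5, poll_seconds: float = 1.0) -> None:
    # Poll the (simulated) sensor and print an alert on excursions.
    for _ in range(polls):
        temp = read_temperature_celsius()
        if temp > SETPOINT_C + ALERT_MARGIN_C:
            print(f"ALERT: {temp:.1f} C exceeds {SETPOINT_C + ALERT_MARGIN_C:.1f} C limit")
        time.sleep(poll_seconds)

monitor()
```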

The final plunge was taken with a series of small changes on the data center's power and cooling fronts, which together had a major impact. First of all, CRIS' team segregated the facility load from the data center load in the power setup. Earlier, power connections were made from inside the rack, leading to a jumbled mess of cables. Now, power is taken from under the floor and plugged into the racks, and the cables have been numbered properly, tracing their path to each rack. CRIS' data center has around 56 server racks with around 400 servers. Old lights have been replaced with energy-efficient tubelights.
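Numbering cables and tracing each run to its rack amounts to maintaining a simple inventory. A minimal sketch of such a mapping in Python follows; the cable labels and rack names are invented for illustration.

```python
# Illustrative cable inventory: each numbered under-floor power cable maps
# to the rack it feeds. Labels and rack names are made up for the example.
cable_to_rack = {
    "PWR-001": "RACK-A01",
    "PWR-002": "RACK-A02",
    "PWR-003": "RACK-B01",
}

def trace(cable_id: str) -> str:
    # Return the rack a numbered cable feeds, or flag it for investigation.
    return cable_to_rack.get(cable_id, "UNLABELLED - investigate")

print(trace("PWR-002"))  # RACK-A02
print(trace("PWR-099"))  # UNLABELLED - investigate
```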

On the data center cooling front, the situation had earlier become nearly hopeless. Sahu recollects an incident: "One night, we had to take fans from the cabins and install them in our data center when the AHU failed. Today, we know that we won't have to face any such situations, ever again." Earlier, CRIS also relied on air curtains installed at the data center's door to keep the cold air from escaping. Today, a DX-type precision AC from Emerson has been implemented. (Before implementing hot/cold aisle containment, CRIS did not have a proper arrangement of servers; people kept servers wherever they could find a place.)

One of the problems faced during CRIS' data center power and cooling implementation was that racks couldn't be shut down, since the organization had many online applications. Taking this into consideration, CRIS' team put the new cooling in place first while arranging cabling in parallel. Later, racks were arranged in a hot/cold aisle containment setup, partitions in the working areas were broken down, and cables were laid. At present, the normal temperature for data center cooling is kept at 26 degrees Celsius.

Another problem was the data center's shape: it is very long but only 4 meters wide, so racks would not fit in a double row and there were alignment issues. Today, the data center has dual redundancy in the UPS for constant data center power supply, with two parallel paths running up to the rack level.
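To see why two parallel power paths help, note that (assuming the paths fail independently) the combined availability is 1 - (1 - A)^2, where A is the availability of a single path. A quick illustrative Python calculation follows; the 99.9% single-path figure is an assumed example, not a CRIS measurement.

```python
# Illustrative redundancy arithmetic, assuming the two paths fail
# independently. The single-path availability is a made-up example figure.
single_path_availability = 0.999  # assumed: 99.9% per path

# Two parallel paths lose power only if both fail at the same time.
dual_path_availability = 1 - (1 - single_path_availability) ** 2

print(f"Dual-path availability: {dual_path_availability:.6f}")  # 0.999999
```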

There are certain prerequisites that have to be met to transform a data center into a tier 3 data center. First of all, the data center's availability should be 99.99%. Then, it should have multiple paths for data center power and cooling. Lastly, it should be concurrently maintainable, so that components can be serviced without any failure or downtime. CRIS has successfully met all of these prerequisites. For the near future, fire protection improvement is on CRIS' agenda.
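For context, a 99.99% availability target leaves a downtime budget of roughly 52.6 minutes per year. A quick check in Python:

```python
# Downtime budget implied by a 99.99% availability target.
availability = 0.9999
minutes_per_year = 365.25 * 24 * 60  # ~525,960 minutes

allowed_downtime = (1 - availability) * minutes_per_year
print(f"Allowed downtime: {allowed_downtime:.1f} minutes/year")  # ~52.6
```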
 
