Halloween horrors: When a datacentre migration takes a dark turn

In this guest post, Drew Nielsen, chief trust officer at cloud-based backup provider Druva, shares a scary and hairy tale about how shortfalls in any company’s upkeep and maintenance schedule can result in some nasty surprises when datacentre migration time arrives

Halloween is a time when scary stories are told and frightful fancies are shared. For IT professionals, it is no different: from tales of IT budgets being hacked to pieces and gross violations of information security, to cautionary stories of how poor planning can lead to data disaster, those of us who have worked in the technology sector long enough all have a story to chill the server room.

My own story involves a datacentre migration project. The CIO at the time had seen a set of servers fitted with additional blue LEDs, rather than the usual green and red ones. This seemed enough to hypnotise him into replacing a lot of ageing server infrastructure with new-fangled web server appliances. The units were bought, and the project was handed over for us to execute.

Now, some of you may already be feeling a creeping sense of dread at the thought of deploying machines you had no hand in specifying. Others may think we were worrying unnecessarily and that this is a perfectly fine approach to take. Either way, let’s continue the story…

Sizing up the datacentre migration

The migration involved moving more than 3,000 websites from the old equipment to the new infrastructure. With the amount of hardware required at the time, this was a sizeable undertaking.

Rather like redeveloping an ancient burial ground, we started by looking at the networking needed to connect up all these new machines. This is where the fear really took hold, as there were multiple networking infrastructures in place: alongside Ethernet, there were Token Ring, HIPPI, ISDN and SPI. Some of the cabling made up live networks, while other cables had simply been left in place, disconnected. And those five networks were not alone.

Like any large building, this one had attracted certain small mammals that had seen fit to make their homes in the networking tunnels. With so much of the cabling left in place, these creatures had lived – and, more importantly, died – within this cosy, warm environment, and that warmth had led to more than one sticky situation. What had originally looked like a blank slate for a great migration project rapidly became messy, slimy and convoluted, and finding skulls, desiccated husks and worse among the cabling lent the whole job an aura of eldritch horror.

Our plucky team persevered and eventually put in place a new network, ready for servers to be racked. With the distinct aroma of dead rat still lingering, it was time to complete the job, but one further, scary surprise was still to come.

When the number of server appliances had been calculated, the right number of machines had been ordered, but nobody had checked this against the physical volume of machines that would actually fit in the racks. Consequently, it became impossible to fit the UPS devices into the racks alongside all the servers. Cue more frustration and an awful lot of curse words.
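
To see how that mismatch creeps in, here is a minimal sketch of the rack-space arithmetic that should have been done up front. It is written in Python purely for illustration, and every figure in it (rack height, appliance and UPS sizes, server count) is an assumed example value, not the real kit from this project.

# Illustrative sketch only: checks whether servers plus a UPS fit into the
# available rack space, measured in rack units (U). All figures below are
# assumptions for the example, not the actual hardware from this migration.

RACK_HEIGHT_U = 42          # usable height of one rack (assumed)
SERVER_HEIGHT_U = 1         # height of each web server appliance (assumed)
UPS_HEIGHT_U = 3            # height of each UPS unit (assumed)
SERVERS_NEEDED = 120        # appliances to house (assumed)
UPS_PER_RACK = 1            # one UPS per rack (assumed)

def racks_required(servers: int) -> int:
    """Racks needed when each rack must hold its own UPS as well as servers."""
    usable_u = RACK_HEIGHT_U - UPS_PER_RACK * UPS_HEIGHT_U
    servers_per_rack = usable_u // SERVER_HEIGHT_U
    # Round up: a partially filled rack is still a rack you have to find room for.
    return -(-servers // servers_per_rack)

if __name__ == "__main__":
    with_ups = racks_required(SERVERS_NEEDED)
    without_ups = -(-SERVERS_NEEDED // (RACK_HEIGHT_U // SERVER_HEIGHT_U))
    print(f"Racks needed ignoring UPS space: {without_ups}")
    print(f"Racks needed once UPS space is reserved: {with_ups}")

Counting servers alone gives one answer; reserving space for power protection gives a bigger one, and it is the second number that has to match the floor plan.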

Avoiding your own datacentre migration disaster

So what can you learn from this tale of terror? Firstly, keep your datacentre planning up to date. This means keeping accurate lists of what you have, what you had, and how old kit was disposed of. And unless you like dealing with a tide of rodent corpses, working with facilities management to keep the whole environment clear is a must too.

Secondly, planned downtime can lead to problems of its own, and these should be factored into your disaster recovery strategy. Assessing your data management processes in these circumstances might not seem a top priority compared with big unexpected failures or ransomware attacks, but a datacentre migration can always throw up the unexpected.

A datacentre migration should not include points of no return early in the process. Equally, knowing how to get back to a “known good state” can be hugely valuable.
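
As a purely illustrative sketch of what a “known good state” can mean in practice, the Python below records a checksum for every file under a content directory before migration, so that a rollback (or the migrated copy) can be verified against that baseline afterwards. The paths and filenames are assumptions for the example, not details from the original project.

# Minimal sketch, assuming the "known good state" is a directory of website
# content: record a checksum per file before migration so a rollback (or the
# migrated copy) can be verified against it. Paths below are illustrative.

import hashlib
import json
from pathlib import Path

def snapshot(root: Path) -> dict[str, str]:
    """Return a mapping of relative file path -> SHA-256 digest."""
    state = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            state[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return state

def verify(root: Path, baseline: dict[str, str]) -> list[str]:
    """List files that are missing or differ from the recorded baseline."""
    current = snapshot(root)
    return [p for p, digest in baseline.items() if current.get(p) != digest]

if __name__ == "__main__":
    source = Path("/srv/www")                     # assumed content root
    baseline = snapshot(source)
    Path("known_good.json").write_text(json.dumps(baseline, indent=2))
    # After migration (or a rollback), re-run verify() against the saved baseline:
    drift = verify(source, json.loads(Path("known_good.json").read_text()))
    print(f"{len(drift)} files differ from the known good state")

The point is less the tooling than the habit: capture a verifiable record of the working state before you start, and you always have a way to prove you got back to it.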

Thirdly, it can be worth looking at how much of this you really need to run yourself. The advent of public cloud and mobile working means that more data than ever is held outside the business. Your data management and disaster recovery strategy will have to evolve to keep up, or it will become a shambling zombie in its own right.
