After I decided to hire rack space from a commercial provider, the first company I went with turned out to be a misstep in a number of ways. I'll call that provider "Colocation-R-US", and from that bad experience I've developed a checklist to share with others so they don't make the same colocation blunders when choosing a data centre provider.
Get key people involved in the initial site visit
When you perform a colocation data centre site visit, make sure the meeting includes the sales rep, who can tell you about the different packages and their relative costs, and a technical staff member, who may pick up on anything unusual about your colocation requirements.
I explained to my new colocation data centre provider that my power needs vary through the year. For example, over the next two quarters I will likely be running arrays from at least three storage vendors (Dell, EMC and NetApp), and because I will be configuring replication, I will have at least two arrays per vendor. That means six arrays in a single 42U rack, driven by only four 2U servers and a gigabit switch, with no production load. Once this project is done, many of these arrays could be powered down or even returned to the vendor from which they are on loan. And once my storage resource management (SRM) project is finished, I hope to go back to the standard 16 AMP allocation.
In my last colocation data centre move, I measured the power draw of each piece of equipment at idle, with no workload running, so I could properly calculate my minimum AMP requirements. I hoped to save money on colocation data centre fees in the second half of the year by racking the maximum amount of hardware I could power without exceeding the basic 16 AMP allocation for a 42U rack.
My new colocation data centre provider was very receptive to this proposition. The table below illustrates my calculations. I deliberately don't run redundant power supplies on my servers: because this isn't a production environment, I find the cost of buying a second PDU on top of the "factory specification", and then powering it, unnecessary.
I didn't measure the AMP rating of each EMC array in isolation; instead, I worked out their load by powering off the other equipment.
| Equipment | Measured draw |
| --- | --- |
| 1x 2U server (HP Proliant 385) | 1.1 AMPs |
| 1x 2U NetApp FAS2020 | 1.25 AMPs |
| 1x 14U EMC NS-120 | 5.0 AMPs |
| 1x 7U EMC NS-20 | 5.0 AMPs |
| 4x 2U servers with 1x EMC NS-20, 1x EMC NS-120 and networking* | 15.4 AMPs |
*The NS-120 comes with three disk shelves. I don't need that quantity of storage, so although I racked up all the disk shelves, I only cabled up and powered one. From these figures, I worked out that all of my equipment (4x 2U servers, 2x EMC arrays, 2x NetApp arrays) would need around 18 AMPs of power, and that if Dell supplied me with arrays for my SRM project, this would push me over the 20 AMP limit to around 22 AMPs.
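The arithmetic behind that budget can be sketched in a few lines. The per-item figures below come from my measurements in the table; the ~1.0 A for networking kit and the ~2.0 A per loaner Dell array are illustrative assumptions I've plugged in to make the totals line up, not measured values.

```python
# Amp-budget sketch using the measured figures from the table above.
# Networking (~1.0 A) and Dell array (~2.0 A) draws are assumed, not measured.
DRAW_AMPS = {
    "2U server (HP Proliant 385)": 1.1,
    "NetApp FAS2020": 1.25,
    "EMC NS-120 (one shelf powered)": 5.0,
    "EMC NS-20": 5.0,
}

def rack_load(counts, networking_amps=1.0):
    """Total rack draw in amps for a dict of {equipment: quantity}."""
    return networking_amps + sum(DRAW_AMPS[item] * qty for item, qty in counts.items())

current = rack_load({
    "2U server (HP Proliant 385)": 4,
    "NetApp FAS2020": 2,
    "EMC NS-120 (one shelf powered)": 1,
    "EMC NS-20": 1,
})
with_dell = current + 2 * 2.0  # two loaner Dell arrays at an assumed ~2 A each

print(f"Current load: {current:.1f} A")     # ~17.9 A - already beyond a 16 A feed
print(f"With Dell arrays: {with_dell:.1f} A")  # ~21.9 A - beyond the 20 A limit
```

Keeping the figures in a table like this makes it easy to re-run the sums whenever equipment arrives or goes back to a vendor.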
The importance of Internet bandwidth and security
In the "Colocation-R-US" environment, there was scant regard for Internet bandwidth and security. I was free to download to my heart's content, and I was even allowed to operate without a commercial-grade firewall (although I took the precaution of running a firewall inside a virtual appliance). This meant I could frequently download very large .ISO files - for example, when VMware released a new version of its vSphere platform. I was also free to enable any system I liked for remote access, and I was given a generous bundle of 20 IP addresses as part of my 16 AMP package.
It's worth saying that the number of IP addresses available went up as I upgraded from an 8 AMP half-height rack to a 16 AMP full-height rack package. In time, though, I heard from other users about IP conflicts. Because no firewall usage was enforced, it was entirely possible for one user to select an IP scheme that overlapped with another's. There were no proper bandwidth allocation policies either, and occasionally I would see slow responses when remotely accessing my VMware View virtual desktop or Citrix XenApp desktop. Perhaps someone was downloading very large files on the same pipe?
When 8 IP addresses aren't 8 usable IP addresses
In some cases, you might receive a certain number of IP addresses and, depending on the way a provider counts them, be left with fewer usable addresses than you anticipated.
I personally maintain a Citrix XenApp server, Secure Shell (SSH), Terminal Services and a VPN connection to my lab. These services are all provided as virtual machines and are set to reside on different ESX hosts and on different storage layers. The theory is that if a virtual machine, physical server or storage unit dies, I should still be able to remote in. It also covers me against different protocols being blocked or open at different locations around the world: I can gain access via TCP ports 22, 1494, 3389 and 443. I want to avoid having to book time at the colocation data centre or having to call on remote hands.
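That fallback logic - try each protocol in turn until one gets through - can be sketched with a simple TCP probe. The host name here is a placeholder, but the ports match the paths described above: SSH (22), Citrix ICA (1494), RDP/Terminal Services (3389) and HTTPS/VPN (443).

```python
import socket

# Hypothetical endpoints for each remote-access path; the host name is a
# stand-in, the ports are the real protocols I keep open.
ENDPOINTS = {
    "ssh":    ("lab.example.com", 22),
    "citrix": ("lab.example.com", 1494),
    "rdp":    ("lab.example.com", 3389),
    "vpn":    ("lab.example.com", 443),
}

def first_reachable(endpoints, timeout=3.0):
    """Return the name of the first endpoint that accepts a TCP connection,
    or None if every path is blocked or down."""
    for name, (host, port) in endpoints.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return name
        except OSError:
            continue  # this protocol is blocked or down here - try the next
    return None
```

Running something like this from a new location quickly tells you which of the four doors into the lab is open before you commit to a session.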
You might think that if I have four separate IPs for four separate systems, then out of eight IP addresses four would be free. You'd be wrong. If the provider gives you eight IP addresses, it will be using classless inter-domain routing (CIDR) notation to subnet the IP space. This provides a range, for example, from w.x.y.64 to w.x.y.71. Out of this range of eight addresses, the network address would be w.x.y.64, the default gateway would typically be w.x.y.65, and w.x.y.71 would be the subnet's broadcast address. That leaves just w.x.y.66 to w.x.y.70 available for your use - in other words, just five IP addresses. This is pretty much standard practice in colocation data centres where proper management of the IP space is a matter of course.
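You can verify this count with Python's standard `ipaddress` module. Here I use the RFC 5737 documentation range 203.0.113.0/24 as a stand-in for the provider's real block; the /29 carve-out mirrors the w.x.y.64 to w.x.y.71 example.

```python
import ipaddress

# A /29 gives 8 addresses, matching the w.x.y.64-w.x.y.71 example above.
block = ipaddress.ip_network("203.0.113.64/29")

network   = block.network_address      # 203.0.113.64 - the network address
broadcast = block.broadcast_address    # 203.0.113.71 - the broadcast address
hosts     = list(block.hosts())        # .65 through .70
gateway   = hosts[0]                   # providers typically take the first host
usable    = hosts[1:]                  # what is left for your equipment

print(f"Allocated: {block.num_addresses}, usable: {len(usable)}")
# Allocated: 8, usable: 5
```

So before you size a package by its headline IP count, subtract the network, broadcast and gateway addresses to see what you can actually assign.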
If in doubt, go with your gut instinct
There's one final piece of advice I would give anyone considering cheap colocation services: do a site visit, then go with your gut instinct. At one colocation provider I visited, I was very impressed by the quality. However, I was worried by the company's almost empty data centre hall. They were very keen to have my business, and as the small-fry owner of a single rack, it made me wonder about their financial state, given how desperate they were for me to sign up.
The company was a new business that had taken on the facility from another provider that had gone out of business. Although it is still operating, I was concerned that I might sign up for a year, only to find I later needed to move out if the company went under. I decided not to go with this company based on my gut instinct - something I should have listened to when I chose my first provider, "Colocation-R-US".
Although I have outlined some hard technical criteria behind my decision, there is a softer set of criteria as well: less tangible features, such as the level of security surrounding the facility, and whether the provider issues you with a security key rather than making you find a member of staff to swipe you in and out of the location.
Replicate data across several locations
Additionally, given my interest in disaster recovery issues and VMware SRM, I was pleased to hear that the colocation data centre provider I selected had dark-fibre infrastructure to allow customers to replicate data around their various locations in the UK.
All of these softer aspects are important to me after my "Colocation-R-US" experiences, as they reassure me that I am working with a professional operation. So the next time someone visits my new location, I won't feel embarrassed and find myself apologising for the gaffer tape wrapped around the power strips, as I did with "Colocation-R-US".
To find out about more common mistakes made when choosing a colocation data centre provider - including why you should visit potential colocation sites and why you should beware of half-rack packages - see parts one and two of this series.
MIKE LAVERICK'S BIO:
Mike Laverick is a professional instructor with 15 years of experience with technologies such as Novell, Windows and Citrix and has been involved with the VMware community since 2003. Laverick is a VMware forum moderator and member of the London VMware User Group Steering Committee. In addition to teaching, Laverick is the owner and author of the virtualisation website and blog RTFM Education, where he publishes free guides and utilities aimed at VMware ESX/VirtualCenter users. In 2009, Laverick received the VMware vExpert award and helped found the Irish and Scottish user groups. Laverick has had books published on VMware Virtual Infrastructure 3, VMware vSphere 4 and VMware Site Recovery Manager.