When it comes to moving your data centre's applications to a cloud hosting service, many concerns, including costs and ROI benefits, have been well covered. But the biggest question is whether organisations feel comfortable putting business-critical data and applications "out there" in the cloud.
The influx of new entrants into the market means that standards are likely to vary greatly, so organisations considering remote third-party hosting services should conduct a thorough review of potential providers' infrastructure and processes. This is a key step if you are concerned about losing control of your data.
Cloud providers should have solid data centre infrastructures
Among the first concerns to put on the checklist are the physical or "housekeeping" elements of the service-hosting location(s).
A good provider will host its services from sophisticated Tier 3 or Tier 4 data centres. You should expect these to be manned 24/7 by security personnel and to be designed with state-of-the-art fire suppression systems, as well as multiple power feeds from separate sources.
Likewise, sites should safeguard against service disruption caused by connectivity problems. They should retain multiple communications links from diverse providers, complete with automatic failover facilities. Customers themselves should consider verifying these arrangements before committing to a provider.
Cloud computing security concerns
The physical aspects of a provider's services should go some way toward convincing prospective customers that the provider is serious about their business. But what about the thorny issue of security in the cloud? How is data kept safe, and which systems will ensure that information doesn't fall into the wrong hands?
Each customer's data should be stored on separate physical or virtual servers, with industry-recognised firewalls fencing off each customer's data. This setup is designed to protect against malicious attacks and to ensure that one customer's data is not mixed up with another's.
Importantly, security should be looked after by a certified professional who understands the responsibilities of securing an infrastructure -- there is no point putting a great lock on your front door and then leaving it on the latch.
To avoid errors, providers should demonstrate tight policies on how they process customer requests for allocating new end users to applications. This detail gives an indication of how rigorous the provider is.
And while cloud providers are tasked with looking after the applications and the delivery environment, customers should be allowed to specify whether a provider's staff may access their data. Customers should be able to reserve for their own employees any maintenance or support tasks that require working directly with their data.
Disaster recovery in the cloud
Disaster recovery and business continuity facilities are obviously a key concern. Customers' data should be backed up at least daily, with the flexibility to do it more frequently if an individual organisation requires it.
If backups are stored at remote sites, they should be encrypted in transit and stored in an encrypted state at the end location. It is important to check service-level agreements to verify how quickly the supplier will perform data restores.
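As a minimal sketch of encryption at rest, assuming the `openssl` command-line tool is available (the file names and passphrase here are placeholders, not a recommended workflow), a backup archive might be encrypted before it leaves the primary site and then verified as restorable:

```shell
# Create a stand-in for a real backup archive.
echo "customer records" > backup.tar

# Encrypt the archive before transfer and storage (AES-256, key derived
# with PBKDF2). In practice the passphrase would come from a key-management
# system, never the command line.
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in backup.tar -out backup.tar.enc -pass pass:example-passphrase

# Restore from the encrypted copy and confirm it matches the original.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in backup.tar.enc -out restored.tar -pass pass:example-passphrase
cmp backup.tar restored.tar && echo "restore verified"
```

The same check doubles as a restore test: an encrypted backup that has never been decrypted end-to-end offers little assurance when the service-level agreement is invoked.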
Similarly, it is important to check the contingency processes in the event that the primary hosting site goes down due to a major disaster. Does the provider have a well-formulated plan and the necessary systems to quickly redeploy customers' environments and data to a secondary location?
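The failover question can be illustrated with a trivial sketch (the hostnames and the reachability probe are hypothetical stand-ins): when the primary hosting site stops responding, traffic should be redirected to the secondary location:

```shell
primary="primary.example.com"
secondary="dr.example.com"

# Hypothetical reachability probe; a real check would use ping, curl or a
# monitoring agent. Here it is stubbed to simulate a primary-site outage.
probe() { [ "$1" != "primary.example.com" ]; }

# Fall back to the secondary site if the primary fails the probe.
if probe "$primary"; then
  site="$primary"
else
  site="$secondary"
fi
echo "using $site"
```

In a real deployment this decision would sit in DNS, a load balancer or the provider's orchestration layer rather than a script, but the contractual question is the same: how quickly does the switch happen, and is it tested?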
As with all new products and services, early adopters should proceed cautiously before moving their applications into the cloud.
But as the technology becomes more familiar, confidence will increase. The quality of infrastructure and processes will become standardised, the mystery around cloud computing will fade, and the hesitancy to surrender control of data and applications will gradually disappear.
Graham McLean is the managing director of CI-Net and a contributor to
This was first published in October 2009