How to move data and applications in the cloud

While there are many reasons why moving to a colocation datacentre or cloud facility is good for the business, the process is still fraught with risk.

Previously, we looked at how a “plan B” was required for dealing with the need to move away from a failed relationship with a datacentre or cloud provider – or indeed the failure of the provider itself. Remember 2e2?

Our advice was that IT executives should ensure the contract between their organisation and the cloud provider names the organisation as the owner of the data. However, this only takes things so far – it still leaves the question of what to do with that data afterwards.

Ultimately, the answer is “it depends”. The first question concerns the application that created the data – can you still gain access to the same application elsewhere?

If the existing agreement was for infrastructure as a service (IaaS) or platform as a service (PaaS), then your organisation will have owned the applications anyway, so reinstalling them on a different cloud platform should not be overly problematic. 

The difficulties of moving data in the cloud

In the case of software as a service (SaaS), however, there could be bigger problems. If the service being offered was based on a standard application – for example, SugarCRM or OpenERP – it should be possible to find another service provider hosting the same application. There may be differences in the implementation, but all that should be required is an extract, transform and load (ETL) exercise to make sure the data fits the schema of the new service provider's implementation.
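As a rough illustration, the ETL step can be as small as a script that reads the export taken from the outgoing provider, renames fields to match the new schema and writes out a file the new provider's import tool can consume. The sketch below assumes a CSV export and a JSON import; the file names and field mapping are purely hypothetical:

```python
import csv
import json

# Hypothetical mapping from the old provider's export columns to the new schema
FIELD_MAP = {
    "Contact Name": "full_name",
    "E-mail": "email",
    "Phone No.": "phone",
    "Acct": "account_id",
}

def transform_row(row):
    """Rename fields to match the target schema, dropping anything unmapped."""
    return {new: (row[old] or "").strip() for old, new in FIELD_MAP.items() if old in row}

def etl(source_csv, target_json):
    # Extract: read the CSV export obtained from the outgoing provider
    with open(source_csv, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # Transform: reshape each record to the new provider's schema
    records = [transform_row(r) for r in rows]

    # Load: write a JSON file the new provider's import tooling can consume
    with open(target_json, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2)

if __name__ == "__main__":
    etl("old_provider_export.csv", "new_provider_import.json")
```

In practice, the transform step is also where data cleansing and any type conversions would sit, but the basic extract, reshape and load pattern stays the same.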

IT executives should remember that any modifications the previous provider allowed them to make to the application (such as skinning it with a logo or adding extra functions) will need to be carried out again with the new provider.

In many cases, it will not be possible to pull any of these changes from the previous provider, so re-implementing them will be the hardest part of the transition. This means that any changes carried out, even in a SaaS environment, must be documented and stored outside of the SaaS environment – a full change log is necessary so that the changes can be re-implemented if a change of provider is needed.
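As an illustration of what keeping that record might look like, the sketch below appends a simple entry for each customisation to a log file held outside the SaaS environment (ideally in version control); the fields shown are hypothetical rather than a prescribed format:

```python
import json
from datetime import date

# Illustrative record of a customisation made inside the SaaS application,
# kept outside the provider's platform so it can be re-implemented elsewhere
change = {
    "date": str(date.today()),
    "area": "opportunity form",
    "change": "added custom field 'renewal_date' (date, mandatory)",
    "made_by": "j.smith",
    "reason": "renewals reporting",
}

# Append one JSON record per line to a locally held change log
with open("saas_change_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(change) + "\n")
```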

The real problems come when a business is moving from a provider which has proprietary software in place. This may be a provider which has so heavily modified an open source application as to essentially make it a new application. Or it could be a SaaS provider, such as Salesforce.com, which owns the application and does not allow any other cloud provider to offer it on their own platform.

However, whereas Salesforce.com is unlikely to hit the buffers any time soon, some of the smaller dedicated SaaS providers are bound to fail, just through the law of averages.

Quocirca recommends that the original choice of SaaS provider takes this risk into account. If you haven’t already adopted software as a service, then assess the risk of the provider going bust, along with the effort that would be required to take data from the provider's system and get it into a form that another system could use within a short period of time.

Planning for recovery of SaaS data

Those businesses that have already made the move to a SaaS provider should make sure a plan B is in place for getting to a known recovery point objective (RPO) and a known recovery time objective (RTO).
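One way to keep the RPO side of that plan honest is to check regularly that the most recent off-platform export of the SaaS data is no older than the agreed target (the RTO side is best measured during the testing described below). The sketch that follows assumes exports are dropped as JSON files into a local directory; the directory name and the 24-hour target are purely illustrative:

```python
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical plan B target: off-platform data no more than 24 hours old
RPO_HOURS = 24

def check_rpo(export_dir="saas_exports"):
    """Warn if the newest off-platform export is older than the RPO target."""
    exports = sorted(Path(export_dir).glob("*.json"), key=lambda p: p.stat().st_mtime)
    if not exports:
        print("No exports found - the RPO cannot currently be met")
        return
    age_hours = (datetime.now(timezone.utc).timestamp() - exports[-1].stat().st_mtime) / 3600
    status = "OK" if age_hours <= RPO_HOURS else "BREACH"
    print(f"Latest export is {age_hours:.1f} hours old (RPO {RPO_HOURS}h): {status}")

if __name__ == "__main__":
    check_rpo()
```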

The first thing needed is to identify what the target application would be. Quocirca advises that this should be either a widely adopted application among SaaS providers, or an application from a very large and hopefully more financially secure proprietary SaaS provider.

Next is the need to identify the schemas used by both systems. Matching field names and types is necessary here to make sure that fidelity of information is maintained when the data is moved across. This will also define the ETL activity that will have to be carried out.
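One way to make that comparison concrete is to write both schemas and the proposed field mapping down in machine-readable form, then flag any unmapped fields or type mismatches that would need explicit conversion during the ETL step. The schemas, field names and type rules below are hypothetical:

```python
# Hypothetical data dictionaries for the current and target SaaS applications
SOURCE_SCHEMA = {"Contact Name": "string", "E-mail": "string", "Created": "date", "Value": "decimal"}
TARGET_SCHEMA = {"full_name": "string", "email": "string", "created_at": "datetime", "deal_value": "decimal"}

# Proposed mapping of source fields to target fields
FIELD_MAP = {"Contact Name": "full_name", "E-mail": "email", "Created": "created_at", "Value": "deal_value"}

# Type pairs that can be carried across without losing information
COMPATIBLE = {("string", "string"), ("date", "datetime"), ("decimal", "decimal")}

def check_mapping():
    """Report unmapped source fields and type pairs that need explicit conversion."""
    for src, src_type in SOURCE_SCHEMA.items():
        tgt = FIELD_MAP.get(src)
        if tgt is None:
            print(f"UNMAPPED: {src} has no target field - data would be lost")
        elif (src_type, TARGET_SCHEMA[tgt]) not in COMPATIBLE:
            print(f"CONVERT: {src} ({src_type}) -> {tgt} ({TARGET_SCHEMA[tgt]})")
        else:
            print(f"OK: {src} -> {tgt}")

if __name__ == "__main__":
    check_mapping()
```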

Then there is the need for testing. Such an activity cannot be left to chance in the hope that it will just work. You will need to carry out a test by taking data from the existing environment and moving it into the new environment. This does not have to be based on a permanent contract with the second provider – it is just a test to make sure the process works.
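As a rough illustration of what such a test might check, the sketch below compares a trial export with what the new system reports back after the load – matching record counts and spot-checking a random sample for silent truncation or corruption. It assumes both sides can be dumped as JSON lists keyed on an email address, which is purely illustrative:

```python
import json
import random

def verify_trial_migration(source_file, migrated_file, sample_size=20):
    """Compare the records exported from the old system with those read back from the new one."""
    with open(source_file, encoding="utf-8") as f:
        source = {r["email"]: r for r in json.load(f)}
    with open(migrated_file, encoding="utf-8") as f:
        migrated = {r["email"]: r for r in json.load(f)}

    # Record counts should match exactly; anything missing needs investigating
    missing = set(source) - set(migrated)
    print(f"{len(source)} source records, {len(migrated)} migrated, {len(missing)} missing")

    # Spot-check a random sample, field by field
    for key in random.sample(sorted(source), min(sample_size, len(source))):
        if key in migrated and source[key] != migrated[key]:
            print(f"MISMATCH for {key}: {source[key]} vs {migrated[key]}")

if __name__ == "__main__":
    verify_trial_migration("trial_export.json", "readback_from_new_provider.json")
```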

Based on the test being successful, you can create a full, formalised plan for what your organisation needs to do should the worst happen. This should also include indications of how long such activities are expected to take – and plans for how the business will continue to operate during this downtime. This may well involve falling back on manual processes – and any data gathered during these manual processes will need to be input into the new system as it comes online.

The last area that should be covered by the contract is that the old service provider must securely wipe your organisation’s data from their systems – something that is more often than not overlooked.

Data movement will become easier as cloud matures

Hopefully, as time progresses and cloud standards bed down, it will become possible to move applications and data between standardised cloud platforms, such as OpenStack, making the whole activity a great deal more seamless.

There are also commercial systems coming to market that may make life easier. For example, Vision Solutions’ Double-Take Move can be used to migrate data from one cloud provider to another – even where the source provider refuses to cooperate. Quocirca expects other similar services to come to market over time – but for many, a solid plan for dealing with a migration from the ground up will be needed.

Cloud providers are no different to any other commercial entity. There will be failures along the way, and this is no reflection on cloud as a model for implementing an IT platform. The problem is that the failure of a cloud provider hits many organisations at once, as cloud platforms are, by definition, multi-tenanted.

IT must be prepared with a strategy to minimise the impact of losing its cloud or datacentre provider – whether through a breakdown in the relationship or the complete failure of the provider itself.


Clive Longbottom is a service director at UK analyst Quocirca Ltd

 


This was first published in April 2013

 
