Why databases must follow the applications to the Cloud

As increasing numbers of applications move to the Cloud, architects need to think about the implications for system performance. A key benefit of the Cloud is access to resources on demand: as applications need more processor, memory or disk space, these can be provisioned.

But all of this is irrelevant if the applications and the databases on which they rely are too far apart. This creates a significant challenge for many companies and can be compounded by the Cloud model they choose.

As the application and the database get further apart, the latency increases. Typically, organisations whose databases and applications are in the same Metropolitan Area Network (MAN) should be able to cope with the latency, but beyond this the delay causes problems. With some types of data this is not an issue, but with databases where data has to be properly committed it can cause transactions to fail and has the potential to cause data corruption. This problem is particularly acute in highly transactional environments.
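
To put rough numbers on this, here is a back-of-the-envelope sketch in Python. The round-trip times and the two-round-trips-per-commit figure are assumptions for illustration, not measurements of any particular network or database.

```python
# Rough illustration of why distance hurts committed transactions.
# The round-trip times below are assumed figures, not measurements.
SCENARIOS = {
    "same rack (LAN)": 0.0005,     # ~0.5 ms round trip
    "same city (MAN)": 0.002,      # ~2 ms round trip
    "cross-country (WAN)": 0.040,  # ~40 ms round trip
}

ROUND_TRIPS_PER_COMMIT = 2  # e.g. ship the statement, then wait for the commit ack

for label, rtt in SCENARIOS.items():
    wait = rtt * ROUND_TRIPS_PER_COMMIT  # network wait per committed transaction
    print(f"{label:20s} {wait * 1000:5.1f} ms per commit "
          f"-> at most ~{1 / wait:,.0f} serial commits/sec")
```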

It might seem simple, therefore, to just co-locate the databases and the application, but the two key reasons that prevent companies from moving their databases to the Cloud are compliance and data protection. Neither of these should be underestimated, especially if you are in the financial services industry.

Even if you can resolve the legal challenges around these two issues, it is not plain sailing. You also need to think about business continuity and backups. Some of this can be solved by taking snapshots and then copying the data back to the corporate datacentre, but this still leaves open the risk of incomplete and lost transactions if you have to rely on those snapshots for recovery.
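
As a minimal sketch of that approach, the Python loop below uses hypothetical placeholders (take_snapshot and copy_to_datacentre are not a real Cloud API, and the hourly interval is an assumption) to show where the exposure lies: anything committed after the last copied snapshot exists only in the Cloud, so recovering from that snapshot loses it.

```python
import time

# Minimal sketch of a snapshot-and-copy-back cycle. take_snapshot() and
# copy_to_datacentre() are hypothetical placeholders, not a real Cloud API.
SNAPSHOT_INTERVAL_SECS = 3600  # hourly snapshots - an assumed schedule

def take_snapshot(database: str) -> str:
    """Stand-in for snapshotting the Cloud database; returns a snapshot id."""
    return f"{database}-snap-{int(time.time())}"

def copy_to_datacentre(snapshot_id: str) -> None:
    """Stand-in for shipping the snapshot back to the corporate datacentre."""
    print(f"copied {snapshot_id} back to the corporate datacentre")

def snapshot_loop(database: str) -> None:
    while True:
        copy_to_datacentre(take_snapshot(database))
        # Everything committed during the next interval exists only in the
        # Cloud copy; recover from this snapshot and those transactions are gone.
        time.sleep(SNAPSHOT_INTERVAL_SECS)
```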

Another solution is to use Continuous Data Protection (CDP). This ensures that data is copied, at a block level, as soon as it is changed. However, the ability of CDP to work synchronously is still limited by distance, and anything outside the range of a MAN is likely to have to work asynchronously.
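
The toy Python sketch below illustrates that trade-off. It is not any vendor's CDP implementation: send_block() stands in for the real transport and the 40 ms round trip is an assumed WAN figure.

```python
import queue
import threading
import time

WAN_ROUND_TRIP_SECS = 0.040  # assumed wide-area round trip

def send_block(block: bytes) -> None:
    """Stand-in for shipping one changed block to the remote copy."""
    time.sleep(WAN_ROUND_TRIP_SECS)

def write_synchronously(block: bytes) -> None:
    # The write does not complete until the remote copy has the block, so every
    # write pays the full round trip - workable on a MAN, painful beyond it.
    send_block(block)

_pending: "queue.Queue[bytes]" = queue.Queue()

def write_asynchronously(block: bytes) -> None:
    # The write returns immediately and the block is shipped in the background,
    # at the cost of a window in which the remote copy lags behind.
    _pending.put(block)

def _drain_in_background() -> None:
    while True:
        send_block(_pending.get())

threading.Thread(target=_drain_in_background, daemon=True).start()
```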

One area where this problem can become exacerbated is when your choice of Cloud model revolves around the idea of Cloudbursting. While Cloud vendors differ in their implementations, the principle is that you use the Cloud as an extensible set of resources: when you need additional capacity you move an application into the Cloud, and when the demand drops you bring it back to the datacentre.

The most common way of implementing Cloudbursting is to have one copy of the application held locally and another in the Cloud. When you need to switch, you simply turn on the Cloud version and redirect users to that location. The same principle can be used for the database associated with the application: two copies, one local and one in the Cloud.

The two copies are kept synchronised in a master/slave configuration. There will still be latency issues that depend on distance, but in this scenario they can be managed. When you switch the application, you pause the master database, force any outstanding updates across to the slave and then reverse the relationship. Any operations that occur during this process are simply written to a log and then applied after the master/slave relationship is re-established.
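
A minimal sketch of that switchover sequence, in Python: every call here is a hypothetical placeholder for whatever replication tooling your database actually provides, not a real API.

```python
# Sketch of the switchover described above. local_db, cloud_db and app_router
# are hypothetical objects; none of these methods belong to a real product.

def switch_to_cloud(local_db, cloud_db, app_router) -> None:
    local_db.pause_writes()                    # pause the master database
    local_db.flush_pending_updates(cloud_db)   # force remaining updates across to the slave
    cloud_db.wait_until_caught_up(local_db)    # confirm the two copies match

    cloud_db.promote_to_master()               # reverse the relationship...
    local_db.demote_to_slave(master=cloud_db)  # ...the local copy now follows the Cloud copy

    # Operations that arrived during the swap were written to a log; apply
    # them once the new master/slave relationship is re-established.
    cloud_db.apply_logged_operations()

    app_router.redirect_users_to(cloud_db)     # point users at the Cloud application

# Bringing the workload home later is the same sequence with the roles
# reversed, e.g. switch_to_cloud(cloud_db, local_db, app_router).
```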

As the need for the remote resources ends, you switch the application back to your local datacentre and repeat the switch with the databases.

All of this, however, is little more than a temporary fix. Over time, and with heavily used applications, the delays in synchronising the two databases can begin to impact performance. What is really needed is a better way to architect applications so that they can manage remote databases. Unfortunately, none of the key database vendors offer developer guidance on the best way to do this.

So before you rush into pushing your applications into the Cloud while retaining your databases, think about latency and whether you need to redesign your applications to perform well against remote databases.

Archived comments:

Cliff Saran | June 11, 2010 11:54 AM

Ian, perhaps there's a case for caching query results nearer to the apps?

Ian Murphy replied to comment from Cliff Saran | June 14, 2010 2:11 PM

Cliff, just caching the data near to the apps isn't enough. Cached data needs to be refreshed to keep it current and then written to disk in order to be saved properly. Bandwidth is still an issue when moving that amount of data between the local site and the Cloud.

There is also the question of which apps you are referring to. Core corporate systems such as CRM, ERP, large databases and mail can sit alongside the data in the Cloud, provided you solve the problems of security, backup and disaster recovery.

When we move to applications such as BI, however, you have to monitor the impact of greater use of desktop tools pulling large amounts of data down to the local device. If you are holding core data in the Cloud, there is a good case for a replicated local copy that is part of your solution to backup and DR as well as providing a read-only copy for BI users.

At the moment, however, when you ask the question of how to architecturally design for this scenario, most vendors effectively shrug their shoulders and say that they are looking at the problem.
