The multi-cloud myth: Why workload portability is still a pipe dream

While IT providers are fond of suggesting that moving workloads between clouds is as easy as dragging and dropping apps between environments, the reality can be far more complex.

Multi-cloud represents a long overdue recognition from the IT industry that organisations will provision IT from more than one cloud provider, and on-premise IT is unlikely to disappear any time soon.

Problems with the multi-cloud model start to surface when companies realise that moving workloads from one cloud to another is not as easy as the providers would have people think.

Defining multi-cloud

The term multi-cloud entered IT parlance in the past year or two, and reflects the reality that many organisations already use one or more software as a service (SaaS) applications for functions such as HR or email, plus a platform as a service (PaaS) for application development, and probably at least one infrastructure as a service (IaaS) platform for operating some workloads on virtual machines.

“We’re increasingly seeing more customers using multiple public clouds, because some of the cloud providers have some degree of specialisation with regards to workloads,” says Matthew Cheung, research director at Gartner’s Technology and Service Provider Research group.

These specialisations include key enterprise applications such as SQL Server (in the case of Microsoft’s Azure) or the ability to link to artificial intelligence and data analytics services in the case of Google Cloud. Amazon Web Services (AWS) has a plethora of functions and services that are unique to its platform, and is adding to these with the release of hundreds of new services every year.

A mix-and-match approach to cloud

Many organisations will have started using IaaS during the early stages of their move to the cloud, adopting a “lift and shift” approach by moving existing workloads off-premise. This typically sees servers converted to virtual machines, configured together so that the architecture in the cloud replicates the one used on-premise.

Whether by accident or design, many organisations will find they wind up with workloads on more than one IaaS platform. This could be for data sovereignty reasons, to avoid lock-in, or for redundancy.

This is no problem, say suppliers touting hybrid cloud products and services, because workloads can be moved with ease from one cloud to another if a rival provider offers a better deal, or introduces new features or services the organisation requires.

But just how easy is it to relocate workloads, should your organisation decide the cloud they are on is not the best home for them? Suppliers pitching the hybrid cloud scenario like to make it sound as if you can simply drag and drop an application from one cloud onto another, but in reality there are numerous obstacles to overcome.

Busting the multi-cloud myth

For one thing, server instances vary in characteristics between cloud providers, and while open formats exist for packaging up and transferring virtual machine images, such as the Open Virtualization Format (OVF), these are rarely used in practice. In addition, traditional three-tier applications typically need access to a database deployed on a separate server cluster, further complicating migration.
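
To give a sense of what a VM-level migration involves in practice, here is a minimal sketch using boto3, the AWS SDK for Python, to register an exported disk image with AWS via its VM Import service. The bucket, key and description are hypothetical placeholders, and it assumes the image has already been exported from the source cloud and uploaded to S3.

    import boto3  # AWS SDK for Python

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Ask AWS to convert an uploaded VMDK disk image into a native
    # machine image (AMI). Bucket and key names are placeholders.
    response = ec2.import_image(
        Description="Workload migrated from another cloud",
        DiskContainers=[{
            "Description": "Exported web tier VM",
            "Format": "vmdk",
            "UserBucket": {
                "S3Bucket": "my-migration-bucket",
                "S3Key": "exports/web-tier.vmdk",
            },
        }],
    )
    print("Import task started:", response["ImportTaskId"])

Even when the conversion succeeds, networking, access credentials and database endpoints all still need to be rebuilt on the destination cloud.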

All of this means virtual machines are not the ideal vehicle to use if you want the freedom to shift workloads from cloud to cloud. An alternative option is containers, which have taken off since Docker introduced its namesake platform, offering developers an easy way to encapsulate application code and distribute it for execution either on-premise or in the cloud.
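
As a minimal illustration of that encapsulation, the Python sketch below uses the Docker SDK for Python to build and run an image; the directory path, image tag and port are hypothetical. The same image, pushed to a registry, will run unchanged on any host with a container runtime, on-premise or in a public cloud.

    import docker  # Docker SDK for Python (pip install docker)

    client = docker.from_env()

    # Build an image from a local directory containing a Dockerfile.
    # Path and tag are placeholders for illustration.
    image, _ = client.images.build(path="./app", tag="myorg/myapp:1.0")

    # Run the container, mapping port 8080 on the host.
    container = client.containers.run(
        "myorg/myapp:1.0",
        detach=True,
        ports={"8080/tcp": 8080},
    )
    print("Started container:", container.short_id)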

Containers have some advantages as far as portability goes: they are much smaller than virtual machines, often just a few tens of megabytes, whereas a virtual machine contains an entire operating system as well as the application, and totals several gigabytes as a result.

Containers are, effectively, a Linux technology (save for the recent introduction of Windows containers), and are largely employed for so-called cloud-native workloads running on a public cloud, rather than traditional enterprise applications operating on company-owned systems today.

Containers are also a less mature technology than virtual machines, so the ecosystem is evolving rapidly, with competing solutions for security, high availability, and essentials such as persistent storage.

And even here, there is little prospect of being able to simply pick up your workload from one cloud and migrate it across to another, according to Cheung, because workloads do not tend to be neatly confined inside a virtual machine or container.

Instead, they tend to have dependencies on other functions and services, and as these differ between cloud platforms, it becomes difficult to shift a workload to a different cloud unless it is very basic and self-contained.

“Movement of workloads between different public clouds is really difficult today,” he adds.

Workarounds for cloud portability

There is one way of getting around this, however, and that is by using an application environment, such as a PaaS, that is not tied to any particular cloud platform. A PaaS itself often provides many of the services required by an application, and can thus aid mobility if the entire environment can be moved from one cloud to another.

“PaaS suppliers are actually quite IaaS agnostic,” says Cheung. “So to make an analogy, that means you can lift up the whole house and move the whole house to another place, and that would not require a lot of effort as long as all the underlying infrastructure is there.”

A notable example of this is Red Hat’s OpenShift application platform, which enables users to build and deploy applications using Docker containers. This is available in a version that can be deployed on-premise, as well as on Microsoft Azure, AWS and the Google Cloud Platform. This should, in theory, allow a customer to move application code between any of these platforms.
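
Because OpenShift is built on Kubernetes, the same deployment definition can, in principle, be applied to clusters running on different clouds simply by switching contexts. The sketch below uses the official Kubernetes Python client; the context names and image are hypothetical, and it assumes both clusters are already set up in your kubeconfig.

    from kubernetes import client, config

    # Hypothetical kubeconfig contexts pointing at clusters on two clouds.
    for context in ["openshift-on-aws", "openshift-on-azure"]:
        config.load_kube_config(context=context)
        apps = client.AppsV1Api()

        deployment = client.V1Deployment(
            metadata=client.V1ObjectMeta(name="myapp"),
            spec=client.V1DeploymentSpec(
                replicas=2,
                selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
                    spec=client.V1PodSpec(containers=[
                        client.V1Container(name="myapp", image="myorg/myapp:1.0"),
                    ]),
                ),
            ),
        )
        # Apply the identical deployment to each cluster in turn.
        apps.create_namespaced_deployment(namespace="default", body=deployment)
        print(f"Deployed to {context}")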

The same caveats over dependencies still apply here, though. If your applications end up relying on functions or services that are native to a particular cloud platform, it may be difficult to move that application elsewhere. And expecting developers not to use a handy function or service because it might lead to a lock-in situation is probably a hopeless cause.

The promise of serverless computing

Looking beyond PaaS, organisations could turn to the emerging trend of so-called serverless computing, whereby applications are built around specific pay-per-use functions, which gives rise to the alternative name of “functions as a service”. A good example is Amazon’s AWS Lambda.

Almost all serverless platforms support Python, so coding applications in that language should, in principle, allow the code to be moved easily from one to another. Again, problems arise when different clouds support different functions: unless your code is very generic and avoids linking to any cloud-specific functions or services, it will not be a seamless move.
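
One common mitigation, sketched below with hypothetical function names, is to keep the cloud-specific entry point as thin as possible: the business logic is plain Python with no cloud imports, and an AWS Lambda handler acts purely as an adapter. Porting to another platform then means writing a new adapter rather than rewriting the logic.

    # Provider-agnostic business logic: plain Python, no cloud imports.
    def process_order(order: dict) -> dict:
        total = sum(item["price"] * item["qty"] for item in order["items"])
        return {"order_id": order["id"], "total": total}

    # Thin AWS Lambda adapter: the only cloud-specific code.
    def lambda_handler(event, context):
        result = process_order(event)
        return {"statusCode": 200, "body": result}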

Another barrier to achieving the promise of cloud portability is the data itself. It may not be practical to move some kinds of information off-premise for compliance or regulatory reasons.

Alternatively, the choice of cloud platforms you can move workloads to may be constrained for similar reasons, because of where providers’ datacentres are located, or because only a limited number of providers meet the certification level required to run particular workloads.

On top of that, the volume of data involved is also likely to be a factor. While users today enjoy relatively fast internet connections, transferring terabytes of data over the internet can take days to complete.
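
The arithmetic is easy to check: the snippet below computes a best-case transfer time for a given data volume and line speed, ignoring protocol overheads and contention, which only make matters worse.

    def transfer_days(terabytes: float, megabits_per_sec: float) -> float:
        """Best-case transfer time in days, ignoring overheads."""
        bits = terabytes * 8e12                   # 1 TB = 8 x 10^12 bits
        seconds = bits / (megabits_per_sec * 1e6)
        return seconds / 86400

    # 20 TB over a dedicated 100 Mbps line: roughly 18.5 days.
    print(f"{transfer_days(20, 100):.1f} days")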

Migrating data from your premises to the cloud can be done using a disk-based appliance, such as AWS Snowball, which is simply shipped to the cloud provider’s datacentre.

But moving data from one cloud platform to another is more problematic, as different cloud providers use different APIs and standards for data storage. Providers’ pricing models, which typically charge for outbound data transfer, can also make it costly for a customer to extract and move their data.

The management of multi-cloud

Compounding these issues, cross-platform cloud management tools are still relatively immature, with Gartner describing them as an “emerging and highly fragmented market” in a recent report.

Cloud and infrastructure suppliers have their own management tools, but these are largely focused on tight integration with their own software stack, leaving it to third-party suppliers to offer some form of unified layer that can support multiple clouds.

This situation leaves IT departments with a choice: master an array of “best of breed” management tools to oversee their multi-cloud environment, or use a single cross-platform management tool that may offer more limited functionality.
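
For basic tasks, open source libraries can paper over some of the differences. The sketch below uses Apache Libcloud, a Python library that exposes a common API across many providers, to list the server instances running on two clouds; the credentials, region and project names are placeholders.

    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    # Placeholder credentials; each provider has its own auth scheme.
    clouds = [
        (Provider.EC2, ("ACCESS_KEY", "SECRET_KEY"),
         {"region": "eu-west-1"}),
        (Provider.GCE, ("sa@my-project.iam.gserviceaccount.com",
                        "/path/to/key.json"),
         {"project": "my-project"}),
    ]

    for provider, creds, kwargs in clouds:
        driver = get_driver(provider)(*creds, **kwargs)
        # The same list_nodes() call works against either cloud.
        for node in driver.list_nodes():
            print(provider, node.name, node.state)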

The take-away message is that mobility between cloud platforms remains a non-trivial issue, although technologies such as containers are a step in the right direction. Enterprises should therefore plan carefully when mapping out their cloud strategy, as switching providers may prove difficult or costly should they decide a different cloud would serve them better in future.
