Multi-cloud management: Making it work for the enterprise

As the hype surrounding multi-cloud infrastructure continues to grow, what do enterprises need to do to make the model work for them?

The private versus public cloud debate became a moot point once enterprises realised that a hybrid cloud approach, blending public and private cloud resources, would probably be the best fit for them.

Many interpreted the concept – somewhat simplistically – to mean enterprises would run a private environment alongside a single public cloud from Amazon Web Services (AWS) or Microsoft Azure. What has actually happened, though, is that organisations are moving to embrace a “multi-cloud” model instead.

But what constitutes a multi-cloud environment? At its most basic level, it could be an organisation that has chosen to use software as a service (SaaS) systems from a number of suppliers – for instance, Microsoft Office 365 coupled with a customer relationship management (CRM) system from another supplier.

This is unlikely to be the model most enterprises end up with, though. What will be required is a highly flexible technology platform that allows workloads to be moved around, depending on defined performance, cost and risk needs.

In many ways, a private cloud is a far more difficult platform to create than a public cloud one. While a public cloud has multiple customers with a variety of disparate workloads (and where resource requirements can be averaged out across them), a private cloud has a single customer with just a few workloads.

It is difficult to take, say, 20 private cloud workloads and build a platform that compares, on cost, with one running thousands or tens of thousands of workloads. As such, a private cloud will generally remain a more expensive platform to run workloads on than a public cloud.

However, a private cloud does provide greater control over what happens at every level – from the hardware through the operating system(s), cloud platform, applications and so on.


An organisation may choose to run a specific workload on its private cloud to make the most of this control – but only when the value of running the workload on a high-cost platform outweighs the costs associated with it.

Consider a workload running a batch analysis of a big data problem. The main run takes place at the end of the month, with smaller runs at the end of every week. The organisation may choose to run the month-end workload on the private cloud, but needs to ensure there are sufficient resources for this to take place.

Therefore, it may choose to move less business critical or lower-worth workloads from the private cloud to more generic public cloud platforms while the high-value workload runs.

The incremental weekly runs, however, may be offloaded to the public cloud, leaving the private cloud environments available to run higher priority workloads.
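The placement logic in the example above can be sketched as a simple policy: keep the high-value month-end run on the private cloud while it justifies the premium, and offload everything else to a public cloud. The workload names, values and cost figure below are purely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    business_value: int   # relative worth of running on the premium platform
    is_month_end: bool    # the high-priority month-end batch run

PRIVATE_CLOUD_PREMIUM = 50  # hypothetical extra cost of private cloud capacity

def place(workload: Workload) -> str:
    """Return the platform a workload should run on.

    The month-end run stays on the private cloud while its value justifies
    the premium; everything else is offloaded to a public cloud.
    """
    if workload.is_month_end and workload.business_value > PRIVATE_CLOUD_PREMIUM:
        return "private"
    return "public"

jobs = [
    Workload("month-end-analysis", business_value=200, is_month_end=True),
    Workload("weekly-analysis", business_value=30, is_month_end=False),
    Workload("reporting", business_value=10, is_month_end=False),
]
placements = {w.name: place(w) for w in jobs}
```

In practice this decision would sit inside an orchestration tool rather than hand-rolled code, but the policy itself is this simple at its core.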

But which public cloud should the offloads use? A single agreement with one cloud provider puts all the eggs in one basket. Using two or more different public clouds allows for better contract negotiation and higher availability of the overall platform.

Moving workloads across the multi-cloud environment

To manage this, tools must be in place to enable the easy and effective movement of workloads across the multi-cloud environment. This means containerisation is a must – whether that be Docker, rkt or LXC (or Canonical's LXD). By ensuring workloads are essentially small, self-contained images, such mobility becomes possible.
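As a minimal sketch of why self-contained images enable that mobility: a workload described once – image plus runtime settings – can be launched with the same command on any host running the container engine, whether that host sits in the private cloud or a public one. The image name and settings below are hypothetical.

```python
# A workload spec: everything needed to run it, independent of the host.
workload = {
    "image": "registry.example.com/month-end-analysis:1.4",  # hypothetical
    "cmd": ["python", "run_batch.py"],
    "env": {"DATA_URL": "s3://analytics-bucket/input"},
}

def docker_run_command(spec: dict) -> list:
    """Build the `docker run` invocation for a workload spec.

    Because the image is self-contained, the identical command works
    wherever the Docker host happens to be running.
    """
    cmd = ["docker", "run", "--rm"]
    for key, value in spec["env"].items():
        cmd += ["--env", f"{key}={value}"]
    return cmd + [spec["image"], *spec["cmd"]]
```

Moving the workload to another cloud then means pointing the same spec at a different host, rather than rebuilding the application for a new environment.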

To turn that “possible” into a “probable”, things need to be taken further. The various contextual needs of a workload need to be understood, ensuring that performance is not compromised by, for example, a transactional workload moving to a public cloud while its main data remains on the private cloud.

The latencies involved with such data traffic having to traverse the wide area network (WAN) will probably outweigh any benefits gained. 
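Whether moving a workload is worth it can be estimated with a crude back-of-the-envelope model: the time to ship its data across the WAN must be smaller than the compute time saved. All figures below are illustrative assumptions, not measured values.

```python
def worth_moving(data_gb: float, wan_mbps: float,
                 private_runtime_h: float, public_runtime_h: float) -> bool:
    """Crude check: does the compute saving beat the WAN transfer cost?

    data_gb           -- data the workload must pull across the WAN
    wan_mbps          -- effective WAN throughput in megabits per second
    *_runtime_h       -- end-to-end run time on each platform in hours
    """
    transfer_h = (data_gb * 8 * 1024) / wan_mbps / 3600  # GB -> megabits -> hours
    return public_runtime_h + transfer_h < private_runtime_h

# Shipping 500 GB over a 200 Mbit/s link adds roughly 5.7 hours of
# transfer time, wiping out a two-hour compute saving.
```

A model like this only covers bulk transfer; for chatty transactional workloads, per-request round-trip latency across the WAN is the bigger killer.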

This prompts a need for powerful orchestration tools that can provision and manage the movement of workloads across a hybrid, multi-cloud environment.

Tools such as Kubernetes, Electric Cloud, Canonical's Juju and HashiCorp's Nomad offer strong and growing capabilities. DevOps provisioning tools, including Chef, Ansible and SaltStack, also offer more basic capabilities that can be built on.

A full multi-cloud management environment is still some way off, but this should not stop an organisation from putting in place the foundations to embrace multi-cloud.

With highly defined workloads and adequate base-level orchestration, the beginnings can be put in place, with cyclic workloads being moved across different clouds as required, albeit in a semi-automated manner.

A change of charge

The final part of multi-cloud is around contract management. As cloud matures, there will likely be a move to some functions being charged on a per use basis (as AWS Lambda is charged now).

A highly dynamic complex multi-cloud platform cannot be managed manually at the cost level; there will be a requirement for both technical and economic contracts to be negotiated and managed on the fly.
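A first step towards managing costs programmatically is simply comparing charging models per workload – per-use versus a flat reserved fee – at the current month's volume. The prices here are placeholders, not any provider's actual rates.

```python
def cheaper_model(calls_per_month: int, per_use_price: float,
                  reserved_monthly_price: float) -> str:
    """Pick the cheaper charging model for a given monthly call volume."""
    per_use_total = calls_per_month * per_use_price
    return "per-use" if per_use_total < reserved_monthly_price else "reserved"

# A Lambda-style per-use charge wins at low volume; a flat reserved fee
# wins once volume grows past the break-even point.
```

On-the-fly contract management is essentially this comparison run continuously, per workload, against live pricing from each provider – which is where today's tooling falls short.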

This remains an area where maturity is extremely low: even the major software licence management companies, such as Flexera and Snow, are still some way off being able to provide such services. However, expect to see major advances through 2017 and 2018 on this front.

Overall, an organisation’s future is predicated on openness and the capability to move workloads across different technical platforms. There will be a need to manage and operate a mix of infrastructure, platform and software as a service models across private and multiple public clouds.

Failure to create a strategy that enables this will be bad for your organisation’s business.
