
The pros and cons of cloud bursting

The much-vaunted benefits of cloud computing can be overshadowed by portability and interoperability issues


It’s fun to think about the possibilities of bursting and brokering, but countless barriers stand in the way of enterprise customers. Dynamic porting of workloads is an interesting concept, but not yet an agenda item.

Brokering refers to the dynamic relocation of cloud workloads to whichever platform is cheapest at a given moment, whereas cloud bursting looks to optimise the cost and performance of a single application at any given time: an enterprise pays for its average, persistent usage in its own virtual machine (VM) environment and draws on public cloud resources for additional capacity.
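
As a rough illustration of the difference, the sketch below places a workload on owned capacity first and bursts the overflow to whichever public provider is currently cheapest, which is essentially what a broker would automate. The prices, capacities and provider names are hypothetical assumptions for illustration, not figures from this report or any supplier.

```python
# Illustrative sketch only: prices, capacities and provider names are
# hypothetical assumptions, not published supplier figures.

OWNED_CAPACITY_VMS = 100          # persistent capacity you already pay for
PUBLIC_PRICE_PER_VM_HOUR = {      # assumed pay-as-you-go rates
    "provider_a": 0.12,
    "provider_b": 0.10,
}

def place_workload(demand_vms: int) -> dict:
    """Fill owned capacity first (bursting), then send the overflow
    to the cheapest public provider (brokering)."""
    owned = min(demand_vms, OWNED_CAPACITY_VMS)
    overflow = demand_vms - owned
    if overflow == 0:
        return {"owned": owned, "burst": 0, "provider": None, "hourly_cost": 0.0}
    provider, price = min(PUBLIC_PRICE_PER_VM_HOUR.items(), key=lambda kv: kv[1])
    return {"owned": owned, "burst": overflow, "provider": provider,
            "hourly_cost": overflow * price}

print(place_workload(80))    # fits entirely in the owned environment
print(place_workload(150))   # 50 VMs burst to the cheaper public provider
```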

In 2011, the idea of dynamically sourcing and brokering cloud services based on real-time changes in cost and performance was the future vision of cloud’s pay-as-you-go pricing – and it remains a vision.

The first tools are only just emerging and the use cases are limited, especially since costs for public clouds don’t vary enough to drive significant brokerage demand.

Serving up real-time information for basic test scenarios and samples can support your own cloud advisor responsibilities for initial deployments, but not the porting of provisioned workloads. The restrictions that limit one-time migration also apply to the speedy movement of applications. Today, brokering tools help with strategic right-sourcing, but not with portability.

In concept, bursting sounds like the best of all worlds. You pay for your average load with owned servers while using public cloud for usage peaks with pay-as-you-go pricing.


In reality, bursting places a strain on your network, results in significant data-out charges, adds latency to your application, and requires the use of two identical clouds with matching templates.
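
A back-of-the-envelope calculation shows why data-out charges and latency matter. The figures below are hypothetical assumptions for illustration, not published supplier rates or measurements.

```python
# Hypothetical figures for illustration only; real egress rates and
# round-trip times vary by supplier, region and contract.

EGRESS_RATE_PER_GB = 0.09        # assumed $/GB for data leaving the public cloud
STATE_SYNC_GB_PER_HOUR = 50      # data shipped back on-prem while bursting
BURST_HOURS_PER_MONTH = 120

monthly_egress_cost = EGRESS_RATE_PER_GB * STATE_SYNC_GB_PER_HOUR * BURST_HOURS_PER_MONTH
print(f"Estimated monthly data-out cost: ${monthly_egress_cost:,.2f}")

# Latency: every cross-datacentre call adds a WAN round trip.
WAN_ROUND_TRIP_MS = 30           # assumed on-prem <-> public cloud round trip
CALLS_PER_REQUEST = 4            # a chatty app makes several cross-site calls
print(f"Added latency per user request: ~{WAN_ROUND_TRIP_MS * CALLS_PER_REQUEST} ms")
```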

Hosting a private cloud in the same datacentre as a public cloud, with consistent templates and platforms across the two, makes bursting a more feasible option. Many enterprises instead envision a multi-datacentre bursting scenario. Multi-centre bursting is possible, and a handful of enterprise tech leaders have done it, but the majority have a long way to go before they adopt this functionality.

Interoperability: connecting your hybrid future

Behind every cloud story is an interoperability challenge. Every organisation seeks connectivity across cloud and non-cloud environments for authorisation, authentication, usage tracking, cost and performance optimisation, automation and process mapping.

In theory, interoperability gives enterprises a choice over how and what they use for each function, letting them build on existing investments and pick from new best-in-class products.

Although suppliers often pitch heterogeneity value propositions, the list of compatible, current releases is short, especially across infrastructure, hypervisor and cloud platforms, where the depth of interoperability and consistency varies. This makes it difficult to pick and choose your environment components and leverage some of your existing standards or investments.

Unfortunately, the complexity doesn't stop at compatibility. Part of the hybrid challenge is navigating the logistics behind this connectivity, taking data-out costs, latency and long-term maintenance into account. Any integrations you do establish will likely cause you portability pain down the road.

Problem-solving for your hybrid cloud reality often requires a solid understanding of the economics behind different cloud environments, a complete mapping of your applications and their dependencies, a list of the data collected by your cloud providers, and a full view into the existing integration options that your providers offer.

Cloud service providers (CSPs) rely on adapters built by technology leaders for underlying and parallel functions. The more technology adapters published, the greater the customer base each can serve. Like most ecosystems, the relationship between providers is symbiotic. For you, it means less work and sustainable points of integration.

Any custom work a user completes is almost inevitably out of date at the next version release, which can make interoperability and freedom from supplier lock-in time-sensitive. Each supplier has a published list of adapters that are maintained by the supplier itself and by its partners. Request this list from your providers.
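
One common way to limit the blast radius of that churn is to keep any custom work behind a thin internal adapter interface, so a supplier's next release forces changes in one adapter rather than throughout your tooling. The sketch below is an illustrative pattern with hypothetical provider names and stubbed responses, not a supplier-provided adapter.

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """Thin internal contract; provider-specific quirks live in subclasses."""

    @abstractmethod
    def list_instances(self) -> list[dict]:
        ...

class ProviderAAdapter(CloudAdapter):
    # Hypothetical provider; when its API changes at the next release,
    # only this class needs updating.
    def list_instances(self) -> list[dict]:
        raw = self._call_provider_api()          # custom integration point
        return [{"id": r["instanceId"], "state": r["status"]} for r in raw]

    def _call_provider_api(self) -> list[dict]:
        return [{"instanceId": "i-123", "status": "running"}]  # stubbed response

def inventory(adapters: list[CloudAdapter]) -> list[dict]:
    """The rest of the tooling codes against CloudAdapter, not the supplier."""
    return [i for a in adapters for i in a.list_instances()]

print(inventory([ProviderAAdapter()]))
```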

Integration: using APIs for development

Most cloud platforms and management tools are based on Restful application programming interfaces (APIs), allowing their customers and partners to build custom integration points where necessary. As part of the evaluation process, question whether the APIs are published and ask about the completeness of the API catalogue and the notification process for API changes.
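
Those evaluation questions translate directly into how the APIs get called. A minimal sketch follows, assuming a hypothetical endpoint, header names and payload shape rather than any real provider's published catalogue.

```python
# Minimal sketch of a versioned REST call; the endpoint, header names and
# response shape are hypothetical, not any real provider's published API.
import requests

BASE_URL = "https://api.example-cloud.test/v2"   # pin the published API version

def list_volumes(token: str) -> list[dict]:
    resp = requests.get(
        f"{BASE_URL}/volumes",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
        timeout=10,
    )
    resp.raise_for_status()          # fail loudly instead of guessing at errors
    return resp.json()["volumes"]
```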

Developers are no strangers to the importance of APIs to the future of app development: APIs drive new lines of revenue, but also bring an API management challenge.

For IT leaders going to the cloud, APIs provide points of connectivity to enable the hybrid, automated future through application-to-application messaging. Ultimately this means consistency across metrics, use of your preferred interface for management and developer interactions, and a holistic view on system health. Leading enterprises consider fully published Rest-based APIs a prerequisite for their cloud platforms, management tools and developer access points.

However, CSPs and cloud users alike struggle with API maturity and API management practices, making interoperability a challenge today.

For example, suppliers lack mature APIs, which should be complete and consistent, and should come with proper notification of changes. Avid cloud users often criticise the state of API maturity across some of the major cloud providers. Issues include inconsistent results from a single API call, daily changes to APIs without notification, and core functionality missing from a provider's library of exposed APIs. For this reason, CIOs should include explicit details about API maturity in cloud platform requests for proposals (RFPs). But it's not all supplier-side API issues.
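
Until that maturity arrives, many teams code defensively around it. The sketch below is illustrative only: the field names, retry counts and the fetch callable are assumptions, but it shows the general shape of retrying a flaky call, validating that the response still carries the fields the integration depends on, and logging anything that looks like unannounced schema drift.

```python
# Defensive wrapper for an immature provider API; field names, retry counts
# and the fetch_page callable are illustrative assumptions.
import logging
import time

EXPECTED_FIELDS = {"id", "state", "created_at"}
log = logging.getLogger("cloud-api")

def fetch_with_checks(fetch_page, retries: int = 3, backoff_s: float = 2.0) -> list[dict]:
    """fetch_page is any callable returning a list of resource dicts."""
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            items = fetch_page()
        except Exception as exc:                 # transient failure or inconsistent result
            last_error = exc
            time.sleep(backoff_s * attempt)
            continue
        for item in items:
            missing = EXPECTED_FIELDS - item.keys()
            if missing:                          # possible unannounced schema change
                log.warning("API response missing fields %s: %s", missing, item)
        return items
    raise RuntimeError(f"API call failed after {retries} attempts") from last_error
```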

Users sometimes create temporary fixes through custom work. Early cloud deployments were sectioned off as tech islands with few connection points, and the teams behind them typically treat APIs as one-off problem-solving rather than part of a permanent, organisation-wide API design plan that strategically manages specific customisations.

Even when suppliers give ample notification of API changes, adjusting undocumented and unknown API connections is difficult, and the result is poor performance and service quality. Design an API management strategy that includes design principles, standardisation and a management tool that benefits from app-to-app messaging and reacts to real-time changes.

It’s easy to make generalisations about cloud cost savings, supplier lock-in, portability design and cloud security, but these subjects are rarely all good or all bad. As you adopt cloud, understand the nuances of each option. Cloud can save you money in certain circumstances, but can include costs that may be obscured without a full view of your workload.

This is an extract from Forrester’s report, “The state of the cloud: migration, portability and interoperability, Q4 2015”, by Forrester principal analyst Lauren Nelson.
