How to achieve optimum performance in a hybrid cloud

As hybrid cloud systems become increasingly popular, we look at how to implement them and the advantages they can offer

If you are investigating, or already implementing, a cloud computing platform, the chances are that you are looking at a hybrid platform.

A hybrid cloud uses a mix of private and public clouds to provide the overall IT platform – and this brings several issues that ideally should be dealt with before you embark on your cloud journey.

It may be tempting to start from where you are in-house, opting for a VMware-based private cloud, or choosing a public cloud that you feel best matches your existing internal platform, such as Microsoft’s Azure.

However, taking such a view could lead you into a hybrid cloud platform that, at some stage, you find yourself needing to back out of entirely and replace wholesale – and the business will not thank you for that.

Having said that, it is possible to run an IT system spread across dissimilar private and public clouds. However, the capability to take workloads from cloud platform A and move them in real time to cloud platform B may be compromised. It is better to look for a common underlying standard across the platforms to make such workload movement easier. This may be possible if you choose public cloud providers that run the same platform as your chosen private cloud – however, supplier-specific platforms such as VMware’s vCloud have not gained as much traction in the public cloud space as OpenStack has, and Azure has shown distinct weaknesses in its overall availability over the past year.

Therefore, if you decide that your private cloud is going to be one from a supplier specialising in private cloud platforms, you may be constrained in the number of external cloud providers you can use effectively. Flipping the coin, it is probably better to look at the public clouds that you are thinking of using and then choose a private platform that has a high degree of standardisation with them.

This capability to move workloads around is so important because a hybrid cloud needs to be dynamic. The idea is that you start with workloads you are happy to put into the public cloud, keeping other workloads in your private cloud environment – or even in a less virtualised environment. It makes no difference why you choose which workloads stay under your own control – it may be down to technical issues in migrating from physical to virtual, or to more visceral fears over public cloud performance or security. Whether these fears are real or imaginary, they have to be accepted as being valid at that point in time.

Increased maturity in cloud platforms

As you and your business get more used to how the public cloud works, and there is an increasing level of maturity in your private and public cloud platforms, you will probably begin to have greater trust in the public cloud. You need to be able to easily move workloads from your private cloud to the public cloud as and when you feel the need. Just as importantly, you also need the capability to bring workloads back in-house quickly and easily.

This is because it is always possible that your chosen cloud provider could go bust or you might want to cease your relationship with it. Setting up a new relationship with a new cloud provider and then moving workloads from the old to the new platform may not be quick enough – far better to bring them in-house as an interim measure until all the new agreements can be ironed out with due diligence applied.

It may also be that the perceived benefits of moving a workload out to a public cloud don’t quite stack up – performance may not be as expected, for example. Again, rolling back to a known position should be seen as a basic requirement – and standardisation of how the underlying cloud platform deals with workloads makes this far easier.

However, just choosing OpenStack, for example, as your basic platform is not a silver bullet. Tools will be required to move, monitor and manage workloads effectively. Companies such as CA, BMC and Virtustream offer a new generation of systems management tools that can span private and public cloud platforms.
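As a concrete illustration of what such movement involves at the lowest level, here is a minimal sketch using the openstacksdk Python library to snapshot a workload on one OpenStack cloud and re-import it on another. The cloud names, server name, flavour and network are hypothetical, and a real migration also has to deal with volumes, addressing and downtime – this shows the principle, not a production process.

import openstack

# Connect to the private cloud (clouds.yaml entries are assumed)
# and snapshot the running workload.
private = openstack.connect(cloud="private")
server = private.compute.find_server("app-server-01")  # hypothetical name
image = private.compute.create_server_image(
    server, name="app-server-01-snap", wait=True)

# Download the snapshot so it can be imported elsewhere.
with open("app-server-01-snap.qcow2", "wb") as f:
    private.image.download_image(image, output=f)

# Upload the image to the public cloud and boot a server from it.
public = openstack.connect(cloud="public")
new_image = public.image.create_image(
    name="app-server-01-snap",
    filename="app-server-01-snap.qcow2",
    disk_format="qcow2",
    container_format="bare",
)
public.compute.create_server(
    name="app-server-01",
    image_id=new_image.id,
    flavor_id=public.compute.find_flavor("m1.medium").id,  # hypothetical
    networks=[{"uuid": public.network.find_network("default").id}],
)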

It may well be that choosing a cloud broker or aggregator is also an option for you. For example, Cisco’s cloud is being implemented as a mix of its own and partner clouds all working as peers. By choosing a Cisco or partner cloud, it should be possible to move workloads around in the overall cloud ecosystem in an easier manner. Dell, which finally decided not to “own” its own cloud, will act as an aggregator of its partners’ clouds and will be the “one throat to choke” in contracts, ensuring that certain workloads operate within service level agreement terms.

On the downside, the drive towards platform homogeneity has not had a good history. Throughout the 1990s, many organisations tried to zero in on a single platform. The explosion of different versions of Unix, alongside mainframes and proprietary midrange systems, as well as Novell Netware and the emergence of Windows-based systems, had led to a massively complex and expensive IT estate that was failing in its main aim of supporting the business.

However, attempts to homogenise on a Windows or Linux back end did not always deliver the hoped-for benefits. It became obvious that different workloads required different resources at the server, storage and network levels – and that a one-size-fits-all strategy was not up to the task.

It is more than likely that the same will be seen with cloud. A pure scale-out strategy based on commodity x86 architecture may be okay for a large proportion of workloads, but there will always be others that require something different to provide the optimal level of business support.

Specialised cloud environments

This may point to a need for a more specialised cloud environment – such as IBM’s SoftLayer-based systems. IBM’s public cloud provides what it calls “bare metal” resources – you get the benefits of a dynamic resource base as well as the performance of a physical system. You can choose different storage types – SATA- or SSD-based systems – as well as adding in specialised processors, such as graphics processing unit (GPU) servers. You can expect to see Power CPUs added to the mix in the future, as Power is already being used for IBM’s Watson on SoftLayer services. Workloads can be moved from the bare metal configurations to virtual servers and back again as required – and there is OpenStack support built into the system as well, either at a build-it-yourself or at a full platform level.
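For those comfortable with scripting, the SoftLayer Python client gives a flavour of how such an environment is driven. A minimal sketch, assuming the client is installed and credentials are set in the SL_USERNAME/SL_API_KEY environment variables or in ~/.softlayer:

import SoftLayer

client = SoftLayer.create_client_from_env()

# Bare metal servers on the account.
for hw in SoftLayer.HardwareManager(client).list_hardware():
    print("bare metal:", hw["hostname"], hw.get("primaryIpAddress", "n/a"))

# Virtual servers on the account - workloads can move between the two.
for guest in SoftLayer.VSManager(client).list_instances():
    print("virtual:", guest["hostname"], guest.get("primaryIpAddress", "n/a"))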

There is an emerging means of dealing with heterogeneity across cloud platforms, though, and it may end up being the choice for the mainstream organisation. “Containers” package everything that is required for a workload into a virtual image that is more platform independent. The prime supplier in this market is Docker – although Parallels has been using the approach for some time with its Virtuozzo Containers – and there is increasing support for Docker containers among cloud providers.
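The sketch below, a minimal example using the Docker SDK for Python, shows the idea: build an image once and run it unchanged on any host that offers a Docker runtime, whichever cloud that host sits in. The image tag and port mapping are hypothetical.

import docker

client = docker.from_env()  # assumes a local Docker daemon is running

# Build a self-contained image from a Dockerfile in the current
# directory - everything the workload needs travels inside the image.
image, build_logs = client.images.build(path=".", tag="myapp:1.0")

# Run that same image, unchanged, on this or any other Docker host -
# private or public cloud alike.
container = client.containers.run(
    "myapp:1.0", detach=True, ports={"8080/tcp": 8080})
print("running as", container.short_id)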

With the possible proliferation of images, virtual machines and containers across a hybrid cloud, a full means of auditing live systems needs to be in place to ensure that compliance with licence contracts is maintained. Companies such as Flexera Software and Snow Software provide tools to manage licences, and to advise on optimising their use, across a hybrid cloud environment.
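A home-grown starting point might look like the sketch below, which counts live OpenStack instances by the licence recorded against their source image. The “licence” metadata key is a convention invented here for illustration – it is not a feature of the products named above – and the cloud names come from clouds.yaml.

from collections import Counter

import openstack

licence_counts = Counter()
for cloud in ("private", "public"):  # hypothetical clouds.yaml entries
    conn = openstack.connect(cloud=cloud)
    for server in conn.compute.servers():
        # Boot-from-volume servers carry no image reference; skip the lookup.
        image = conn.image.find_image(server.image["id"]) if server.image else None
        # "licence" is an illustrative metadata key, not a standard one.
        key = (image.properties or {}).get("licence", "unknown") if image else "unknown"
        licence_counts[key] += 1

for licence, count in sorted(licence_counts.items()):
    print(f"{licence}: {count} live instance(s)")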

As part of optimising the cloud environment at both a resource and a licence level, it is also necessary to monitor for orphan images – those that are still live, but are supporting no useful workload. Such images should be spun down and decommissioned to reclaim the resources and licences they are consuming.
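How “no useful workload” is detected will depend on your telemetry. The sketch below flags candidates by average CPU utilisation, with the metrics query left as a hypothetical stub to be wired to whatever service (Gnocchi, CloudWatch and so on) you actually run.

import openstack

IDLE_THRESHOLD = 2.0  # average CPU %, below which an image is suspect

def avg_cpu_percent(server_id: str) -> float:
    # Hypothetical stub: replace with a real query to your telemetry
    # service. Returning 0.0 means every server is flagged until the
    # stub is wired up.
    return 0.0

conn = openstack.connect(cloud="private")  # hypothetical clouds.yaml entry
for server in conn.compute.servers():
    if avg_cpu_percent(server.id) < IDLE_THRESHOLD:
        # Flag for human review rather than deleting automatically.
        print(f"possible orphan: {server.name} ({server.id})")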

CA, BMC and Virtustream can manage such an environment, as can IBM, through its on-premise Tivoli systems management portfolio or its SoftLayer cloud-based management tools and application programming interfaces (APIs).

Clive Longbottom is research director at analyst Quocirca.

This was last published in February 2015
