Hybrid cloud: Weighing up what to run in the cloud and on-premise

The hybrid cloud model promises a lot, but enterprises really need to take their time when working out what applications and workloads to run where – or they will find themselves on a hiding to nothing

Within large enterprise IT environments, senior management are increasingly pressing their IT departments to pinpoint applications and services that could be moved into the cloud for performance, cost and resiliency reasons. But how do you go about deciding what should run where, and which workloads might be better off running where they are now?

In some instances, these decisions may lead to enterprises embracing a hybrid cloud setup, whereby they retain some applications and workloads on-premise, while others are handed off to run in the public cloud.

When done right, hybrid cloud enables enterprises to tap into the best of what on-premise and public cloud have to offer in a highly dynamic way.

But the tricky bit is deciding how the parts of the puzzle – servers, services and the like – fit together, and which parts are best suited to life on-premise and which parts in the cloud.

When making these decisions, it may sound obvious, but the first thing to consider is the application’s suitability for running in the cloud – and not just from a cost perspective.

To most people, “a server is a server” – but it doesn’t always work out that way. For example, workloads tied to non-mainstream hardware that only a handful of specialist providers can supply are rarely a good fit for the cloud, and neither is legacy infrastructure.

Examples include AS/400 and other RISC-based systems. Of course, the field of available providers will be further reduced by our next point – data locality.

Data locality is an extremely hot topic and one reason why cloud providers offer multiple geographical locations. Depending on where in the world the business operates, differing laws and complexities govern what data can be stored in-region and out-of-region.

A prime example of this is the General Data Protection Regulation (GDPR) and the importance it places on data locality. Failure to adhere to these rules – and to keep customers informed as required – can result in huge fines.
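
As a minimal illustration of enforcing this kind of rule in code, the sketch below refuses to write data to a region its jurisdiction does not permit. The region names and policy map are hypothetical and not drawn from any particular regulation or provider:

```python
# Minimal sketch of a data-locality guard: records tagged with a
# jurisdiction may only be written to storage regions that the
# (hypothetical) policy map permits. Region names are illustrative.
ALLOWED_REGIONS = {
    "EU": {"eu-west-1", "eu-central-1"},   # EU personal data stays in-region
    "UK": {"eu-west-2"},
    "US": {"us-east-1", "us-west-2"},
}

def check_locality(record_jurisdiction: str, target_region: str) -> None:
    allowed = ALLOWED_REGIONS.get(record_jurisdiction, set())
    if target_region not in allowed:
        raise ValueError(
            f"{record_jurisdiction} data may not be stored in {target_region}; "
            f"permitted regions: {sorted(allowed)}"
        )

check_locality("EU", "eu-west-1")        # passes silently
try:
    check_locality("EU", "us-east-1")    # blocked by policy
except ValueError as err:
    print(err)
```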

Similarly, the application infrastructure should be geo-located as close as possible to the application consumers. This is good practice because it helps the application performance for the end-user.
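
One rough way to sanity-check this is to measure round-trip times from where your users sit to each candidate region. The sketch below times a plain TCP handshake against hypothetical regional endpoints – real providers publish their own endpoint lists, so substitute those:

```python
# Rough latency probe: time a TCP handshake to each candidate region's
# endpoint. Hostnames here are placeholders; substitute your provider's
# published regional endpoints.
import socket
import time

CANDIDATE_ENDPOINTS = {
    "eu-west-1": "endpoint.eu-west-1.example.com",
    "us-east-1": "endpoint.us-east-1.example.com",
}

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

for region, host in CANDIDATE_ENDPOINTS.items():
    try:
        print(f"{region}: {tcp_rtt_ms(host):.1f} ms")
    except OSError as err:
        print(f"{region}: unreachable ({err})")
```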

The fragility and complexity of the application or system being considered for migration is another important consideration, and one that a lot of people either forget about or hope that moving to the cloud will remediate.

A number of issues can affect the decision to move such systems. The more complex the system, the more resources it consumes and the more fragile it tends to be.

Another factor in this discussion is the age of the system. We are not just talking about servers that are a few years old, but applications that are running extremely old legacy code.

From personal experience, I have seen applications written in a long-dead language or on a defunct application platform shoehorned onto relatively modern operating systems.

The long and short of it is that if such systems are working as they should, leave them alone.

Moving such systems to the cloud can mean making changes to what could be called antique applications. The nasties that surface can include licensing agreements tied to specific hardware, or agreements whose terms differ when running on cloud-based infrastructure compared with on-premise deployments.

Problems can also occur as a result of hard-coded environmental configurations, such as hostnames and IP-related information, or applications that require some form of direct hardware-level interaction.
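
A common first remediation step for hard-coded values is to pull them out into configuration resolved at runtime. A minimal before-and-after sketch, where the variable names are illustrative:

```python
# Before: environment details baked into the code -- breaks the moment
# the application lands on different infrastructure.
# DB_HOST = "10.0.4.17"

# After: resolve environment-specific values at runtime, with an explicit
# failure if the deployment forgot to provide them. The variable name
# APP_DB_HOST is illustrative.
import os

DB_HOST = os.environ.get("APP_DB_HOST")
if DB_HOST is None:
    raise RuntimeError("APP_DB_HOST is not set; refusing to guess an address")
```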

In short, such applications are best left where they are – modern applications are typically a much better fit for the cloud environment.

The best remediation path (and potential path to the cloud) is to re-architect the system. This may sound onerous, but bear in mind that it provides a clean-ish sheet that allows the designers and developers to architect around the dynamic nature of the cloud and the cost efficiencies that can come with it.

Hand in hand with application fragility goes the need to untangle the interdependencies that exist within the infrastructure before migrating to the cloud.

The infrastructure often consists of many interconnected services. Only an administrator who does not value their job or sanity would embark on such a move without checking (and documenting) all the interactions between the application and its supporting services.

And untangling this web can be quite an undertaking. If there are too many back-and-forth transactions between the on-premise site and the cloud, it can severely degrade application performance across the board.
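
Even a crude inventory of live network connections per host, captured over time, gives you something to check the architecture diagrams against. A sketch using the third-party psutil library (an assumption – any connection-listing tool would serve the same purpose):

```python
# Crude dependency snapshot: list established TCP connections and the
# processes that own them, as a starting point for documenting what
# actually talks to what. Requires the third-party psutil package.
import psutil

for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
        try:
            proc = psutil.Process(conn.pid).name() if conn.pid else "?"
        except psutil.NoSuchProcess:
            proc = "?"
        print(f"{proc}: {conn.laddr.ip}:{conn.laddr.port} -> "
              f"{conn.raddr.ip}:{conn.raddr.port}")
```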

The security aspect is frequently overlooked and often underfunded. Moving into the cloud means moving into a shared infrastructure. Although isolation is usually provided by virtue of the underlying hypervisor, there have been documented exploits that allow hypervisor escapes.

On a more practical level, depending on your agreement with your customers, you may have a legal requirement to use infrastructure as a service (IaaS) rather than shared platforms, such as shared databases.

Counting the costs of cloud migration

It is vital that any cloud migration project is fully and accurately costed before it begins, and that the calculations cover the lifetime of the application.
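
The arithmetic itself is simple; the discipline is in covering the whole lifetime rather than the first budget year. A toy comparison follows – every figure is a placeholder to be replaced with your own quotes, not a benchmark:

```python
# Toy lifetime cost comparison. All figures are placeholders -- plug in
# your own quotes. The point is to compare over the application's
# expected lifetime, not a single budget year.
LIFETIME_YEARS = 5

on_prem_capex = 120_000          # hardware refresh, up front
on_prem_opex_per_year = 30_000   # power, space, support contracts
cloud_migration_cost = 40_000    # one-off re-platforming effort
cloud_opex_per_year = 55_000     # instances, storage, egress, backup

on_prem_total = on_prem_capex + on_prem_opex_per_year * LIFETIME_YEARS
cloud_total = cloud_migration_cost + cloud_opex_per_year * LIFETIME_YEARS

print(f"On-premise over {LIFETIME_YEARS} years: £{on_prem_total:,}")
print(f"Cloud over {LIFETIME_YEARS} years:      £{cloud_total:,}")
```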

It is also important to develop (or adopt) a framework that provides a consistent and systematic approach to evaluating whether an application earmarked for the cloud is actually suitable for the move.

This framework should weigh not only risks against rewards, but also other matters such as downtime and migration costs, availability windows, and data and system integration testing.
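
A framework does not have to be elaborate to be consistent. One hedged sketch: score each application against weighted criteria, where the criteria and weights below are illustrative starting points rather than an established methodology:

```python
# Minimal suitability scorecard: each criterion is scored 1 (poor fit)
# to 5 (good fit) per application, then weighted. Criteria and weights
# are illustrative -- adapt them to your own estate.
WEIGHTS = {
    "data_locality_ok": 0.25,
    "low_fragility": 0.20,
    "few_dependencies": 0.20,
    "licensing_portable": 0.15,
    "no_special_hardware": 0.20,
}

def suitability(scores: dict[str, int]) -> float:
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

legacy_app = {"data_locality_ok": 4, "low_fragility": 1,
              "few_dependencies": 2, "licensing_portable": 1,
              "no_special_hardware": 2}
print(f"Suitability: {suitability(legacy_app):.2f} / 5")
```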

Monitoring and reviewing performance is also important. It is vital to ensure that post-migration performance is similar or better, backed up with before-and-after statistics.

Don’t rely on people’s perceptions. There are plenty of tools that can be used to address these monitoring requirements.
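
Whatever tooling you adopt, the output you want is simple: comparable before-and-after numbers. A bare-bones sketch using only the standard library, probing a placeholder health endpoint:

```python
# Bare-bones before/after latency sampling: hit an endpoint N times and
# report percentiles, so pre- and post-migration runs can be compared
# like for like. The URL is a placeholder.
import statistics
import time
import urllib.request

URL = "https://app.example.com/health"   # placeholder endpoint
samples = []

for _ in range(50):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=5) as resp:
        resp.read()
    samples.append((time.perf_counter() - start) * 1000)

samples.sort()
print(f"median: {statistics.median(samples):.1f} ms")
print(f"p95:    {samples[int(len(samples) * 0.95) - 1]:.1f} ms")
```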

One often-missed consequence of moving applications is that when an application, or part of one, is migrated to the cloud, the backup and disaster recovery arrangements do not stay the same.

Disaster recovery in the cloud is an entirely different prospect because of the way cloud infrastructure works. Such a system would have to be re-architected to fit the changed application profile.

Going into the cloud also means giving up a certain level of control. No matter which cloud provider is chosen, there will be downtime where the cloud provider needs to update its hypervisor. The cloud service consumer gets no say in when that happens.

Therefore, keeping the application maintainable may well require a certain amount of re-engineering to cope with such interruptions, and the skilled experience needed is not cheap.

Moving forward, many companies have adopted a cloud-first strategy, whereby IT teams are asked to prioritise running applications in the cloud, or all newly purchased technology must feature a cloud component, unless there are proven reasons why such a solution is not a good fit.

The idea is that, in time, the volume of cloud-based infrastructure will increase as old legacy systems are retired or replaced, which is a sound strategy for anyone looking to take a slow, but steady, route to the cloud.
