Legacy systems, security requirements and compliance issues are some of the numerous factors that prevent organisations from going “all in” on public cloud. That’s even before the challenge of managing multiple public cloud environments is taken into account.
The result is that companies often end up using hybrid cloud infrastructure, which mixes public cloud with the non-cloud-ready aspects of existing IT infrastructure. This helps provide greater flexibility, control and scalability.
While hybrid cloud might not have been in the original plan, most organisations come to prefer this approach once they have embraced it.
When done properly, hybrid infrastructure represents the best path to efficiency, cost savings and performance.
Hybrid solutions should mean that workloads will automatically move to the most optimised and cost-effective environment, based on factors such as performance needs, security, user demand and traffic.
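This kind of placement decision can be sketched as a simple weighted scoring exercise. The following is a minimal, hypothetical Python sketch – the environment names, metrics and weights are illustrative assumptions, not a reference to any specific orchestration product.

```python
# Hypothetical sketch of cost/performance-driven workload placement.
# All environment names, metrics and weights below are illustrative assumptions.

WORKLOAD = {"latency_sensitive": True, "data_sensitivity": "high", "peak_traffic": 0.8}

ENVIRONMENTS = {
    "public-cloud": {"cost": 0.4, "burst_capacity": 0.9, "data_control": 0.3},
    "private-cloud": {"cost": 0.6, "burst_capacity": 0.5, "data_control": 0.8},
    "on-prem": {"cost": 0.7, "burst_capacity": 0.2, "data_control": 1.0},
}

def place(workload, environments):
    """Return the environment with the best weighted score for this workload."""
    # Weight data control heavily for sensitive workloads, and burst capacity
    # in proportion to expected peak traffic; lower cost is better, so subtract it.
    w_control = 2.0 if workload["data_sensitivity"] == "high" else 0.5
    w_burst = 2.0 * workload["peak_traffic"]

    def score(env):
        return (w_control * env["data_control"]
                + w_burst * env["burst_capacity"]
                - env["cost"])

    return max(environments, key=lambda name: score(environments[name]))

print(place(WORKLOAD, ENVIRONMENTS))  # → private-cloud
```

For this example workload, the high data-sensitivity weighting pulls placement towards the private cloud, despite the public cloud's lower cost and better burst capacity – exactly the kind of trade-off the factors above describe.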
Clearing technical debt
For many organisations, however, a key challenge when moving towards an “owned” architecture model – one that they can take full ownership and control of – is that they simply don’t understand what is running their legacy systems.
For example, those with mainframes that were installed 10 or 20 years earlier, and were coded using languages such as Cobol, may struggle to find developers with the relevant skills.
In some cases, this may even erode the health of the very systems these organisations rely on, creating security issues.
This practice of relying on outdated code and systems – usually due to the perceived cost and hassle of updating them – is commonly known as technical debt.
Many organisations find it easier to stick with existing vendor contracts to “keep the lights on”, rather than face the prospect of redesigning their infrastructure to be more agile.
Small cloud bites
Often, the key to unpicking this complex technical debt and moving towards an effective and sustainable hybrid cloud infrastructure is to take a planned and staged approach.
Just as in the days of three-tier architecture – where IT professionals would split applications into a top-tier database, middle-tier application logic and front-end web systems – the same approach can be applied to building a container-based system that spans private and multiple public clouds, breaking applications up into smaller pieces of functionality.
By splitting the application in this way, it is possible to gain greater control over the architecture and to make technical decisions based on the needs of the business, rather than the limitations of the system.
For example, the IT department may decide that it makes the most sense to put the web-based part of the application on the public cloud, and the application logic in the private cloud, while hosting databases securely in its own datacentre.
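The split described above can be expressed as a simple placement map. The tier names and environment labels in this Python sketch are assumptions chosen to mirror the example, not part of any real deployment tooling.

```python
# Illustrative placement map for the three-tier split described above.
# Tier names and environment labels are assumptions for this example.
TIER_PLACEMENT = {
    "web-frontend": "public-cloud",        # elastic, user-facing traffic
    "application-logic": "private-cloud",  # controlled business rules
    "database": "on-prem-datacentre",      # sensitive data stays in-house
}

def environment_for(tier):
    """Look up where a given tier should run, failing loudly for unknown tiers."""
    try:
        return TIER_PLACEMENT[tier]
    except KeyError:
        raise ValueError(f"No placement defined for tier: {tier}")

print(environment_for("database"))  # → on-prem-datacentre
```

Because the map is just data, a tier can be repointed at a different environment as needs change – which is the flexibility the hybrid approach is meant to deliver.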
As developers look to drive agile integration to solve business problems, this hybrid approach allows them to better manage risk and keep costs low, while using the capabilities of cloud suppliers in the most effective way.
Furthermore, as any of this can be changed at any time depending on need, it will also have the benefit of avoiding future technical debt as small changes and updates to the applications will be carried out frequently.
This is in stark contrast to the old waterfall approach, where applications were only updated annually at most, increasing the likelihood of knowledge gaps and errors within an organisation.
Hybrid or public?
For some organisations considering overhauling their systems, the question of why they should bother with a hybrid environment at all remains pertinent.
Most have learnt the lessons of the early days of cloud computing, such as vendor lock-in and problems with suppliers going bust or introducing sudden price increases. Many view these as outdated concerns in the modern environment.
It’s now far easier to rely purely on the public cloud, spreading risk across numerous providers. However, this will only really work for smaller organisations or those with simple, low-risk data needs.
Adrian Keward, Red Hat
With any large-scale environment, gaining an overview of resources and how they are being used becomes increasingly difficult as more platforms are introduced – in many cases, almost impossible without a manager of managers (MoM) platform.
Without any level of ownership over the cloud architecture, and with no real-time overview of how and where capacity is being used, it becomes difficult to manage costs and react quickly to changing business needs.
Another thing to consider – particularly in today’s public clouds – is openness. Just because a cloud provider has built its public cloud platform on open standards and technologies, that doesn’t mean it delivers on the promise of a truly open platform.
It is crucial that the cloud enables portability, migration and heterogeneity, and doesn’t impose eye-watering fees to move workloads across the hybrid continuum. This is the “open” hybrid cloud approach.
No one size fits all
Ultimately, on any journey to develop a hybrid cloud infrastructure, it is vital to remember that every cloud is unique. While it is important to understand the basic principles of building an interconnected and agile cloud environment, it is equally important to understand that private clouds are one-of-a-kind and there are thousands of public cloud providers.
There’s no one-size-fits-all solution to building a hybrid cloud, and the way an organisation builds theirs will be as unique as a fingerprint. Small steps led by business needs, the ability to pivot quickly and a strong partner will be crucial to navigating this complex landscape.
Read more about hybrid cloud
- The hybrid cloud model promises a lot, but enterprises really need to take their time when working out what applications and workloads to run where – or they will find themselves on a hiding to nothing.
- VMware CEO Pat Gelsinger points out three laws that will make the case for the hybrid cloud future: economics, physics and data sovereignty.
- Two of the big three public cloud providers – Amazon Web Services and Microsoft Azure – offer gateway on-ramps that can speed the deployment of hybrid cloud operations for SME and remote office locations.