More than a decade has passed since off-premise services first came to market, and the accompanying ecosystem has matured to a point where many enterprises are now adopting a cloud-first strategy.
This maturity has been accompanied by a shift in emphasis away from the datacentre as the central focus of enterprise computing, with implications for the way IT is managed as well as provisioned.
In the early years of cloud, enterprise customers were wary of this new delivery method for IT services. Development and backup were among the first use cases for cloud. But as more and more capabilities came online from pioneers such as Amazon Web Services (AWS), cloud platforms started to gradually win organisations’ trust.
Over time, new applications and services emerged that were cloud-hosted from the outset, especially large-scale deployments or workloads needing the flexibility to scale up or down relatively quickly.
Today, cloud services have become an accepted part of an organisation’s IT portfolio, with analyst Gartner predicting that, by 2021, more than 75% of mid-size and large organisations will have adopted a multicloud or hybrid IT strategy, where resources are spread across one or more public clouds as well as a company’s own datacentre.
There are even some in the industry who believe public cloud will eventually replace on-premise infrastructure entirely, although this is not likely to happen in the near future.
Clarifying the meaning of cloud-first
Adopting a cloud-first strategy does not mean that only public cloud platforms should be considered when commissioning new IT services or resources. Rather, as with the UK government’s cloud-first policy, it means organisations should evaluate cloud offerings for suitability before considering other options.
But as the Government Digital Service itself pointed out, different organisations face different sets of challenges when it comes to cloud, and so there is no one-size-fits-all answer. This means on-premise infrastructure may still turn out to be the best choice for deploying an application, depending on a number of factors, such as latency or an organisation’s attitude to control over its applications and data.
This new reality seems to be leading to a subtle shift in the way that many organisations view IT infrastructure. Some are coming to regard cloud and datacentre resources as largely the same and possibly even interchangeable on a certain level – although things get messier if you drill down into the technical details.
“Enterprises would love to regard their overall platform as a single resource pool, being able to dial things in and out as they please,” says independent analyst Clive Longbottom.
“Public cloud is moving in that direction, and this allows enterprises to take things like systems monitoring and orchestration tools as SaaS [software as a service] without having to bother about how many resources such ‘dead’ applications [those that add no value to the business] take up.”
This has several implications, especially in the management layer, where new tools and approaches may be required to meet enterprise customers’ expectations. In fact, there has already been a trend over the past few years in which the management plane has been migrating from the datacentre to the cloud.
Cloud providers and their shifting strategies
One effect of this shift in thinking can be seen in a change in strategy at many of the large public cloud providers. Whereas AWS, in particular, used to assert that all IT functions would, before long, migrate to the public cloud, it and the other cloud giants seem to have accepted that a hybrid environment will be the norm for the foreseeable future, especially where large corporates are concerned. Consequently, they have started to deliver products and services tailored for this reality.
Amazon’s recent introduction of AWS Outposts is one example. It allows organisations to deploy a rack of AWS infrastructure into their own datacentre or colocation facility, providing the same services, tools and application programming interfaces (APIs) as the full AWS cloud while letting companies store and process customer data on-premise.
Each Outpost links to the nearest AWS Region, so that the hardware and AWS services are managed, monitored and updated by AWS just as if it were an integral part of the cloud.
Amazon is not the first company to do this – Oracle has had a comparable offering, known as Cloud at Customer, for several years, while Microsoft’s Azure Stack provides a similar ability to run Azure services on-premise, but allows customers more flexibility in the hardware configuration that it runs on.
What solutions like these show is that the big cloud companies are looking for ways to capture those datacentre workloads that organisations are reluctant to move off-premise, perhaps for reasons of data residency or because those workloads are considered mission-critical. Once deployed, of course, they serve as a springboard to the provider’s public cloud platform for future applications and services, which are increasingly likely to be cloud-native.
Google’s Anthos, unveiled last year, takes a slightly different tack. It provides a platform based on software containers and the Google Kubernetes Engine (GKE) that enables organisations to develop workloads that can be deployed equally well on-premise or on the Google Cloud. It can also be deployed on other public clouds, making it a true multicloud platform.
A notable trick up Google’s sleeve is the Anthos Migrate tool, which is claimed to be able to migrate existing enterprise workloads from virtual machines (VMs) into containers running in GKE. This appears to be aimed at providing enterprises with a pathway to ditch VMs in favour of a fully cloud-native application infrastructure.
But, as has been pointed out many times in the past, much of the value in cloud services lies in the management layer, and Microsoft in particular is showing the way here.
At its Ignite conference in late 2019, the firm lifted the lid on Azure Arc, a set of technologies that extends the management capabilities found in its Azure cloud out to resources found on-premise or in other public clouds.
Specifically, Azure Arc supports Windows and Linux servers, as well as Kubernetes clusters, wherever these might be running – even if that is on rival cloud platforms such as AWS or Google Cloud. It can also manage Azure services running on Azure Stack hardware – naturally.
Essentially, what Microsoft has done is to make its Azure online customer portal and the Azure Resource Manager (ARM) into a single point of control capable of managing a significant swathe of the IT resources that many organisations will have in their portfolio.
This is the kind of capability that will draw interest from many enterprise IT managers, but a downside is that the organisation has to be a Microsoft customer with an Azure account. A great many organisations already run Windows-based infrastructure and have an enterprise agreement, but some may baulk at tying their future cloud strategy to Microsoft. Some may instead seek third-party solutions to provide cross-cloud management features, such as the Accenture Cloud Platform (ACP).
Cloud skies ahead
Some industry observers still believe that enterprise datacentres will, ultimately, be displaced by cloud. The reason for this is the sheer cost of buying and maintaining the physical infrastructure required for a modern datacentre.
“The self-owned datacentre is becoming an albatross around the enterprise’s neck,” says Longbottom. “Are the physical needs going to grow, shrink or stay the same? What should/must stay in there, and what should/could be moved to public cloud? What about cooling, power distribution, high availability, redundancy, and so on? All of these become significantly less of an issue when using cloud computing.”
However, there have, equally, been instances of organisations repatriating workloads that had previously been farmed out to the cloud, and cost is often cited as the reason. Cloud providers are frequently criticised for a lack of pricing transparency that can make the long-term costs of a cloud project hard to calculate.
This can mean that if an organisation simply lifts and shifts a complex enterprise workload into the public cloud without refactoring it, it could be hit with unexpected costs – for data transfers, for example, or for something as simple as servers racking up metered charges by running 24x7 instead of only when needed.
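The 24x7 metering trap is easy to quantify. The sketch below compares an always-on instance with one scheduled for business hours only; the hourly rate is purely illustrative, not any provider’s real pricing:

```python
# Compare the monthly metered cost of a VM left running 24x7 with one
# scheduled to run only during business hours.
# The rate below is a hypothetical figure, not a real provider price.

HOURLY_RATE = 0.20      # illustrative $/hour for a mid-size instance
HOURS_PER_MONTH = 730   # average hours in a month

def monthly_cost(hours_running: float, rate: float = HOURLY_RATE) -> float:
    """Metered cost for the hours an instance actually runs."""
    return hours_running * rate

always_on = monthly_cost(HOURS_PER_MONTH)   # runs 24x7
office_hours = monthly_cost(10 * 22)        # 10 hours/day, 22 weekdays

print(f"Always on:    ${always_on:.2f}")
print(f"Office hours: ${office_hours:.2f}")
print(f"Saving:       ${always_on - office_hours:.2f}")
```

Even at this toy rate, leaving the instance running continuously more than triples the bill – the kind of gap that only shows up once the first invoice arrives.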
Such issues can be avoided by redesigning applications using cloud-native technologies, and Longbottom says that serverless computing, in particular, could lead to public cloud platforms coming to dominate IT in future.
Serverless sees developers use discrete functions and services to build a workflow, with these running only when triggered by some event, such as a new message hitting a message queuing service.
“As next-gen serverless computing comes through – throw a workload at a cloud as a set of inputs, and it will spew out the desired outputs back at you – such resource-centric compute will become not only possible, but more of the norm,” says Longbottom. “The private datacentre cannot do this: it does not have the scale of shared workloads that a large public cloud has to make it work effectively – and unused resources still have to be paid for.”
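The trigger-driven model described above can be sketched as a single stateless function. The handler below follows the common event/handler shape used by serverless platforms; the `Records`/`body` event format and all names are illustrative assumptions, not any one provider’s API:

```python
import json

def handle_queue_event(event: dict) -> dict:
    """Stateless function invoked only when messages hit a queue.

    The platform passes a batch of queue records; the function runs,
    returns, and consumes no billed resources until the next trigger.
    The 'Records'/'body' event shape is an illustrative assumption.
    """
    processed = []
    for record in event.get("Records", []):
        order = json.loads(record["body"])  # each message carries one order
        processed.append(order["order_id"])
    # Metered billing: you pay for this invocation's duration only.
    return {"processed": processed, "count": len(processed)}

# Example invocation, shaped as a platform might deliver it:
sample_event = {
    "Records": [
        {"body": json.dumps({"order_id": "A-100"})},
        {"body": json.dumps({"order_id": "A-101"})},
    ]
}
result = handle_queue_event(sample_event)
print(result)  # {'processed': ['A-100', 'A-101'], 'count': 2}
```

Because the function holds no state between invocations, the provider can schedule it anywhere across its shared resource pool – which is precisely the scale advantage Longbottom argues a private datacentre cannot match.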
For now, though, the scenario of multiple public clouds plus the enterprise datacentre remains the most common model for most businesses. The challenge is to develop a control plane capable of managing all of this effectively.