The pace of digital transformation has notably picked up in the past decade, as enterprises invest in technology to retain their competitive edge and avoid having their market share eroded by disruptive newcomers.
Out-innovating competitors in this way often requires a full-scale modernisation of the IT infrastructure stack underpinning an organisation’s operations, so it is better positioned to respond to the changing needs of its customers.
For many enterprises, this process of modernisation has seen them look to invest in making their private, virtualised datacentres and server rooms more agile, responsive and easier to manage by investing in software-defined networking (SDN) technologies and automation tools.
Such investments can help enterprises make better and more efficient use of their existing compute capacity, but that alone may not be enough to stave off competitive threats, prompting some IT leaders to weigh up a move to the public cloud.
The benefits of such an approach are well-documented and proven, with the public cloud offering enterprises ready access to an almost infinite supply of cloud-based compute resources that can be set to auto-scale in line with peaks and troughs in demand, meaning enterprises only pay for what they use.
In some instances, enterprise CIOs may have one eye on potentially moving their entire IT infrastructure stack to the public cloud, leading them to ponder whether it might be worth their while going “all-in” on one supplier’s cloud or splitting their workloads and applications across different suppliers’ environments.
This may be because certain workloads are more cost-effective to run in the Amazon public cloud, for example, whereas there might be others that make better sense from a performance perspective to run in Microsoft Azure.
A wholesale move of an organisation’s entire IT infrastructure to the public cloud may not be an option for some CIOs, as they may need to retain some workloads and applications on-premise for the foreseeable future and operate a hybrid IT setup.
The creeping complexity of infrastructure sprawl
Whatever course of action CIOs decide to take, the fact of the matter is that IT administrators now face the unenviable task of keeping tabs on workloads and applications scattered across multiple on-premise and cloud environments.
At the same time, the make-up of the workloads that IT administrators are managing has also become more complex, as enterprises ready their application estates for the move off-premise by embracing the principles of cloud-native design in their software architectures, meaning they must get to grips with containers, microservices and serverless computing.
In enterprises that are running a hybrid IT setup, IT administrators will face the additional challenge of balancing the management requirements of these newer, cloud-based workloads with their legacy, monolithic, on-premise applications.
In a research note, published in late 2019, Ross Winser, senior director analyst at market watcher Gartner, stated that the increasing prevalence of hybrid IT environments meant IT administrators would have to rethink their approach to workload management.
“Traditional infrastructure and operations [I&O] tools are quickly reaching their limits when tasked with such a wide array of data-processing venues,” said Winser, in the research note.
“Today’s I&O professionals must be willing to move beyond legacy practices and mindsets to embrace trends that will profoundly impact I&O teams and the capabilities they provide their business.”
Expanding on this theme, Tim Beerman, chief technology officer (CTO) at hybrid cloud-focused managed service provider Ensono, says the approaches and technologies IT admins have traditionally used to manage their infrastructure stacks are simply no longer up to the job.
“Previously, cloud application management has been a simple infrastructure provision. This limited approach cannot manage complex resources in real time nor ensure optimal, dynamic application performance,” says Beerman.
“You need something that integrates people, processes and technology across multiple platforms – combining it into one useful space, for everyone’s benefit. This real-time data and centralised system can often give businesses that competitive edge, allowing them to be on top of their cloud-based, business-critical IT systems at all times.”
Mark Lee, director at UK-based converged ICT services supplier GCI, also makes the point that “any cloud management platform or toolset needs to efficiently manage all environments, including on-premise systems as well as private and public cloud services”.
He continues: “If you map out that journey in a bit more detail, it starts with having the ability for engineering teams to be able to quickly connect hypervisors, networks, storage and other technologies to create private clouds.
“They also need to manage public cloud accounts, so it’s important to look for solutions that can integrate with existing technologies without the need for lots of scripting or plug-ins,” adds Lee.
And that is not all. Aside from allowing admins to manage multiple workloads across different environments, any management platform that is deployed also needs to offer admins visibility into the inner workings of their infrastructure, from a cost and data analytics perspective, says Krishna Subramanian, chief operating officer (COO) at data storage management software provider Komprise.
“Less than 20% of enterprises take advantage of cost savings options in the cloud for two reasons: it’s very hard to get visibility into cloud costs; and even if you invest in a separate cloud cost optimisation tool, it mainly gives you reports and it is very hard to manually act on the reports,” she says.
“To effectively manage cloud costs, you need actionable analytics – a solution that provides visibility and automates the management based on the visibility.”
Furthermore, any cost analysis these platforms provide must go into granular detail because cloud pricing is typically more complicated and multi-layered than its “pay-as-you-go” reputation suggests.
“For example, when looking at cloud storage costs, you have to not only look at the storage costs but also at API [application programming interface] costs, egress costs, retrieval costs, network costs,” says Subramanian.
“The biggest mistake organisations make is in taking just one dimension – typically the storage costs – and not factoring in the rest. By providing visibility into your cloud spend followed up by actionable automation, you can help identify the problem and solve it.
“For instance, storage is the second-highest spend in the cloud. Analytics-driven data management software cuts 50% of the storage spend by showing you where the costs are and by moving data to cheaper storage classes intelligently based on your policies,” she adds.
And that process needs to be automated and led by artificial intelligence (AI), for example, because the scale of the task at hand would be almost impossible to do manually, says Subramanian. “Manually moving data across cloud storage classes is virtually impossible because you are talking about tens of millions of objects across thousands of buckets and hundreds of accounts.”
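The kind of policy Subramanian describes can be sketched in a few lines. The price figures and `CostModel` fields below are hypothetical stand-ins, not any provider's actual tariff; the point is that a tiering decision must weigh retrieval and API charges alongside the per-GB storage rate:

```python
# Hypothetical sketch of policy-driven tiering: move an object to a "cold"
# storage class only if it is genuinely cheaper once retrieval and API
# charges are factored in, not just the headline per-GB storage price.
# All numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CostModel:
    storage_per_gb: float    # monthly per-GB storage price
    retrieval_per_gb: float  # per-GB charge to read data back
    api_per_request: float   # per-request charge

HOT = CostModel(storage_per_gb=0.023, retrieval_per_gb=0.0, api_per_request=0.0000004)
COLD = CostModel(storage_per_gb=0.004, retrieval_per_gb=0.02, api_per_request=0.000001)

def monthly_cost(model: CostModel, size_gb: float, reads_per_month: int) -> float:
    """Total monthly cost: storage plus retrieval plus API calls."""
    return (model.storage_per_gb * size_gb
            + model.retrieval_per_gb * size_gb * reads_per_month
            + model.api_per_request * reads_per_month)

def should_tier_down(size_gb: float, reads_per_month: int) -> bool:
    """Policy: tier down only if the cold class is actually cheaper overall."""
    return monthly_cost(COLD, size_gb, reads_per_month) < monthly_cost(HOT, size_gb, reads_per_month)

print(should_tier_down(100, reads_per_month=0))   # archival data → True
print(should_tier_down(100, reads_per_month=10))  # active data → False
```

Applying a rule like this across the tens of millions of objects Subramanian mentions is precisely the part that has to be automated.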
Introducing intelligent workload management
As alluded to above, workload management is a job that has grown in complexity and scale in recent years, to the point that IT administrators need to draw on automation, artificial intelligence and machine learning tools to extend their reach.
The concept is known as intelligent workload management, which essentially boils down to IT admins relying on artificial intelligence and machine learning tools that can pinpoint computing resources in on-premise and cloud-based environments that are not working as efficiently as they should, and self-regulate their use.
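At its simplest, the kind of rule such a tool automates might look like the sketch below: flag instances whose observed utilisation suggests they are over-provisioned. The sample metrics and the 30% threshold are hypothetical; real products apply machine learning to far richer telemetry than a handful of CPU samples:

```python
# Minimal sketch of an intelligent workload management rule: flag compute
# instances whose peak CPU never exceeded a threshold, making them candidates
# for downsizing. Threshold and metrics are illustrative assumptions.

def flag_underutilised(cpu_samples: dict[str, list[float]],
                       peak_threshold: float = 30.0) -> list[str]:
    """Return instances whose peak CPU (%) stayed below the threshold."""
    return [name for name, samples in cpu_samples.items()
            if samples and max(samples) < peak_threshold]

metrics = {
    "web-01": [55.0, 72.0, 61.0],   # busy: leave alone
    "batch-02": [12.0, 18.0, 9.0],  # idle: candidate for downsizing
    "db-01": [28.0, 25.0, 22.0],    # borderline: also flagged at 30%
}

print(flag_underutilised(metrics))  # → ['batch-02', 'db-01']
```

A production tool would go further, recommending or automatically applying a specific resize rather than merely flagging the instance.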
It is far from a new idea, particularly in the datacentre infrastructure management space which has seen a slew of AI-powered technologies rolled out to market over the past decade to assist operators with regulating a whole host of metrics within their facilities.
According to Jad Jebara, CEO of datacentre infrastructure management (DCIM) software provider Hyperview, demand for this category of intelligent workload management software is booming at present, due in no small part to the Covid-19 coronavirus pandemic.
Part of this is because social distancing restrictions around the world mean fewer people are being allowed into datacentres, so operators need to find new ways to manage their facilities remotely.
“Cloud-based DCIM tools, in particular, are being sought as they provide I&O teams based in different geographies the ability to manage and monitor remotely,” says Jebara.
“Power, rack, floor and compute (physical and logical) capacity all need to be closely monitored to maximise cost and performance efficiencies. Furthermore, automation and machine learning that is at the heart of cloud-based DCIM software helps to significantly reduce the instances of human error that are too often the root cause of downtime and outages,” he adds.
“I&O teams want peace of mind in managing the cost and risk of their IT infrastructure, and confidence in key data to make sound decisions.”
Digging into the datacentre with DCIM
One high-profile example of intelligent management tools being used to great effect in datacentre environments is the experimental work Google has been doing in using machine learning to optimise the cooling of its datacentres, and – in turn – cut its energy usage.
From an IT stack perspective, the notion of using intelligent workload management tools to help IT administrators make better use of their networking, compute and storage resources across different IT environments has been talked about for more than a decade.
Before its acquisition by Micro Focus in 2014, software giant Novell set out plans to restructure its business around the provision of intelligent workload management tools back in 2009.
At the time, it said the decision to pivot the business in this way was to serve the needs of enterprises wanting to bolster the security and portability of workloads across physical, virtual and cloud-based environments.
In 2014, intelligent workload management also became a core component of VMware’s vRealize Operations software platform, which IT admins use to manage their cloud and on-premise infrastructure operations.
At the time of its release, its inclusion was geared towards helping IT admins ensure the applications and workloads running within their virtualised datacentres were doing so in a cost- and resource-efficient way.
In subsequent releases, the intelligent workload management capabilities of vRealize Operations have broadened out considerably, as the range of environments that IT admins are now expected to look after has grown.
Whereas the remit of IT admins may previously have focused predominantly on managing in-house, on-premise datacentre resources, the reality in 2020 is that the range of environments they are responsible for is far broader, to the point that expecting them to do this job, and do it well, without a little AI-powered assistance seems almost unreasonable.
Accordingly, VMware now describes vRealize Operations as AI-based support for IT admins who need help managing applications and workloads across private, hybrid and multicloud environments.
The intelligent workload management space is still in its relative infancy, but a few other players have risen to prominence in recent years, including Turbonomic (previously known as VMTurbo) and Densify.
Turbonomic’s flagship offering is an AI-powered, self-managed platform designed to help IT admins optimise their workloads, while providing feedback on how various IT resources – including compute, networking, storage, containers and virtual machines – should be tweaked to get the most out of them. Actioning these changes can either be performed directly or automated, according to the tech team’s preferences.
The Densify platform, meanwhile, uses machine learning and policy-driven analytics tools to help guide IT administrators on what infrastructure resources need tweaking to keep the cost of running their applications and workloads – in the cloud and on-premise – in check.
In a similar vein is Cisco’s Workload Optimisation Manager, which can be combined with the firm’s AppDynamics offering and uses automation to “intelligently” allocate infrastructure resources to the applications that need it, while using AI to proactively pinpoint problematic areas within the stack that might need tweaking for performance reasons.
There are also offerings from suppliers that focus on monitoring a specific part of the infrastructure. These include Cisco-owned ThousandEyes, a software-as-a-service (SaaS) portal that keeps close tabs on an enterprise’s networking infrastructure, while troubleshooting any application delivery problems that may emerge as a result of connectivity issues.
Careful consideration required
Given the variety of tools, technologies and approaches that enterprises can take, it is important that CIOs take the time to properly assess their options, says Dave Locke, chief technology advisor for Europe, the Middle East and Africa at IT services provider World Wide Technology (WWT).
“Intelligent workload management tools now comprise a diverse ecosystem of vendors, where only the integration of multiple tools gives enterprises a sufficiently intelligent view of their workloads,” he says.
“A workload is essentially made of three layers: the application; the user data; and measurement metrics. Effective modern-day workload management relies on monitoring a combination of digital experience, infrastructure performance, and application performance.
“If the tools you deploy to measure these elements don’t work together smoothly, then you are undercutting the very thing you set out to solve – the improvement of customer experience,” adds Locke.
CIOs can overcome this issue by engaging the services of an integration partner, which can meld together technologies concerned with managing different parts of the infrastructure and different application types to provide the technology team with a more complete view of what is going on, he suggests.
“For example, the combination of Cisco’s ThousandEyes, AppDynamics and CWOM [Cisco Workload Optimization Manager] software add up to a more complete picture of digital experience monitoring, infrastructure performance and application performance,” says Locke. “All three of these are important in accurately understanding where workloads could be optimised or moved.”
For all these reasons, Alex Chalkias, product manager at enterprise-focused open source operating system software maker Canonical, says it is vitally important that CIOs take their time rather than rush into any intelligent workload management deployment.
“When deciding which intelligent workload tools to use in cloud environments enterprises should ask themselves questions about application management, and not configuration management,” he says.
“For instance, where should the application run? Which software components should be included in the scenario? What resources should be allocated, and how should that scenario be integrated into the wider estate? The answers to these questions define the business intent of running the workloads and are far more important than the details of a particular configuration file.”
Read more about datacentre systems management
- With enterprises becoming more mindful of how sustainable their IT consumption habits are, Google outlines its work to ensure the datacentres underpinning its cloud platform continue to push the envelope on energy efficiency.
- The potential for artificial intelligence (AI) to cut the power consumption of datacentres is an area of growing interest in the industry, as operators seek new ways to reduce costs and drive up the performance of their facilities.