Over the past few years, enterprise IT has been looking at scale-out platforms. The idea is that a computing workload can be dealt with by throwing more resources at it – and these resources need only be basic, commodity systems. Many existing IT platforms are based on virtualised Intel or AMD servers, with different storage systems attached to deal with different I/O (input/output) needs. But this is changing.
IT providers, too, appear to be changing their approach. The realisation is dawning that an online transaction processing (OLTP) workload does not use the same resources as big data, number-crunching or communications workloads. Different workloads may need different technology stacks – and in the world of cloud, this can cause issues.
Not only do different workloads require different basic resources, but hybrid cloud – where workloads can run on private or public clouds – also needs far more intelligent workload management. Wrap all of this up with DevOps, and the need to test and deploy workloads automatically – in the right place at the right time – and many existing tools are simply no longer up to the job.
IBM has long been a company that believes in the need for different platforms for different workloads. Its mainframe/Power/Intel strategy gave the biggest spread of capability of any company, and its Tivoli portfolio provides tools for deploying and managing workloads. But having divested the Intel side of the business to Lenovo, IBM is having to position Power as its mainstay – and this may cause it problems in the short term, as existing customers that depend on Microsoft look elsewhere for deep Intel/AMD capabilities.
HP still has Itanium – an Intel chip designed to handle a wider variety of workloads than the mainstream Xeon line. HP has worked to port different operating systems to Itanium, which now runs HP-UX, OpenVMS, NonStop OS, Gentoo and Suse Linux, and Bull GCOS, alongside Windows workloads. Combined with its standard Intel and converged systems platforms, this gives HP a multi-faceted workload platform. Like IBM, HP has an extensive portfolio of workload automation and systems management products.
Dell has long been an Intel devotee. Its PowerEdge range of servers and blades was built around Intel architectures. Dell has bought a number of systems management and workload automation suppliers over the past few years (Quest, Kace, Scalent, Boomi, Enstratius) resulting in its Active Systems Management (ASM) product, and has also introduced graphic processing unit (GPU) accelerators and Atom processors to its PowerEdge range – particularly in its blade systems.
Intel, meanwhile, is betting on workload-optimised silicon as datacentre workloads become more sophisticated and varied, and as facilities begin to feel the pressure of new trends such as mobility, the internet of things (IoT), cloud computing and big data.
According to the company, datacentre architecture is under pressure and the enterprise datacentre needs to be redesigned as we enter the “era of analytics”.
“Twenty years ago, datacentres were built for monolithic workloads,” said Alan Priestly, Intel's Europe director of big data analytics at the company’s datacentre and IoT innovation event in London on Wednesday. “But today, workloads are more fragmented.”
The use of more specific hardware through different central processing units (CPUs) has been growing. Some time back, IBM looked at using its Cell CPUs as a workload acceleration platform, but backed off. Azul Systems has long provided specialised systems for running Java-based workloads. Nvidia and ATI Technologies provide GPU-based systems for specialised workloads. Intel has its Atom low-end processor; ARM has its Cortex.
As the world moves over to converged systems, the use of such off-load engines based on specialised silicon may provide a competitive edge. However, software will be required that understands in real time what is needed by the different workloads thrown at a platform, and how to best deal with these.
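The placement logic such software needs can be sketched very simply. The snippet below is a minimal, hypothetical illustration (the `Platform`, `Workload` and `place` names are my own, not any vendor's API): each workload declares what it needs, each platform declares what it offers, and a scheduler picks a platform that satisfies the requirements – here preferring the one with the most free capacity, so load is spread rather than packed.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Platform:
    """A pool of resources a workload could be placed on (hypothetical model)."""
    name: str
    free_cores: int
    has_gpu: bool = False  # e.g. an off-load engine such as a GPU accelerator

@dataclass
class Workload:
    name: str
    cores_needed: int
    needs_gpu: bool = False

def place(workload: Workload, platforms: list[Platform]) -> Platform | None:
    """Return a platform that meets the workload's requirements, or None.

    Filters out platforms that cannot satisfy the workload, then picks the
    one with the most spare capacity (a worst-fit policy, which spreads load).
    """
    candidates = [
        p for p in platforms
        if p.free_cores >= workload.cores_needed
        and (p.has_gpu or not workload.needs_gpu)
    ]
    if not candidates:
        return None  # no legal placement; a real system would queue or alert
    return max(candidates, key=lambda p: p.free_cores)
```

A real workload automation product layers far more on top – live telemetry, priorities, and the upstream/downstream effects on other workloads – but the core question is the same: which available resource best matches this workload's profile?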
Again, IBM has shown promise here. Its zEnterprise platform provides a mix of mainframe and Power capabilities (and the capability to add x86 blades) and has software it terms a “universal resource manager” (URM) that intelligently deals with different workload requirements. The concept is carried through into IBM’s PureFlex systems – but the impact of the Lenovo x86 sell-off on this has yet to be seen.
Outside of the hardware suppliers, a few independent software providers are active in the market. CA Technologies is in the middle of a portfolio rationalisation, and it has made strong acquisitions along the way – for example, 3Tera, Nimsoft, Nolio, Hyperformix – that put it in a very strong position when it comes to managing workloads in an intelligent manner. Likewise, BMC is working hard to place itself in contention. Since taking itself private in 2013, it has re-focused on R&D and is also rationalising its portfolio and re-engineering its back-end systems to be more hybrid-cloud-friendly.
For the IT professional, workload management is now an imperative. Whether it is a case of managing different workloads against different platforms, ensuring that a workload is on the right part of a hybrid cloud platform at the right time, or that DevOps activities are suitably automated, managing workloads manually is no longer an option.
Choosing the right system also means checking that the supplier has a credible vision for where workload management is heading.
Workload automation checklist
Good workload automation helps businesses align their processes effectively with customer expectations.
Check on the following when looking at workload automation systems:
- Make sure it is intelligent: Workload automation can be an abused term. Just being able to automate simple IT processes is not enough – the system must have the intelligence to understand what the workload requires, and what resources are available to apply to it. It must be able to choose the right resources – and understand the upstream and downstream effects this could have on other workloads;
- Support for all your current and foreseeable platforms: For example, if you have a mainframe, then IBM, CA and BMC are your main options. If you are looking at GPU support, make sure these are included in the software tools;
- Support for physical and virtual platforms: Bear in mind that virtual systems depend on physical ones. Any chosen system must be able to identify where a root problem lies and use automation to ensure that business continuity is maintained wherever possible;
- Hybrid private/public cloud capabilities: It is more than likely that your long-term platform will end up being a mix of cloud-based platforms (along with some virtualised and physical systems). Make sure your chosen system can adequately manage workloads across such heterogeneity;
- Ease of use: The move is toward a far more self-service, business-oriented technical environment. Tools should hide as much of the complexity from the user as possible – and should also enable meaningful reports back into the business as to what has happened, what is happening and what is likely to happen in the future;
- Cross-functional capabilities: Workload automation is not just about helping the datacentre professionals. It has to embrace the development and test teams to fit in with DevOps approaches. It needs to fit in with help desk systems, bring your own device (BYOD) mobility and self-service. IT staff need to look at what value workload automation brings to the business – not just to themselves.
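The hybrid private/public cloud point above can be made concrete with one illustrative rule. This is a sketch under assumptions of my own (the `route` function and its fields are hypothetical, not any product's interface): workloads flagged as sensitive – regulated data, say – may only run on the private cloud, while everything else bursts to public cloud when private capacity is exhausted.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cores: int
    sensitive: bool = False  # e.g. regulated data that must stay on-premise

def route(w: Workload, private_free_cores: int) -> str | None:
    """Hypothetical hybrid-cloud placement rule.

    Sensitive workloads may only run privately; non-sensitive ones burst
    to the public cloud when the private platform is full. Returns the
    target ("private" or "public"), or None when no placement is legal.
    """
    if private_free_cores >= w.cores:
        return "private"  # prefer the private cloud whenever it has room
    return None if w.sensitive else "public"
```

A production tool would add cost models, latency constraints and compliance checks, but the essential heterogeneity problem – one policy spanning physical, virtual, private and public platforms – is captured by rules of this shape.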
Workload automation is a critical part of making sure that a modern IT platform supports the business effectively. The supplier community has woken up to this and their product suites are changing rapidly to meet more needs. But it is still down to the IT professional to choose the right option for the business.