Hyper-converged systems have taken off over the past few years. According to last year’s ComputerWeekly/TechTarget IT Priorities survey, the proportion of respondents that planned to deploy hyper-converged infrastructure (HCI) had nearly trebled year-on-year, from 9% to 22%.
Many suppliers have embraced the hyper-convergence wave with offerings to suit a range of budgets, but before any organisations plunge into the world of HCI, they need to know what it is and, more importantly, whether it is right for them.
While most organisations have historically built out their own compute, storage and networking, HCI puts all three together in the hope of taking some pain out of making different bits of equipment play nicely with each other.
It does this by bringing together these IT silos in one box and allowing IT administrators a single management user interface. IT can then put these nodes together like children’s building blocks to create clusters.
Clusters offer compute resources for the application layer, along with shared or distributed storage to meet storage requirements. Software-defined architecture enables local storage on multiple compute nodes to be used as a shared storage resource. Hypervisors enable users to control virtual machines on a compute platform that provides software-defined shared storage.
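As a rough illustration of how that pooling works, the sketch below models a cluster whose software-defined storage layer aggregates each node’s local disks into one shared pool, keeping multiple copies of data for resilience. The node sizes and replication factor are hypothetical, not taken from any particular vendor:

```python
# Hypothetical sketch: usable capacity of an HCI cluster whose
# software-defined storage layer pools each node's local disks
# and stores every block `replication_factor` times for resilience.

def usable_capacity_tb(nodes_raw_tb, replication_factor=2):
    """Pool the local storage of all nodes, then divide by the
    replication factor, since each block is held that many times."""
    pooled = sum(nodes_raw_tb)          # aggregate of all local disks
    return pooled / replication_factor

# Four identical nodes, each contributing 10 TB of direct-attached storage
cluster = [10, 10, 10, 10]
print(usable_capacity_tb(cluster))      # → 20.0 usable TB with two copies
```

The point of the sketch is simply that the shared pool grows (and shrinks in usable terms) with the nodes themselves, which is what lets administrators treat the cluster’s building blocks as a single storage resource.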
There is also a high degree of automation that allows management of resources within and across datacentres.
Among the benefits of HCI is operational ease. Having three technology stacks on a single platform reduces complexity, eliminates the need to evaluate storage arrays, standardises compute nodes for ease of maintenance and scales as needed, which helps avoid over-provisioning.
As mentioned above, there is a single interface to provision, deploy and manage compute, storage and networking.
Hyper-converged infrastructure combines compute, networking and storage into a single physical appliance, eliminating the costly and time-consuming deployments in which multiple teams coordinate compute servers, network gear and storage arrays. Teams no longer need to connect hardware, validate compatibility or wrestle with complex configuration and customisation for organisation-specific needs.
As it runs as a single platform and storage is available in the hypervisor, it is easy to add capacity and manage all resources for an application.
For motorcycle racing team Ducati, a major benefit of using an HCI solution from NetApp was that it provided the speed and capacity needed not only to manage branch operations, but also to perform data analysis on-site during tests, free practices, qualifying and the race, according to Stefano Rendina, IT manager at Ducati Corse.
“For example, we can now use all our time on the track to check the data of each motorcycle, produced during the session, and then use different types of algorithms to decide which is the best solution to adopt for the setup of the motorbike,” he says.
Behind market leaders such as Dell EMC/VMware and Nutanix are offerings from Cisco with its HyperFlex platform, HPE with SimpliVity, and NetApp with NetApp HCI. There are also a number of smaller niche players such as Pivot3 and HyperGrid.
Why not SAN or NAS?
In a traditional datacentre, there is a three-tier architecture, with hosts on the first tier, storage or SAN switches on the second tier, and SAN and NAS array controllers on the third.
This architecture allows centralised consolidation of storage resources at block and file level, while provisioning and management happen in one place, at the storage controller.
In contrast, hyper-converged infrastructure comprises servers armed with direct-attached storage (DAS), so there are no separate storage switches or SANs connected to hosts. Having said that, many HCI products can present themselves as storage to other servers.
Shared storage benefits
Shared storage has been the solution for high-availability and high-performance applications. NAS has long provided simple, inexpensive file sharing across an enterprise. SAN is more robust, for applications such as transaction-intensive databases, and when SAN is software-defined, it can lower costs, increase reliability and scale more easily.
“In comparison to HCI, shared storage has retained the top spot for performance, workload combination, automation and configuration flexibility. This is changing as manufacturers target enterprise-scale, hyper-converged and cloud infrastructure,” says Paul Mercina, head of innovation at third-party maintenance company Park Place Technologies.
He adds that when using shared storage, one can add storage capacity without adding compute power. “In an HCI scenario, on the other hand, storage and compute come ‘bundled’, so adding storage means adding compute as part of a new appliance. Some suppliers are now offering disaggregated HCI to overcome this barrier,” he says.
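The scaling difference Mercina describes can be sketched in a few lines. The node sizes below are purely illustrative: in classic HCI each appliance bundles compute and storage, so adding one drags the other along, whereas a disaggregated design adds them separately:

```python
# Hypothetical sketch of HCI scaling: bundled appliances vs
# disaggregated storage nodes. All figures are illustrative only.

BUNDLED_NODE = {"cores": 32, "storage_tb": 20}   # compute and storage together
STORAGE_NODE = {"cores": 4,  "storage_tb": 40}   # just enough CPU to move data

def add_nodes(cluster, node, count):
    """Grow a cluster by appending `count` copies of a node type."""
    return cluster + [dict(node) for _ in range(count)]

# Classic HCI: 40 TB more storage means two bundled appliances,
# which also adds 64 cores whether or not the compute is needed.
classic = add_nodes([], BUNDLED_NODE, 2)
print(sum(n["cores"] for n in classic))          # → 64

# Disaggregated HCI: the same 40 TB arrives as one storage node,
# adding only the handful of cores needed to serve data.
disagg = add_nodes([], STORAGE_NODE, 1)
print(sum(n["cores"] for n in disagg))           # → 4
```

The arithmetic is trivial, but it captures why bundled scaling can over-provision one resource while chasing the other, and why disaggregation removes that coupling.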
Why choose HCI over shared storage?
Originally, hyper-converged infrastructure offered a “building block” of compute, software-defined storage and virtualised networking. The main advantage of HCI over traditional SAN or NAS storage is that it can be increased in comparatively small amounts, rather than having to buy a NAS or SAN system and wait for it to be filled, which can take time.
Adrian Davies, IT operations manager at Stockport Council, says using an HCI system such as the one recently installed at the local authority is a “totally different landscape to how it worked with an old three-tiered SAN, where you would be really worried about upgrading it”. Such an upgrade, he adds, would have been “a nightmare”.
The system the council has installed is from Nutanix and upgrades on the fly. “It’s more like having an iPhone, where you just get the latest version of iOS and it brings a lot of new features and functionality,” says Davies.
The decision point
The market for HCI – as well as SAN and NAS – is in flux, making the decision point of which one to choose more complicated.
According to Peter Smith, senior systems engineer at Tintri by DDN, the lines between HCI and SAN/NAS will always be blurred and a lot of the decision-making process will come down to personal preference.
“But, in most use cases, HCI still operates best within the SME [small and medium-sized enterprise] space, while traditional three-tier architecture based on SAN and NAS offers more flexibility within the larger enterprise-scale organisations,” he says.
Park Place Technologies’ Mercina says some suppliers are disaggregating HCI, so compute and storage can be scaled independently.
“In this case, a storage node has only enough CPU to move data, and a compute node has only enough storage capacity to support local workloads. The nodes still function as HCI ‘building blocks’, but when scaling, one can add storage nodes and/or compute nodes as required. This form of HCI is more complex to deploy,” he says.
He adds that suppliers’ performance improvements could lead larger enterprises to tap HCI for a greater variety of use cases, edging out SAN.
“Disaggregation could also play a role, as larger enterprises won’t generally be stymied by the modestly increased complexity or the minimum node requirements, but will enjoy the added flexibility when scaling. The most exciting developments, however, may be in composable infrastructure – the next step beyond HCI – often summarised as infrastructure as code,” he says.
It is important to realise that these technologies are not in direct competition as such. Most organisations are best served by a mix of options, matched to each workload, and they remain best placed to decide for themselves which is right for their own requirements.
Read more about hyper-converged infrastructure
- What are the advantages of do-it-yourself hyper-converged infrastructure deployment and who are the key suppliers of HCI software? We run the rule over the DIY hyper-convergence market.
- As hyper-converged system revenues get set to overtake those of converged systems, Dell/VMware and Nutanix are dominating the market for combined server and storage nodes.