
Tips for buying hyper-converged systems

Hyper-converged infrastructure has become a commonly used term. But what is it? And is it necessary for your organisation?


Hyper-converged infrastructure (HCI) is a concept that is already somewhat tarnished. Many storage suppliers are touting systems as being hyper-converged, when they should just be described as converged.

Worse still, many suppliers in the storage and server business are using hyper-converged to describe approaches to both storage and combined server, storage and network platforms.

All this means that making sense of what is actually being referred to can be difficult. For simplicity, the two terms can be distinguished as follows.

First, a converged system brings together all the hardware and software required for a single task – in the case of storage suppliers, this is the storage and management of data. As such, a converged storage appliance may still have a server chip in it, memory, network ports and storage systems, but you would not be able (nor would you want) to install and run, say, Microsoft Exchange on it.

Second, a hyper-converged system brings together all the hardware and software required for multiple tasks – a shared environment that runs workloads that still use data storage, but are not just focused on the storage environment. As such, alongside all the software that allows the platform to run, users can install and run applications and services that carry out a business purpose, such as enterprise resource planning (ERP), customer relationship management (CRM), big data analytics, and so on.

To muddy the waters a little more, many hyper-converged systems, while fully capable of running multiple workloads, are targeted at specific workloads – particularly those such as virtual desktop infrastructure (VDI) or big data analysis.

Hardware meets software

It’s a confusing world we live in – but let’s try to clear the waters.

A hyper-converged platform must start with the hardware. Servers, storage and networking need to be brought together in a single environment. Arguably, the first hyper-converged platform was the mainframe, but the first real offering in the Intel space was the Unified Computing System (UCS) platform from Cisco, launched in 2009.

This was based on earlier approaches of creating engineered systems using “bricks”, where hardware components could be built up within a specialised chassis to create a computing system. However, most brick-based systems were still based on the use of storage area networks (SANs), so missing out on the advantages of having storage components close to the server.

By providing a chassis approach to configure various hardware components, several enhancements to the engineering could be applied. For example, whereas a standard scale-out approach uses Ethernet to create an IT platform built from different server and storage components, hyper-converged systems can use proprietary connections within the overall system.

With little or no need for the internal systems to be highly standardised, hyper-converged systems can be highly tuned to maximise performance. Only when connectivity is required between the hyper-converged system and the rest of the world do standards become necessary.

As Cisco’s UCS started to show promise, other providers brought offerings to market, such as VCE’s Vblock, Dell’s FX2, HP’s hyper-converged systems and IBM’s PureSystems. The majority of these first-generation HCI systems used proprietary firmware and software to support standardised operating systems running on them.

Adding value

A new group of software-focused suppliers came to market to provide hyper-converged operating systems that added more value through advanced functions, such as virtual machine (VM) and/or container management, alongside data enhancements such as global data deduplication and compression.
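
To make the deduplication idea concrete, here is a toy sketch in Python of content-based deduplication: identical blocks of data are stored only once and referenced by a content hash. It is an illustration of the technique, not any supplier’s implementation – the fixed block size and in-memory store are simplifications, where real platforms use tuned (often variable) block sizes and distributed metadata.

```python
# Toy sketch of content-based deduplication: identical blocks are
# stored once and referenced by their hash. Block size and the
# in-memory store are illustrative simplifications.
import hashlib

BLOCK_SIZE = 4096  # bytes; real systems tune this, often with variable-size blocks

def dedup_store(data: bytes, store: dict[str, bytes]) -> list[str]:
    """Split data into fixed blocks, keep each unique block once, return references."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # only the first copy is kept
        refs.append(digest)
    return refs

store: dict[str, bytes] = {}
refs_a = dedup_store(b"A" * 8192, store)
refs_b = dedup_store(b"A" * 8192, store)  # duplicate data adds no new blocks
print(f"{len(store)} unique block(s) backing {len(refs_a) + len(refs_b)} references")
```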

Some of these suppliers provided both software that could be installed on other suppliers’ hardware and a complete hardware-plus-software system of their own; others just the software.

The main players in this area have been Nutanix and SimpliVity. Since its acquisition by HPE, SimpliVity is no longer seen as the hardware-agnostic player it once was. Nutanix is making the most of this by providing a supported version of its software on HPE hardware – much to HPE’s annoyance. Nutanix also has partnerships with Dell EMC and Lenovo.

Another supplier is Pivot3, which has developed its own software stacks – vSTAC and Acuity – that enable advanced policy-based data management to optimise mixed workloads across HCI platforms. Pivot3 says its approach gives buyers greater flexibility in the choice of hardware on which its software can run.

This is both a benefit and a drawback: applying HCI software to existing hardware brings obvious cost savings, as no new hardware needs to be purchased, but the lack of highly engineered interconnects and internal data buses means the resulting platform cannot reach the highest possible performance.

Several changes of direction and acquisitions later, VMware has taken a more software-based approach with its vSAN ReadyNode programme, Dell EMC has brought the Dell and EMC lines together in its VxRail appliances, HPE has acquired SimpliVity to bolster its offerings, and IBM has largely left the market, moving instead to its cloud-first SoftLayer model. Dell was also a Nutanix partner, a relationship that has continued into the Dell EMC era through Dell EMC’s XC platform.

What to look for

The first check a buyer must carry out when considering HCI is to ensure the platform is as future-proof as possible. Check that the system is guaranteed to be supported for the viable life of the platform. Ensure the supplier will continue to make building blocks available in the form of incremental server, storage and network capabilities. Ensure the supplier has a good story on supporting emerging technology – for example, will the system be able to support server-side PCIe NVMe or DIMM-based storage? The last thing you need is a system that costs a lot to acquire and then cannot provide flexibility in the future.
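
As a concrete illustration of that last check, the minimal sketch below lists the NVMe controllers a node exposes through sysfs. It assumes a Linux host with sysfs mounted at /sys; the paths are standard kernel interfaces, but this is an illustrative check, not a vendor compatibility test.

```python
# Minimal sketch: list NVMe controllers visible to a Linux node via sysfs.
# Assumes Linux with sysfs mounted at /sys; illustrative only.
from pathlib import Path

def list_nvme_devices() -> list[str]:
    """Return the NVMe controller names (e.g. 'nvme0') exposed by the kernel."""
    nvme_class = Path("/sys/class/nvme")
    if not nvme_class.is_dir():
        return []  # no NVMe driver loaded, or not a Linux host
    return sorted(dev.name for dev in nvme_class.iterdir())

if __name__ == "__main__":
    devices = list_nvme_devices()
    if devices:
        print("Server-side NVMe present:", ", ".join(devices))
    else:
        print("No NVMe controllers found on this node.")
```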

Next, check on the software capabilities. Even though HCI is based on an engineered system bringing different resources together, it is unlikely to be the only system your company will use. Management, orchestration, monitoring and other capabilities in the software must be able to traverse beyond the box itself and integrate seamlessly with other systems – whether they be physical, virtual, private or public cloud-based. Accept that the innards of the HCI system itself may be proprietary to a degree, but ensure that the system supports de facto standards where the rest of the world is concerned.
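
As a minimal sketch of what traversing beyond the box can look like in practice, the Python snippet below polls a hypothetical HCI manager’s REST endpoint and hands the result to an external monitoring hook. The URL, token and field names are invented for illustration and are not any vendor’s actual API; real HCI platforms expose their own management APIs, which is exactly what a buyer should be evaluating.

```python
# Sketch of "traversing beyond the box": poll an HCI manager's REST API
# and hand the result to an external monitoring system. The endpoint,
# token and field names are hypothetical, not any vendor's actual API.
import json
import urllib.request

HCI_MANAGER = "https://hci-manager.example.com/api/v1/cluster/health"  # hypothetical
API_TOKEN = "changeme"  # hypothetical bearer token

def fetch_cluster_health(url: str = HCI_MANAGER) -> dict:
    """Fetch a cluster health summary from the (hypothetical) HCI manager."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {API_TOKEN}"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def forward_to_monitoring(health: dict) -> None:
    """Stand-in for pushing metrics to an external monitoring system."""
    print(f"cluster={health.get('name')} status={health.get('status')}")

if __name__ == "__main__":
    forward_to_monitoring(fetch_cluster_health())
```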

Ask for references where the system has already been used for workloads similar to those your organisation will run on it. Ask whether the intended mix of workloads is viable, or whether it will result in sub-optimal performance. As with most things, a system designed to be all things to all people will fall short of its main goals somewhere. For example, a system pre-tuned to run VDI loads is unlikely to be good at big data analysis, and one targeted at high-throughput transactions may be less suited to managing large files, such as those used in design work.

Choose carefully

The HCI market is still maturing, so carrying out due diligence on a specific supplier’s performance to date may not throw up anything useful. As already discussed, the main hardware providers are already on their second or third attempt at getting it right; some of the software providers have disappeared or been acquired. The best thing to do is to ask around – find those with a similar workload or workload mix to your organisation and ask whether they have had any success with an HCI platform.

Overall, HCI does have a role to play. It provides a simpler means of acquiring a total platform that is faster to provision and easier to manage. HCI can perform better than platforms built from disparate discrete components. A well-engineered system provides flexibility for adding incremental resources as required without downtime.

However, a poorly chosen HCI system could end up being a white elephant – one that cannot be upgraded or that does not interoperate well with the rest of the IT platform.

Choose carefully – or choose based on tactical needs, understanding that the platform you choose may not have a significant useful life as part of your organisation’s overall IT platform.

