The pitfalls of hyper-converged infrastructure and how to avoid them

Hyper-converged infrastructure can be an easy win when it comes to datacentre upgrades, but there are pitfalls. We discuss how best to sidestep them

Many organisations are rolling out hyper-converged infrastructure (HCI) to replace their server and storage architectures.

Hyper-converged combines compute, networking and storage into one virtualised system, making it easy to deploy, manage and scale.

But hyper-converged isn’t entirely risk-free. There are numerous mistakes that companies make when buying, sourcing, specifying and implementing such systems. We look at the common problems and how best to avoid or minimise them.

Many enterprises believe that implementing HCI will be the panacea for all datacentre ills. While it brings with it a degree of flexibility around deployment and provisioning, there are still elements to be managed.

Paul Mercina, head of innovation at Park Place Technologies, points out that uniting storage, compute and networking in a single box also means breaking down silos among datacentre staff.

This can cause issues during deployment and long-term management, because it’s not always clear which team should be responsible or where the necessary expertise resides.

“Rather than needing storage specialists, for example, the enterprise may want more generalists with an understanding of the various HCI ‘ingredients’,” says Mercina. “Such a shift in human resources isn’t usually a flip-of-the-switch endeavour, culturally or from a recruitment perspective, especially in a tight job market with a big IT talent gap.”

Not giving storage enough consideration

One of the biggest challenges with hyper-converged is that although it offers a building-block approach to storage, compute and networking, a certain amount of compute typically comes bundled with a certain amount of storage.

“If the enterprise underestimates storage needs, additional nodes can be added but they’ll come at a higher cost because they carry with them additional compute, beyond the organisation’s needs,” says Mercina.
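To see how that plays out, here is a rough, purely illustrative sizing sketch in Python. The node specification and workload figures are assumptions rather than any vendor’s datasheet, but they show how a storage-heavy requirement forces the purchase of compute nobody asked for.

```python
# Illustrative only: rough node-count sizing for an HCI cluster where compute
# and storage scale together. The node specification and workload figures are
# hypothetical, not taken from any particular vendor's datasheet.

import math

NODE_CORES = 32        # usable CPU cores per HCI node (assumed)
NODE_STORAGE_TB = 20   # usable storage per HCI node after replication (assumed)

def nodes_required(cores_needed: int, storage_needed_tb: float) -> dict:
    """Return the node count and the excess capacity it drags along."""
    by_compute = math.ceil(cores_needed / NODE_CORES)
    by_storage = math.ceil(storage_needed_tb / NODE_STORAGE_TB)
    nodes = max(by_compute, by_storage)
    return {
        "nodes": nodes,
        "spare_cores": nodes * NODE_CORES - cores_needed,
        "spare_storage_tb": nodes * NODE_STORAGE_TB - storage_needed_tb,
    }

# A storage-heavy workload: modest compute, lots of data.
print(nodes_required(cores_needed=64, storage_needed_tb=400))
# -> 20 nodes to satisfy storage, leaving roughly 576 cores of unused compute
```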

This is one reason hyper-converged hasn’t been seen as a good solution for big data analytics or artificial intelligence applications, where the compute-to-storage ratio and scaling demands are unbalanced.

This has led to some vendors disaggregating their HCI offerings, so that compute and storage can be scaled independently to make hyper-converged applicable to a wider variety of use cases.

Sometimes it is possible to specify HCI nodes as either storage or compute nodes. Storage nodes have just enough central processing unit (CPU) capacity to move data, while compute nodes have just enough storage capacity to support local workloads.
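Extending the same hypothetical figures, a disaggregated model lets each pool be sized independently. The node specifications below remain assumptions for illustration only.

```python
# Continuing the hypothetical sizing above: if storage-optimised nodes are
# available, capacity can grow without dragging unneeded compute along.

import math

COMPUTE_NODE_CORES = 32   # cores per compute-optimised node (assumed)
STORAGE_NODE_TB = 60      # usable TB per storage-optimised node (assumed)

def disaggregated_nodes(cores_needed: int, storage_needed_tb: float) -> dict:
    """Size the compute and storage node pools independently."""
    return {
        "compute_nodes": math.ceil(cores_needed / COMPUTE_NODE_CORES),
        "storage_nodes": math.ceil(storage_needed_tb / STORAGE_NODE_TB),
    }

# Same storage-heavy workload as before: 64 cores, 400 TB.
print(disaggregated_nodes(64, 400))
# -> 2 compute nodes plus 7 storage nodes, rather than 20 full HCI nodes
```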

Misjudging network needs

Latency can increase if hyper-converged runs on a poorly performing network, because storage traffic between nodes has to cross that network. Enterprises should therefore invest in the network to get the best from HCI deployments in high-performance clusters.

Some applications simply cannot cope with the hyper-converged architecture, such as those that require very high local disk I/O or network throughput.

“In the past, critical apps and databases posed a bit of a challenge in a hyper-converged environment,” says Ezat Dayeh, SE manager at Cohesity. “This was the challenge of restricted I/O and bottlenecks when scaling up and maximising hyperconverged nodes to meet Tier 1 demands for performance and processing.”

Dayeh points out that this doesn’t have to be the case. With intelligent placement of data within the cluster and even distribution across all nodes within it, you can avoid I/O bottlenecks.
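The underlying idea is to spread data deterministically and evenly across all nodes so no single node becomes an I/O hotspot. The sketch below illustrates the principle with plain hash-based placement; it is not how any particular HCI platform implements it.

```python
# Minimal sketch of spreading data blocks evenly across cluster nodes so no
# single node becomes an I/O hotspot. Real HCI platforms use their own
# placement and replication logic; this only illustrates the general idea.

import hashlib
from collections import Counter

NODES = ["node-1", "node-2", "node-3", "node-4"]

def place_block(block_id: str, nodes: list[str]) -> str:
    """Deterministically map a block to a node by hashing its identifier."""
    digest = int(hashlib.sha256(block_id.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

# Distribute 10,000 blocks and check how evenly they land.
placement = Counter(place_block(f"block-{i}", NODES) for i in range(10_000))
print(placement)  # counts should be roughly equal across the four nodes
```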

What to consider when scaling up

Organisations need to avoid any disruption that might result from scaling up in the future, so an HCI deployment should not be sized on current needs alone.

It is best to think about the needs of the organisation three to four years into the future. This means choosing technology that can run a broad range of applications, understanding what those future workload requirements might be, and anticipating growth in data.
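A back-of-the-envelope projection makes the point. The starting capacity and 30% annual growth rate below are assumptions for illustration; substitute your own figures.

```python
# Back-of-the-envelope capacity projection: size the cluster for where the
# data will be in three to four years, not where it is today. The 30% annual
# growth rate and starting capacity are assumptions for illustration.

def projected_capacity_tb(current_tb: float, annual_growth: float, years: int) -> float:
    """Compound the current capacity forward by a yearly growth rate."""
    return current_tb * (1 + annual_growth) ** years

current = 100.0  # TB of usable data today (assumed)
for years in (1, 2, 3, 4):
    print(f"Year {years}: {projected_capacity_tb(current, 0.30, years):.0f} TB")
# Year 4 works out at roughly 286 TB - nearly three times today's footprint
```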

Hard or soft HCI: Which to choose?

Many businesses buy hyper-converged in a hardware appliance format, but is that the best way?

The choice comes down to the issues enterprises are trying to solve. Appliances can be easier to deploy, so if accelerated deployment is the goal, hardware makes most sense.

Software-based HCI, on the other hand, can be more flexible, allowing businesses to change hardware suppliers whenever they need to. But its do-it-yourself nature makes deployment more labour intensive.

Avoiding supplier lock-in

Buying a supplier’s hardware and software together can lead to vendor lock-in, which may put a brake on innovation. Hyper-V and VMware account for the bulk of the hypervisors used in HCI architectures, but storage software can also pose lock-in issues.

“When pursuing the software route by purchasing a hyper-converged solution to run on your own hardware, you’re likely buying proprietary storage software which can tie you to the vendor,” says Mercina.

He adds that open source hyper-converged infrastructure offers an alternative. “Some organisations may find this appealing, but others will find they lose the very simplicity that had them looking at HCI in the first place.”

Multiple suppliers and HCI

One of the big reasons for moving to HCI is consolidation. Removing the separate storage tier and combining it with the compute layer brings cost benefits and takes up less space. But bringing in yet another vendor because you want a particular HCI system can add complexity.

Many datacentre managers become frustrated by support complexity when each OEM has a different support process, different warranty coverage and different fine print. There can also be a buck-passing mentality when equipment from multiple vendors is involved in an issue.

“Customers need to keep in mind not just the deployment phase but also maintenance and whether they want to add players into that mix as well,” says Mercina.

Considering the whole SDDC stack

Hyper-converged originally came to market with the promise that it would reduce management overheads. Networks have traditionally been fairly static and need a lot of care from a maintenance and management perspective. What HCI has done is help open up the world of software-defined networking (SDN).

Andrew McDade, business lead for software-defined compute at HPE in the UK and Ireland, says that businesses want the management benefits HCI brings to compute and storage, so it would be “crazy” not to consider the whole stack.

“Software-defined networking has come on a great deal in the past few years, giving you the simplicity of templated automation and orchestration at a network level. With open APIs and infrastructure as code, the timing couldn’t be better to look at your network strategy and how SDN could play a key role in your future IT agility,” he says.
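As a rough illustration of the “network as code” idea McDade describes, the sketch below declares a desired network state as data and pushes it to an SDN controller over a REST API. The controller URL, endpoint and payload schema are entirely hypothetical; a real deployment would use the controller vendor’s documented API.

```python
# Hedged sketch of "network as code": a desired network state is declared as
# data and pushed to an SDN controller over a REST API. The controller URL,
# endpoint and payload schema are hypothetical - substitute your platform's
# documented API.

import json
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"  # hypothetical

desired_network = {
    "name": "hci-cluster-east",
    "vlans": [
        {"id": 110, "purpose": "vm-traffic"},
        {"id": 120, "purpose": "storage-replication"},
        {"id": 130, "purpose": "management"},
    ],
    "qos": {"storage-replication": "high-priority"},
}

# Keeping this definition in version control and applying it via the API is
# what makes the network repeatable and auditable, like any other code.
response = requests.post(
    f"{CONTROLLER}/networks",
    data=json.dumps(desired_network),
    headers={"Content-Type": "application/json"},
    timeout=30,
)
response.raise_for_status()
print("Network template applied:", response.json().get("name"))
```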

Look before you leap

Best practice when deploying HCI is to carry out due diligence and run proofs of concept to see whether hyper-converged is what you need. HCI doesn’t fix all problems.

Other solutions exist, such as converged infrastructure – the precursor to hyper-converged infrastructure – and also composable infrastructure, which can provide more flexible resource allocation and scalability. This latter option could be best for some enterprises that want to bypass HCI and head towards a software-defined datacentre future. 
