SME storage: Will hyper-converged eclipse the entry-level array?

Has the entry-level storage array had its day?

Hyper-converged infrastructure seems to be one of the strongest trends in storage right now, judging by the admittedly unscientific measure of what comes across my desk. And there are lots of reasons to think that discrete servers and storage are simply too much of a faff compared with buying it all in one box.

That’s what hyper-converged offers: server and storage in one box, usually with the hypervisor built in, and often in a scale-out architecture that can add nodes to create sizeable clusters with a single pool of storage.

For the SME that’s often an ideal proposition. It means that – without a dedicated IT team in-house – they can deploy servers and storage that are pretty much guaranteed to work together right out of the box, often with easy-to-use wizard setup interfaces. It also means there is only one throat to choke, with no possibility of server, storage and virtualisation vendors deflecting blame onto each other.

Now, too, hyper-converged comes with flash options, so there is the possibility of supporting performance-hungry workloads, at least to some extent.

All in all, it has to be said that any SME looking to refresh storage (and servers) right now needs to look at hyper-converged as an option.

This shift probably comes at a time when many small organisations have only recently moved onto shared storage and away from servers with storage in one box.

As the wave of virtualisation rolled across the world of IT, servers with direct-attached storage became inadequate to the task of running many virtual machines, with DAS acting as an I/O bottleneck.

Now hyper-converged is something of a return to that model, but with the pooling of storage resources across many nodes, that bottleneck should be bypassed.

But are there cases where a discrete shared storage array may still fit the bill – for example, if high performance is an over-riding need?

All-flash is still best handled by a dedicated array – or is at least more widely available as one – although hyper-converged products increasingly come with solid-state storage options.

Also, organisations that envisage scaling up fairly extensively might favour a shared storage array, where that array is part of a product family that offers bigger, faster options in the same operating environment.

For some, also, it might simply be too soon to jump to such a young technology.

So, clearly, the question now for SMEs is hyper-converged versus a “traditional” servers-plus-array architecture. The drill-down question within that is: which use cases dictate one or the other?

Join the conversation

1 comment


Hi Antony, you are right. Apart from a limited number of cases, hyper-converged (HC) will be the preferred solution for SMEs for one main reason: simplicity. It has become so mainstream, with many vendors, that vendor margins are quite thin and sometimes it is almost as cheap as assembling it yourself.

However, I disagree with this statement: "So, clearly, the question now for SMEs is hyper-converged vs “traditional” servers plus array architecture". The choice is between HC and compute servers running SDS, thus doubling as storage (and in practice delivering an HC solution). The storage array is phased out, no longer in the picture.

Boyan @ StorPool Storage
