
It makes sense to go hyperscale/hyper-converged

Hyperscale computing and storage are the norm for web giants. Hyper-converged systems make it possible for small and medium-sized enterprises to gain the advantages of combined server/storage nodes.

Hyperscale computing is a tempting prospect, and almost everyone thinks they know the story.

The likes of Amazon, Google, Facebook and Microsoft achieved massive efficiency gains by opting for distributed and loosely-coupled architectures.

Instead of the monolithic, high-availability systems historically favoured in datacentres, they spread storage, computing and networking across hundreds or thousands of commodity server-based nodes.

The hyperscale pioneers even designed their own servers, switches and storage nodes along the way. The result is cloud services that readily scale up as demand increases, seamlessly adding resources as required.

They also gain fault tolerance, using sophisticated error correction techniques to widely distribute data – in effect using sheer scale to make up for not having the more highly tuned but expensive hardware that most enterprises use.
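To illustrate that principle in its very simplest form, the sketch below (in Python, purely illustrative and not any provider's actual scheme) splits a piece of data into chunks spread across several notional nodes plus one XOR parity chunk, so a single lost chunk can be rebuilt from the survivors. Production systems use far more sophisticated erasure codes, but the idea of trading a little extra capacity for resilience on cheap hardware is the same.

```python
# Minimal sketch: XOR parity across "nodes" (illustrative only).
# Real hyperscale stores use much more sophisticated erasure codes.

def split_into_chunks(data: bytes, n: int) -> list:
    """Split data into n equal-length chunks (zero-padded)."""
    chunk_len = -(-len(data) // n)                  # ceiling division
    padded = data.ljust(chunk_len * n, b"\0")
    return [padded[i * chunk_len:(i + 1) * chunk_len] for i in range(n)]

def xor_parity(chunks: list) -> bytes:
    """Compute a parity chunk: the XOR of all the given chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def rebuild_missing(chunks: list, parity: bytes) -> list:
    """Reconstruct a single missing chunk (simulating one failed node)."""
    missing = chunks.index(None)
    survivors = [c for c in chunks if c is not None] + [parity]
    chunks[missing] = xor_parity(survivors)
    return chunks

if __name__ == "__main__":
    data = b"hyperscale storage spreads data across many cheap nodes"
    chunks = split_into_chunks(data, 4)             # four notional "nodes"
    parity = xor_parity(chunks)                     # a fifth node holds parity
    chunks[2] = None                                # simulate a node failure
    recovered = b"".join(rebuild_missing(chunks, parity)).rstrip(b"\0")
    assert recovered == data
    print("recovered after single-node loss:", recovered.decode())
```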

In fact, if they hadn’t gone to hyperscale systems, with their automated provisioning and ability to seamlessly scale up, those cloud service providers would not be able to cope. Indeed, they probably would not have been able to develop and deploy the likes of AWS and Azure in the first place.

Hyperscale comes at a cost

It is, or at least has been, a different model of working: the biggest IT budgets allow huge resources to be applied to creating custom cost-saving solutions that can then be rolled out widely.

In contrast, enterprises have traditionally spent to get a more stable packaged solution that does not require PhD-level skills to operate and maintain. In essence, the service providers spend money on people to save money on technology, whereas for enterprises the calculation has been the other way around.

So, it’s not surprising that relatively few enterprises have successfully made the same leap to hyperscale cloud computing.

Most lack the advanced technical skills – indeed, they don’t have the service volume needed to amortise the cost of those skills – while others have too much invested in legacy apps and approaches. Then there are those who simply aren’t convinced it can work for them at their size.

Even if you plan to lean on the big cloud service providers for your move to hyperscale, there is a caveat: according to a recent study by cloud capacity planning specialist VMTurbo, small and medium-sized organisations have massively underestimated the cost of implementing cloud services.

Control and compliance problems

VMTurbo says there are a number of problems, notably that companies focus too much on the public cloud’s time-to-provision advantages and too little on issues of control, compliance and strategic vision.

In addition, they do not properly understand their existing infrastructure costs, so they have no baseline against which to compare.

VMTurbo argues that there are two important things an organisation must do. First, it needs a strategy for its evolving multi-cloud infrastructure, for example to cover what goes where.

Second, it needs not just the automated resource provisioning provided by the cloud orchestration frameworks, but also analytics that can dynamically automate placement, sizing and capacity to achieve the best performance and the most efficient use of resources.
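As a hedged illustration of what such analytics might do (not how VMTurbo or any other product actually works), the sketch below takes hypothetical utilisation samples for a workload and recommends a larger or smaller allocation based on peak observed demand plus headroom.

```python
# Illustrative right-sizing heuristic. The figures, thresholds and workload
# names are hypothetical; this is not any specific product's algorithm.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    allocated_vcpus: int
    cpu_samples: list          # observed utilisation, 0.0-1.0 of allocation

def recommend_sizing(w: Workload, headroom: float = 0.25) -> str:
    """Suggest an allocation that covers peak observed demand plus headroom."""
    peak_vcpus = max(w.cpu_samples) * w.allocated_vcpus
    target = max(1, round(peak_vcpus * (1 + headroom)))
    if target < w.allocated_vcpus:
        return f"{w.name}: shrink {w.allocated_vcpus} -> {target} vCPUs"
    if target > w.allocated_vcpus:
        return f"{w.name}: grow {w.allocated_vcpus} -> {target} vCPUs"
    return f"{w.name}: allocation already right-sized"

if __name__ == "__main__":
    fleet = [
        Workload("web-frontend", 8, [0.20, 0.35, 0.30, 0.25]),
        Workload("batch-etl", 4, [0.70, 0.95, 0.90, 0.85]),
    ]
    for w in fleet:
        print(recommend_sizing(w))
```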

Yet the truth is that going hyper could now pay off for many medium to large organisations – and fortunately, a range of other advances and developments is gradually making this a whole lot easier.

Newer technologies, such as enterprise flash storage and the growing popularity of software-defined architectures such as the Facebook-driven Open Compute project, are combining with converged architecture to make elements of the hyperscale approach a lot less custom and a lot more generally accessible.

The challenges of reorganisation

The challenge is that it is still a big change, involving a major reorganisation of IT skills and roles. It is also mostly suited to new builds (or at least new projects), rather than being a retrofit technology.

In particular, a lot of the efficiency increase happens at the datacentre design stage. For example, you can go chiller-free, switch to free-air cooling and medium-voltage power distribution, and run the datacentre at a higher temperature.

Research shows that raising the datacentre ambient temperature by one degree Celsius cuts the power bill by 2%.
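Taking that 2% figure at face value, a quick back-of-the-envelope calculation (with an entirely hypothetical baseline power bill) gives a feel for what a few degrees of extra headroom could be worth:

```python
# Back-of-the-envelope saving from running warmer, using the 2% per degree
# Celsius figure quoted above. The baseline bill is a made-up example.
baseline_annual_power_bill = 500_000.0   # hypothetical annual spend

for degrees_warmer in range(1, 6):
    # Treat the 2% reduction as compounding per degree; a linear reading
    # (2% x degrees) gives a very similar answer over small ranges.
    remaining = baseline_annual_power_bill * (0.98 ** degrees_warmer)
    saving = baseline_annual_power_bill - remaining
    print(f"+{degrees_warmer} degC: save roughly {saving:,.0f} per year")
```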

So if you are not mega-scale, and you are not about to build a new datacentre, how can you go hyper?

Going co-lo

The first answer is through someone else, either through one of those big cloud service providers, or perhaps more likely through a co-location facility.

Most co-los have been working very hard to improve their efficiency, not only because customers now look for it but also because it saves them money.

The second answer is that there is still a lot you could do at your next refresh cycle without going down the greenfield route.

There are modular hyper-converged systems from specialists such as SimpliVity and Nutanix and from big vendors such as Hewlett Packard Enterprise (HPE) and Dell, which promise hyperscale in a box (or perhaps more accurately, a series of boxes).

There are also software-based infrastructure platforms, most notably OpenStack, and there are products aimed at solving particularly problematic elements of the scale-out puzzle, such as Infinidat and Scality for hyperscale storage.

And of course there are many products to assist with provisioning and task placement, such as VMware VVols or DataCore’s SANsymphony, which dynamically manage storage at the virtual machine level.

Meanwhile, VMTurbo can add a capacity planning and analytics layer to the job of provisioning resources within a cloud-type environment.

Start small, lower the cost

Part of the attraction behind hyperscale is that you can start small to minimise the initial investment. Then, as you need more capacity, you add more nodes and the scale-out software seamlessly expands the resource pool.

This is, of course, where the hyper-convergence companies score by offering ease of management and ease of scale-out.

One caveat here is that in many hyperscale models, the maximum amount of compute power and memory available to a task is limited to what is available on the specific node it is running on.

Automated load distribution software can help to some degree by finding the most appropriate node for the task, but this type of hyperscale technology will not suit some compute or memory-intensive tasks. Examples might be large, mission-critical applications covering finance or ERP.
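For a feel of what that placement logic does, here is a minimal sketch of a best-fit node chooser. The node capacities and the scoring rule are illustrative only, not any vendor's scheduler, and a task that exceeds every node simply cannot be placed anywhere.

```python
# Illustrative best-fit placement: pick the node with the least spare capacity
# that still satisfies the task. Figures and scoring are made up; real
# schedulers weigh many more factors (affinity, network, licensing, and so on).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    free_cpus: int
    free_mem_gb: int

def place(task_cpus: int, task_mem_gb: int, nodes: list) -> Optional[Node]:
    """Return the tightest-fitting node, or None if no single node can host the task."""
    candidates = [n for n in nodes
                  if n.free_cpus >= task_cpus and n.free_mem_gb >= task_mem_gb]
    if not candidates:
        return None   # the task exceeds any single node: the hyperscale caveat above
    return min(candidates,
               key=lambda n: (n.free_cpus - task_cpus) + (n.free_mem_gb - task_mem_gb))

if __name__ == "__main__":
    nodes = [Node("node-a", 16, 64), Node("node-b", 4, 16), Node("node-c", 8, 32)]
    chosen = place(task_cpus=4, task_mem_gb=12, nodes=nodes)
    print("placed on:", chosen.name if chosen else "no single node large enough")
```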

The key is storage

Another caveat is the storage side. If you are using standard nodes – which is a key part of the simplicity and efficiency argument – the last thing you want to do is add another box with unneeded compute and memory, just to get more storage.

That most likely means you will need a separate tier of storage nodes, which adds complexity.

On the other hand, there is a growing class of applications well suited to hyperscale and hyper-converged environments. Typically these are web-native or cloud-native, are designed to run as distributed or containerised apps and use software-defined approaches for things such as high availability.

In addition, they might well use object storage, making them good candidates for deployment alongside hyperscale storage.
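For illustration, here is a minimal sketch of how such an application might read and write objects via an S3-compatible interface using the boto3 library; the endpoint, bucket, key and credentials are placeholders rather than real values.

```python
# Minimal sketch of object storage access via an S3-compatible API.
# Endpoint, bucket, key and credentials are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.internal",  # on-site S3-compatible store
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Store and retrieve data as objects addressed by key, rather than as files
# on a block device attached to one particular server.
s3.put_object(Bucket="app-data", Key="reports/latest.json", Body=b'{"status": "ok"}')
obj = s3.get_object(Bucket="app-data", Key="reports/latest.json")
print(obj["Body"].read().decode())
```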

The idea is that running these applications on a hyperscale private cloud, with the right automated provisioning and placement tools in place, should yield the agility and flexibility of the public cloud with the security and compliance of on-site IT. 

The result – if you can overcome the skills gap and manage the accompanying changes within IT – will be a much better fit with the complexity of modern workloads and the demands of web-focused users.
