Spoilt for choice: Enterprise flash deployment options

The question is no longer whether to use flash, but where in the enterprise server and storage infrastructure it should go

There are many reasons flash is the talk of the enterprise storage sector, and some of its emerging use-cases show why it is set to trigger a major shift in the way storage is designed, made and implemented.

Flash is now an everyday technology; over the past few years it has become the storage medium of choice for most end-user computing devices – from smartphones to tablets and laptops.

Now IT managers have the opportunity to bring enterprise flash into their own infrastructures, accelerating the performance of their key applications and in the process potentially increasing efficiency through power/cooling and real estate savings.

But, as with most aspects of IT, the enterprise infrastructure is much more complex than the consumer device world and one of the challenges IT managers must face is exactly where they should implement flash. The options are broad, and seemingly increasing all the time.

Flash deployment options

There are four major ways flash is being used in the datacentre. In each category there are refinements and variations, but broadly these are the buckets. As for all engineering solutions, there are advantages and disadvantages to each.

Flash in the server

The first option is to put flash in servers to store primary, working copies of data.

Flash can be installed in the server in two key ways. The first is as solid-state drives (SSDs) that replace disks in drive slots; the second is as flash cards in PCIe slots.

Compared with using flash in an external storage system, this gives better performance because data is right there in the server with no external network latencies. Use cases include storing relatively small databases, or index files of larger databases, or operating systems for fast boot, or other permanently hot data.

But the drive or card can present a single point of failure, and its capacity cannot easily be shared. And if you cannot store an entire database on the server flash, you have to involve a database administrator to decide which subset or index tables to load.

Flash as cache

The next option is to avoid the single point of failure by using flash in a server to store only a cache copy of frequently used data. The primary working copy is still stored in the SAN, where it continues to be protected by snapshots, replication and backups.

Caching software can be used to determine which data to load into flash. One drawback is that caching algorithms are not perfect at identifying hot data. Another is that caching systems are not always simple to deploy, especially in virtualised environments, and capacity cannot easily be shared unless a caching appliance is used, which potentially adds complexity.
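To see why caching algorithms miss some hot data, consider a minimal sketch of how such software might decide what to promote to flash. This is purely illustrative – the class and parameter names are hypothetical, and real caching products use far more sophisticated heuristics:

```python
from collections import Counter

class FlashCache:
    """Illustrative hot-data cache: a block is promoted to a small
    flash tier only after it has been read often enough. Hypothetical
    sketch, not any vendor's actual caching software."""

    def __init__(self, flash_slots, promote_after=3):
        self.flash_slots = flash_slots      # capacity of the flash tier
        self.promote_after = promote_after  # reads needed before promotion
        self.access_counts = Counter()      # per-block read counts
        self.flash = set()                  # blocks currently on flash

    def read(self, block):
        """Return 'flash' on a cache hit, 'disk' on a miss."""
        self.access_counts[block] += 1
        if block in self.flash:
            return "flash"
        if self.access_counts[block] >= self.promote_after:
            if len(self.flash) >= self.flash_slots:
                # Evict the cached block with the fewest reads
                coldest = min(self.flash, key=self.access_counts.__getitem__)
                self.flash.remove(coldest)
            self.flash.add(block)
        return "disk"

cache = FlashCache(flash_slots=2)
# The first reads of any block go to slow disk -- the cache only
# learns that a block is hot after the fact.
assert [cache.read("A") for _ in range(4)] == ["disk", "disk", "disk", "flash"]
```

The point of the sketch is the lag: the cache identifies hot data by observing reads, so the early reads of newly hot data are always served from disk.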

Flash as a storage tier in the array

The third option is to deploy flash in disk arrays. This has become the most common way to use flash in the datacentre, mostly because it is the most straightforward option. Indeed, more than half of the large enterprises we speak with are now deploying flash in their arrays.

This is consistent with what the suppliers are saying. And a little flash goes a long way; installing flash equivalent to around 5% of total capacity in a disk array can noticeably improve its performance, especially when used in combination with caching or automatic tiering software.

In both cases, flash capacity is shared by multiple applications, and the flash deployment is almost transparent to IT administrators, with little or no management overhead. 

One caveat is that latency is not reduced as much as it would be if flash were installed in servers. Also, tiering and caching do not always make the most of flash's virtues.

Tiering systems only move data periodically, which means data can be stuck on a lower tier of slow disk long after it has come into demand. A few enterprises have reported disappointing results with flash in disk arrays to the 451 Group.
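The periodic-movement problem can be sketched in a few lines. Again, this is an illustrative model with hypothetical names, not a description of any real array's tiering engine:

```python
class TieredArray:
    """Illustrative automated tiering: access counts are gathered
    continuously, but data only moves between tiers when a periodic
    rebalance runs (e.g. nightly). Hypothetical sketch."""

    def __init__(self, flash_slots):
        self.flash_slots = flash_slots
        self.flash = set()                # blocks currently on the flash tier
        self.reads_since_rebalance = {}   # access counts for this period

    def read(self, block):
        self.reads_since_rebalance[block] = \
            self.reads_since_rebalance.get(block, 0) + 1
        return "flash" if block in self.flash else "disk"

    def rebalance(self):
        """Promote the hottest blocks of the last period onto flash."""
        hottest = sorted(self.reads_since_rebalance,
                         key=self.reads_since_rebalance.get, reverse=True)
        self.flash = set(hottest[:self.flash_slots])
        self.reads_since_rebalance = {}

array = TieredArray(flash_slots=1)
array.read("A")        # "A" was hot during the last period...
array.rebalance()      # ...so the periodic rebalance puts it on flash
# Demand now shifts to "B", but "B" is served from slow disk until
# the next rebalance, however hot it becomes in the meantime.
assert array.read("B") == "disk"
assert array.read("B") == "disk"
array.rebalance()
assert array.read("B") == "flash"
```

Between rebalance runs, the array keeps serving yesterday's hot data from flash while today's hot data sits on disk – which is exactly the behaviour behind the disappointing results some enterprises report.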

All-flash arrays

The final option is a standalone storage system powered entirely by flash – the all-flash array. Storage is shared and data protected by conventional snapshots, replication and backup. There is no reliance on caching or tiering algorithms to decide which data to store in flash – instead, it is all stored in flash.

One drawback is the purchase cost; most all-flash arrays are expensive, though prices are dropping, and some suppliers claim to offer all-flash arrays at the price point of traditional midrange storage. Perhaps a bigger issue is that these are, with a few exceptions, very new systems that have yet to prove themselves in the market.

A tyranny of choice

But the decisions don't stop there; there are further considerations. For those looking at all-flash or hybrid flash systems, one big issue is whether to opt for a “legacy” storage system retrofitted with flash – which is how most flash is deployed today – or instead one of an emerging breed of next-generation storage systems designed from the ground up for flash. This is a hot topic in the industry, and tens of millions of dollars are being poured into startups that have developed such flash-optimised systems.

Why? Because taking an existing disk array and replacing disk drives with flash can be a poor engineering solution. Putting flash drives into slots designed to take disk drives can limit performance, because back-end connections designed for disk can throttle throughput on solid state drives. This is not a major issue when using flash with array-based tiering or caching, but when the entire array is powered by flash, it often will be.

Flash will be everywhere

All this explains why we have seen storage suppliers invest in acquiring or developing their own flash-optimised systems. This heralds a new era of storage architecture.

However, when you consider that most major suppliers have – or will soon have – flash-optimised all-flash arrays sitting alongside their legacy systems, it also complicates the decision-making process for buyers. We suspect most enterprises will not shift rapidly to these emerging architectures, especially if tiered and hybrid arrays are proving good enough for the time being.

However, we do think that these new systems will increasingly be chosen for new projects around performance-hungry apps, especially VDI, but also things like advanced data analytics.

So how do we see this playing out over the longer term? The reality is that many organisations will end up running flash at all points in the infrastructure, according to where it makes most sense from a performance, latency and price perspective.

This is why we are seeing storage suppliers invest in flash products and technologies across all four of the areas we outline above. We will see suppliers shift to demonstrating how they can best optimise flash resource utilisation across these deployment models; it’s still very early days, but that’s when things will get really interesting.  

Simon Robinson is analyst and vice president at The 451 Group. Tim Stammers is senior analyst at The 451 Group.