
Hybrid flash vs all-flash storage: When is some flash not enough?

When it comes to choosing between hybrid flash and all-flash storage, the question is increasingly not how much flash is enough, but whether you still need any disk at all

Let's start by taking a brief look at the available options. Much has been written in these pages about the growing acceptance and usefulness of flash storage within enterprise-grade arrays. Initially, flash-based solid-state drives (SSDs) were expensive, so the fastest and least costly way to take advantage of them was to retrofit them to an existing disk-based array – as a tier zero, for example.

As prices fell, tier one flash also became an option, especially for those with auto-tiering technology that could automatically move the hottest data onto the fastest tier. However, this still left the array optimised for spinning disks – although this is changing as suppliers update their array software – so other developers realised they could instead design an array optimised for flash, with secondary tiers of cheaper spinning disks for longer-term data.

Remember here that flash is not just a faster version of spinning disk – it is a fundamentally different medium. Yes, it can perform the same tasks as a disk drive, but it works differently, so if your array firmware and your applications continue to address it as a disk, you will be wasting much of the advantage available to you. For instance, applications have traditionally buffered writes to cope with disk latency; with flash, your developers no longer need to design around that.
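To make the difference concrete, here is a minimal, purely illustrative Python sketch of the two application-side write patterns; the class names and the 64KB threshold are invented for the example, not taken from any particular product.

    import os

    BATCH_THRESHOLD = 64 * 1024  # hypothetical: coalesce small writes to hide seek latency

    class BatchedWriter:
        """Disk-era pattern: buffer small writes, then flush in large sequential chunks."""
        def __init__(self, path):
            self.fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
            self.buf = bytearray()

        def write(self, data):
            self.buf += data
            if len(self.buf) >= BATCH_THRESHOLD:
                self.flush()

        def flush(self):
            if self.buf:
                os.write(self.fd, bytes(self.buf))
                self.buf.clear()

    class WriteThroughWriter:
        """Flash-era pattern: low, consistent latency makes small direct writes affordable."""
        def __init__(self, path):
            self.fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)

        def write(self, data):
            os.write(self.fd, data)  # no application-level coalescing

The point is not that either pattern is wrong, but that firmware and applications built around the first one leave much of flash's advantage unused.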

Lastly, there are the all-flash arrays with no spinning disks at all. Initially these – just like their DRAM-based forerunners from Digital Equipment, Texas Memory Systems and others – targeted the most performance-hungry and latency-sensitive applications where cost was much less of an issue, but as time went by and flash costs fell further, the all-flash approach started to make sense for a much broader spread of enterprise applications.

This last process was greatly assisted by the adoption of denser and therefore cheaper flash technologies such as multi-level cell (MLC) and triple-level cell (TLC), which store two and three bits per cell respectively, and have enabled the creation of capacity and performance-optimised variants of flash. The denser chips are less reliable, but as in several other areas of technology, we can use software to more than compensate for this.
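As a back-of-envelope illustration of why bits per cell matter, the arithmetic below uses an invented round number for the cell count; the two and three bits per cell for MLC and TLC come from the text above, and single-level cell (SLC) at one bit per cell is included only for comparison.

    # Hypothetical round numbers, for illustration only
    cells_per_die = 64 * 10**9   # an imaginary die with 64 billion cells

    for name, bits_per_cell in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
        capacity_gib = cells_per_die * bits_per_cell / (8 * 2**30)
        print(f"{name}: {capacity_gib:.1f} GiB per die")

The same silicon yields two or three times the capacity, which is where the cost per gigabyte falls; the software's job is then to mask the higher error rates of the denser cells.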

Another factor making all-flash arrays easier to adopt is that, while they initially lacked the sophisticated storage management capabilities of the established disk arrays, this is no longer the case. Some may even exceed the capabilities of disk arrays, especially in areas where flash excels, such as continuous data protection (CDP), guaranteed quality of service and the creation of snapshots that consume virtually no extra capacity.

Analyse the workload

So why would you still want a hybrid array, and why would you choose a purpose-designed hybrid over a retrofitted one? To start answering those questions, you must analyse the workload. The more random it is – and virtual desktop infrastructure (VDI) is highly random, for example – the more appropriate all-flash will be.

Customer-facing online transaction systems and databases are also sweet spots for all-flash, as is virtualisation in general. Sequential workloads and those with large proportions of cold data are a different matter. Essentially, though, what we used to call cache hits and misses still matter, and with all-flash everything can be a cache hit.
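If you want a rough first-pass measure of that randomness before talking to suppliers, a crude sketch along these lines can help; the (offset, size) trace format and the sample numbers are assumptions for illustration only.

    def random_fraction(trace):
        """trace: list of (offset_bytes, size_bytes) tuples in arrival order."""
        if len(trace) < 2:
            return 0.0
        random_ios = 0
        prev_end = trace[0][0] + trace[0][1]
        for offset, size in trace[1:]:
            if offset != prev_end:   # request does not follow on from the previous one
                random_ios += 1
            prev_end = offset + size
        return random_ios / (len(trace) - 1)

    # A VDI-like trace will score close to 1.0, a backup stream close to 0.0
    sample = [(0, 4096), (4096, 4096), (1_000_000, 4096), (8192, 4096)]
    print(f"Random fraction: {random_fraction(sample):.2f}")

Anything that scores high on a measure like this, or that is dominated by hot data, points towards all-flash; long sequential runs and large cold datasets point towards keeping a disk tier.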

With raw flash capacity costing an average of perhaps $5 per GB, hybrid arrays also tend to be cheaper, but cost comparisons between disk and flash are nuanced. For instance, flash is more compact and less power-hungry, so what you win on the purchase price of a hybrid you might lose on its operating costs and the space it takes up.

Be aware too that while some all-flash suppliers are already claiming price parity, this is hard to assess because of the effects of the data reduction technologies used to get more data into a given volume of flash. These technologies – the primary ones being data deduplication and compression – can reduce the effective price per GB by a factor of 5:1 or more (to $1 per GB or less), but their performance can vary considerably depending on the type of data.
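The arithmetic behind those claims is simple enough to check yourself; the $5 per GB and 5:1 figures are the ones quoted above, and any other ratio you plug in should come from a proof of concept against your own data rather than from a datasheet.

    def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
        """Effective $/GB once deduplication and compression are factored in."""
        return raw_cost_per_gb / reduction_ratio

    print(effective_cost_per_gb(5.0, 5.0))   # 5:1 reduction -> $1.00 per GB
    print(effective_cost_per_gb(5.0, 2.0))   # data that reduces poorly -> $2.50 per GB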


As an aside, this is why any decent flash-enabled array needs to include a variety of data reduction tools. For example, data deduplication will work much better on virtual machine images or some file-based tasks than on databases, while compression will work better on databases than on photos or videos. In addition, some arrays allow you to turn these features off on a per-LUN basis – look out for this if you plan to consolidate multiple applications onto a flash array, as your most latency-sensitive applications may need data reduction turned off.

The advantage of purpose-designed hybrids over retrofits is primarily that the former should accelerate all I/O, not just the data placed on the flash tier. However, this is increasingly true of all hybrid arrays, thanks to improved data management software which automatically promotes the hottest data to flash. Indeed, with some systems you must actively pin data to the flash tier if you want it to stay there regardless of its “temperature”.
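A hedged sketch of what that promotion and pinning logic amounts to is shown below; the thresholds, names and block-level abstraction are invented for illustration and do not describe any particular supplier's implementation.

    # Minimal sketch of temperature-based tiering with pinning
    PROMOTE_AT = 10   # accesses per interval before a block moves up to flash
    DEMOTE_AT = 2     # at or below this, an unpinned block drops back to disk

    class TieringEngine:
        def __init__(self):
            self.access_counts = {}   # block_id -> hits in the current interval
            self.on_flash = set()
            self.pinned = set()

        def record_access(self, block_id):
            self.access_counts[block_id] = self.access_counts.get(block_id, 0) + 1

        def pin(self, block_id):
            """Keep a block on flash regardless of its 'temperature'."""
            self.pinned.add(block_id)
            self.on_flash.add(block_id)

        def rebalance(self):
            for block_id, hits in self.access_counts.items():
                if hits >= PROMOTE_AT:
                    self.on_flash.add(block_id)
                elif hits <= DEMOTE_AT and block_id not in self.pinned:
                    self.on_flash.discard(block_id)
            self.access_counts.clear()   # start a fresh measurement interval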

Many hybrids use flash as a read-only cache, while others use it as a write cache too, in part to speed up access to slower hard drives. Writes may be cached in RAM until they can be written as an entire flash page, or as sequential input/output (I/O).
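In outline, that write path looks something like the sketch below; the 16KB page size and the flash_device interface are assumptions, and real arrays also protect the data held in RAM with batteries or NVRAM.

    PAGE_SIZE = 16 * 1024   # hypothetical flash page size

    class WriteCache:
        """Accumulate small writes in RAM, destage a whole flash page at a time."""
        def __init__(self, flash_device):
            self.flash = flash_device    # anything exposing write_page(bytes)
            self.pending = bytearray()

        def write(self, data):
            self.pending += data
            while len(self.pending) >= PAGE_SIZE:
                self.flash.write_page(bytes(self.pending[:PAGE_SIZE]))
                del self.pending[:PAGE_SIZE]

Destaging in whole pages, or as long sequential runs, plays to the strengths of both the flash and the disks behind it.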

Interoperability with existing infrastructure

The question is complicated further by whether or not the new all-flash or hybrid array must interoperate with an existing storage infrastructure. For example, if your current disk-based NetApp filers are coming off maintenance, you could consider going all-flash; otherwise you will want at least some interoperability. Adding flash to create a hybrid may be preferred; alternatively, if your filers are clustered, check whether you can add an all-flash unit to the cluster, in effect turning the whole thing into a hybrid.

Scalability and the upgrade path are also important. How granular are the all-flash options? Will you end up paying for a bigger system than you need, and will it need a forklift upgrade when it reaches capacity?

In summary, there is little argument now that the first tier should be flash. Major storage suppliers already report that shipments of 15,000rpm disks have almost entirely been replaced by SSDs. The question is what do we choose for the second (and perhaps third and fourth) tiers – disk, or more flash?

Hard disk could be better for some tasks, such as media streaming, and hard disk technology is still evolving, but its properties change as it does so. For example, the latest high-capacity shingled media is good for reading, but difficult to write to and delete from, making it more of an archive medium.

Flash too is a read-optimised technology that presents additional challenges when you need to write to it, such as the requirement to erase used space before rewriting to it. And while flash is moving to denser yet more reliable 3D cell structures, current NAND flash technologies may only have a couple of generations left. Several more non-volatile technologies are waiting in the wings, though, such as magnetoresistive RAM, ferroelectric RAM and phase-change RAM.

Flash and its successors will therefore migrate to more and more tiers. Disk will still have its uses and advantages but, as with tape before it, those uses will become ever more narrowly defined.


Join the conversation


Nice article. We're definitely transitioning here, with our first hybrid drives now in a few test machines. So far the preliminary results have been impressive and we've seen a zero failure rate.

Once everyone from the back office to the front is comfortable with the idea, especially with flash prices in a steady decline, I suspect we'll begin replacing drives, if not wholesale, at least as their replacement rotation (sic) comes up.

Despite the obvious benefits and the maturity of the technology, we move slowly. There's still a stockroom where we keep our floppies and a few spare drives.
Great article. I would like to add that the fast decline in the cost of flash is currently driven primarily by 3D NAND and triple-level cell (TLC) NAND. These approaches guarantee that the density of SSDs will continue to grow rapidly, and the cost of flash media will continue to drop rapidly, for the next five to seven years. Contemporary flash technology enhancements driven by 3D and TLC NAND bring down the cost of SSD storage. A key element is the ability of all-flash array technology to deploy the high-density SSDs within the same architecture, letting the business scale its storage systems cost-effectively. This also means products can take advantage of the most optimized hardware elements as they become available and scale in the most cost-efficient way. In practice we will soon see such solutions with an average effective cost below $1/GB. That is more cost-efficient than hybrid arrays, and customers gain the benefit of not compromising on performance consistency (which is the main challenge of any hybrid array product). The market is going all-flash for all active data, which is quickly becoming the best-performing and most cost-efficient option, so there are no more reasons to compromise.
I'm not sure that 3D NAND and even TLC have had that much of an impact yet – they're both just becoming available. But I do agree that they will definitely help bring down flash prices. The key tech development over the last year or so is smaller die size – for example, 15 or 16 nanometer dies – which allows the solid-state storage makers to put more capacity in a smaller space. SSD and PCIe flash capacities have soared over the last 18 months or so.

And I agree – very good article, Bryan – thanks.
Rich,

It is great to see that we are all in agreement that flash costs are dropping very fast, which makes all-flash arrays very cost-efficient for active data. We also see that some of the hybrid suppliers are trying to convert to an all-flash solution, since they also understand this.

Regarding 3D and TLC, I must say that they are already having a great impact on all-flash array cost. We at Kaminario are already using 3D MLC SSDs, and later this year we will start delivering systems with 3D TLC SSDs. SSD vendors such as Samsung are today delivering 3D TLC SSDs with better price and endurance compared with 16nm MLC SSDs. This is dramatic for all-flash array adoption.
