Flash: it’s not all over yet

We look at the need for new approaches to storage as ever-increasing demands are placed on all-flash arrays

It only seems a short while back that the incumbent storage suppliers were pouring scorn on the new kids on the block, the all-flash array (AFA) suppliers.

Now, Dell EMC, HPE, IBM, Hitachi and NetApp all offer AFAs alongside the young pretenders such as Pure Storage, Kaminario, Tintri and Tegile.

During this tumultuous time, we have also seen what looked like good, solid companies being acquired (SolidFire by NetApp and XtremIO by EMC, for example), while others, such as Violin Memory, have faltered because great technical architectures proved difficult to move with the times.

Quocirca remembers discussions with the early AFA suppliers around what the future for flash would be.

Several of them saw no need for any form of performance tiering in flash storage systems, because the difference in performance between flash and spinning disk was so startling that tiering did not seem to make sense.

When it was pointed out to these suppliers that the same was said when the move from tape to spinning disk started, looks of confusion were the frequent response.

Not the be-all and end-all

Even in the early days of AFAs, it was apparent that this was not going to be the be-all and end-all of the storage wars. Violin Memory tried to redefine the concept of the “disk”, replacing it with Violin Intelligent Memory Modules (VIMMs). This was a great architecture, but without traction from third-party original equipment manufacturers (OEMs), it has been unable to follow the falling prices of the standard 3.5in and 2.5in solid-state disks (SSDs) used by the other suppliers.

The result of the rapid “dash to flash”, though, has effectively been to raise the bar. As more primary workloads have moved to AFAs, the competitive edge in performance gained by the early adopters is beginning to fade. Suppliers are finding it difficult to squeeze an appreciable amount of further performance from their AFAs.

For example, consider a spinning disk array with a data latency of, say, 40ms. That sounds fast, but as disks got quicker, suppliers managed to squeeze this down to, say, 20ms – a 50% improvement. Combine this with better read/write performance as well, and input/output operations per second (IOPS) moved significantly.

Latency in microseconds

Now, look at the same performance with an all-flash array. Latency moves down into the microseconds. Assume the first AFA you purchase has a data latency of 400µs. That initial improvement of two orders of magnitude (from 40ms down to 400µs) is bigger than anything a storage administrator has seen in living memory.

Then, if suppliers improve AFA performance from 400µs to 200µs, it is still a 50% improvement, but the gain may be constrained more by the storage interconnects and by the servers’ ability to deal with such a fast data stream.
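To put some numbers on that diminishing return, here is a quick back-of-envelope calculation using the same illustrative figures as above (40ms, 20ms, 400µs and 200µs). These are examples, not measurements of any particular array.

```python
# Illustrative arithmetic only, using the example figures from the text,
# not benchmark results for any particular array.

def improvement(old_latency_s: float, new_latency_s: float) -> tuple[float, float]:
    """Return (percentage improvement, absolute time saved per I/O in seconds)."""
    return (1 - new_latency_s / old_latency_s) * 100, old_latency_s - new_latency_s

steps = [
    ("spinning disk 40ms -> 20ms", 40e-3, 20e-3),
    ("disk 40ms -> first AFA 400us", 40e-3, 400e-6),
    ("AFA 400us -> 200us", 400e-6, 200e-6),
]

for label, old, new in steps:
    pct, saved = improvement(old, new)
    print(f"{label}: {pct:.0f}% faster, {saved * 1e6:,.0f}us saved per I/O")
```

Both the disk-to-disk and the AFA-to-AFA steps show up as “50% improvements”, but the absolute time saved per I/O shrinks from 20,000µs to just 200µs, which is why the interconnects and the servers, rather than the media, increasingly become the limiting factor.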

Plus, making that leap is not as easy as it was with spinning disk. There are no mechanical parts to fiddle with to optimise performance; any major improvement at the flash substrate level requires a new substrate fabrication plant – and building that is more expensive than a new magnetic disk plant.

Yet the demand for faster systems is still there. To this end, new storage approaches are having to be investigated. Sure, advances in substrates are still going ahead: the move to 3D NAND memory has driven higher storage densities and, to a degree, better performance, and Intel/Micron’s 3D XPoint/Optane memory promises to drive performance even higher.

Struggling to keep pace

The constraints of the networked, array-based approach are showing, though. iSCSI, Ethernet, Fibre Channel and even InfiniBand are struggling to keep pace with the amount of data that a well-engineered, high-speed 3D NAND array can serve to the network.
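For a rough feel for the mismatch, the sketch below compares the aggregate read throughput a shelf of fast SSDs could generate against typical front-end port speeds. All the figures are assumed, round ballpark numbers for illustration, not specifications of any product.

```python
# Back-of-envelope comparison using assumed ballpark figures (GB/s);
# they illustrate the mismatch, they are not vendor specifications.

drives_per_shelf = 24
per_drive_read_gbs = 3.0        # assumed: a fast NVMe-class SSD, sequential read
fc_32g_port_gbs = 3.2           # 32Gbit/s Fibre Channel, roughly 3.2 GB/s per port
eth_100g_port_gbs = 12.5        # 100Gbit/s Ethernet, roughly 12.5 GB/s raw per port

aggregate = drives_per_shelf * per_drive_read_gbs
print(f"Aggregate flash throughput: ~{aggregate:.0f} GB/s")
print(f"32G FC ports needed to keep up: ~{aggregate / fc_32g_port_gbs:.0f}")
print(f"100G Ethernet ports needed:     ~{aggregate / eth_100g_port_gbs:.0f}")
```

Even with generous allowances for these rough numbers, the back end can comfortably outrun any realistic number of front-end ports, so the bottleneck moves off the media and onto the wire.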

How to get around this? Server-side storage. Here, non-volatile memory is used to store data as close to the CPU as possible, using PCIe cards, M.2 systems or even DIMM slots. Already, converged and hyperconverged infrastructure (HCI) systems are moving to use such storage, “hidden” within the engineered system itself.

The first systems used spare PCIe slots in servers to hold specialised cards containing non-volatile memory (NVM). Data was stored and retrieved from these cards at computer bus speeds, rather than at LAN speeds. Suppliers such as Intel, OCZ, Fusion-io and Texas Memory Systems (TMS) were early pioneers in this space. Fusion-io was acquired by SanDisk in 2014; TMS by IBM in 2012.

But there were problems with the early systems. A SAN or NAS storage array can have multiple redundancy approaches built into it, ranging from RAID or erasure coding to direct data mirroring across systems. PCIe storage meant all the data eggs were in one basket – if the server failed, any data held on that card at the time could be lost with it.
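The sketch below illustrates the principle such arrays rely on and a lone card lacks: simple XOR parity (in the RAID-5 style) spread across several devices, so any single device can fail and be rebuilt from the survivors. It is a toy illustration, not any supplier’s actual erasure code.

```python
# Minimal XOR-parity sketch (RAID-5 style): lose any single device and
# rebuild its contents from the survivors. A lone PCIe flash card has no
# equivalent - lose the card (or the server) and the data goes with it.
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # data stripes held on three devices
parity = xor_blocks(data)            # parity stripe held on a fourth device

# Simulate losing device 1 and rebuilding it from the remaining devices.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]
print("rebuilt device 1:", rebuilt)
```

Real arrays use far more sophisticated schemes, but the point stands: redundancy needs more than one failure domain, which a single card inside a single server cannot provide on its own.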

Suppliers such as PernixData tried to address the issue with a supplier-agnostic, software-based approach. However, PernixData was acquired by software and hardware enterprise cloud supplier Nutanix in 2016 and subsumed into Nutanix’s software suite.

As the need for data speed has increased, more firms are working on how to optimise the use of server-side storage. Alongside those trying to figure out how to deal with the weak links of such storage are those who have taken a slightly different approach.

One example is Diablo Technologies, which has taken server-side storage to its current logical limit. It has developed non-volatile DIMMs (NVDIMMs), while driving standardisation through the industry bodies JEDEC and SNIA. Diablo uses the NVDIMMs as byte-addressable, non-persistent storage – it recognises the difficulty of providing a completely fault-tolerant server-side flash storage system, opting instead to provide the best possible data manipulation and analysis speeds.
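To illustrate what byte-addressable means in practice, the sketch below memory-maps a region of non-volatile memory and updates individual bytes in place, with no block I/O stack in the path. It is a generic illustration, not Diablo’s own software; the /dev/pmem0 device path is hypothetical, and a plain file can stand in for it when experimenting.

```python
# Illustrative sketch of byte-addressable access via mmap - not Diablo's
# software. /dev/pmem0 is a hypothetical persistent-memory device node;
# point PATH at an ordinary file to try the mechanics out.
import mmap
import os

PATH = "/dev/pmem0"   # hypothetical device path
LENGTH = 4096

fd = os.open(PATH, os.O_RDWR)
try:
    buf = mmap.mmap(fd, LENGTH)           # map the region into the address space
    buf[128:132] = b"\xde\xad\xbe\xef"    # update four bytes in place - no block read/modify/write
    print(bytes(buf[128:132]).hex())      # reads are simple loads from the mapping
    buf.flush(0, LENGTH)                  # ask the OS to push the update to the media
    buf.close()
finally:
    os.close(fd)
```

The contrast with a PCIe or SAS/SATA device is that there is no I/O request, queue or driver round trip per access; the trade-off, as Diablo acknowledges, is that protecting such storage against server failure is much harder.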

But this could change. The non-uniform memory access (NUMA) architecture of a highly parallelised NVDIMM storage system does lend itself to the possibility of a highly efficient, erasure-code style of data resilience across multiple servers connected via high-speed interconnects. In other words, high-speed, resilient, highly available, persistent server-side storage may not be that far away.

Also, Diablo says it has abstracted as much of its technology as possible without affecting performance. As such, when (and if) Intel/Micron’s delayed 3D XPoint substrate starts to ship in commercial volume, Diablo will be able to embrace the technology and make the most of the extra speed it offers.

Mix of storage types

Another possible candidate is the cloud. Increasingly, AWS, Microsoft Azure and other public clouds are putting in place high-speed data manipulation and analysis systems, using a mix of storage types controlled directly by the cloud provider. As long as the business logic and the data storage systems are co-located, this could be a good solution for those struggling to keep pace with the changes in the storage landscape.

Overall, the data speed race is one that cannot be won outright. No supplier will be able to meet users’ needs through persistent storage alone – even if Intel managed to build terabytes of storage into its CPUs as Level 1 cache. Users want absolute real-time data analysis, and even on-chip cache cannot provide this.

With so many different ways of providing non-volatile memory systems, there is a stronger need than ever to provide intelligent data caching and tiering capabilities. For many (such as Diablo), this is where the “secret sauce” comes in.

Nanosecond timescales

With spinning disk storage, latencies were long enough that software could easily outperform the disk, optimising the data within the milliseconds in which the disks operated. Now that flash deals in microsecond and even nanosecond timescales, poorly written or poorly implemented caching and tiering software can actually slow a system down – drastically.
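As a crude illustration of what such software has to get right, the sketch below promotes recently read items into a small “hot” tier and demotes the least recently used item back to a “cold” tier. The structure and names are invented for illustration, not any supplier’s algorithm; the real point is that this bookkeeping must cost far less than the latency gap it is exploiting.

```python
# Crude hot/cold tiering illustration - invented structure, not any
# supplier's algorithm. The bookkeeping here must cost far less than the
# latency gap between tiers, or the tiering hurts rather than helps.
from collections import OrderedDict

class TwoTierStore:
    def __init__(self, hot_capacity: int):
        self.hot = OrderedDict()   # small, fast tier (e.g. NVDIMM/NVMe), kept in LRU order
        self.cold = {}             # larger, slower tier
        self.hot_capacity = hot_capacity

    def read(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)          # refresh recency on a hot hit
            return self.hot[key]
        value = self.cold[key]                 # slow-path read from the cold tier
        self._promote(key, value)
        return value

    def _promote(self, key, value):
        self.hot[key] = value
        if len(self.hot) > self.hot_capacity:  # evict the least recently used item
            old_key, old_value = self.hot.popitem(last=False)
            self.cold[old_key] = old_value     # demote it back to the cold tier

store = TwoTierStore(hot_capacity=2)
store.cold.update({"a": 1, "b": 2, "c": 3})
print(store.read("a"), store.read("b"), store.read("c"))   # reading "c" demotes "a"
```

Each read here costs a handful of dictionary operations; scale that up to millions of I/Os per second against media responding in microseconds and the overhead of badly designed placement logic quickly dominates.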

As a storage buyer, accept that there will still be major steps forward in storage performance across SAN, NAS and server-side environments, and make sure that whatever you choose enables you to embrace what could come down the line.

Unless it is unavoidable, or there are distinct, major business benefits in doing so, do not choose systems that will require a forklift upgrade and a major change to the infrastructure in the future.

Clive Longbottom is the founder of analyst group Quocirca.

This was last published in March 2017
