
NVMe flash storage: What are the deployment options?

NVMe promises to unleash the performance potential of flash storage, which is held back by spinning disk-era SAS and SATA protocols. We run through the key NVMe deployment options.

Persistent storage is an essential requirement of all computer systems. It provides the ability for data to be retained across server reboots where data in memory would normally be lost.

For the past 60 years, we’ve relied on tape and hard drives as the primary location for our data. Neither medium offers the performance of flash storage, however, so we’re seeing a transition to using flash as the storage medium of choice for the datacentre.

Now, we are seeing flash connected via NVMe, which promises to unlock the full potential of solid-state storage. But what are the current deployment options? And what obstacles lie in the way of full-featured NVMe-powered storage arrays?

Legacy protocols: SATA, SAS, Fibre Channel, SCSI and AHCI

Storage devices connect to the server internally using physical interfaces such as serial-attached SCSI (SAS) or serial-ATA (SATA) and externally to shared storage using Ethernet or Fibre Channel.

These protocols define the physical transport, while at a higher level, SCSI (via SAS or Fibre Channel) and AHCI (for SATA) are used to store and retrieve data.

Both small computer system interfaces (SCSI) and advanced host controller interfaces (AHCI) were designed at a time when standard storage devices (hard drives and tape) were capable of dealing with a single input/output (I/O) request at a time.

Tapes are linear, while hard drives must position a read/write head to access data from a track on disk. As a result, traditional storage protocols were designed to manage a single storage request queue. 

For AHCI this queue has a depth of 32 commands; for SCSI the supported queue depth is much higher but limited by specific protocol implementations, typically anything from 128 to 256.

Looking more closely at hard drives, both protocols use techniques to improve throughput in their queues – tagged command queuing (TCQ) for SCSI and native command queuing (NCQ) for SATA – that re-order the processing of I/O.

This re-ordering uses on-board cache to buffer requests, while optimising the physical read/write from the physical storage medium. TCQ and NCQ mitigate some of the physical characteristics of accessing a hard drive, generally improving overall throughput, but not latency.
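The idea behind this re-ordering can be sketched in a few lines. The following is a simplified illustration of the elevator-style scheduling that TCQ/NCQ-class techniques use – serving queued requests in one sweep of the read/write head rather than in arrival order. The LBA values and head position are made-up example figures, not drawn from any real drive.

```python
# Illustrative sketch of the idea behind TCQ/NCQ re-ordering: serve queued
# requests in a single sweep across the platter instead of seeking back and
# forth in arrival order. LBAs and head position are hypothetical examples.

def elevator_order(requests, head_pos):
    """Serve queued LBAs in one sweep upward from the head, then back down."""
    above = sorted(lba for lba in requests if lba >= head_pos)
    below = sorted((lba for lba in requests if lba < head_pos), reverse=True)
    return above + below

queued = [7200, 150, 3900, 8800, 410]   # pending request LBAs (hypothetical)
print(elevator_order(queued, head_pos=4000))
# one upward sweep (7200, 8800), then a downward sweep (3900, 410, 150)
```

Note that this helps aggregate throughput (less head movement overall) but does nothing for the latency of any individual request, which matches the observation above.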

Flash and the protocol bottleneck

As we make the transition to flash, the storage protocol starts to become a bottleneck.

Flash is solid-state storage with no moving parts and can handle many more simultaneous I/O requests than the currently commonplace SAS and SATA connections allow.

But SATA is limited to 6Gbps throughput and SAS to 12Gbps, although some performance improvements are on the horizon.

The biggest issue, however, is the ability to process a relatively small number of queues. And as flash drives increase in size, that bottleneck will become more of a problem.

Enter NVMe

To resolve the storage protocol performance problem, the industry introduced a new device interface specification and protocol known as non-volatile memory express (NVMe) that uses the PCI Express (PCIe) bus.

NVMe flash drives are packaged as PCIe expansion cards, as 2.5in drives using a connector format called U.2, or as small plug-in M.2 modules. The PCIe 3.0 bus provides high bandwidth of approximately 984MBps per lane, so a typical PCIe 3.0 x4 (four-lane) device has access to around 3.94GBps of bandwidth.
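Those bandwidth figures follow directly from the PCIe 3.0 signalling rate. A quick back-of-the-envelope check, using the published 8GT/s line rate and 128b/130b encoding:

```python
# Back-of-the-envelope PCIe 3.0 bandwidth check: 8 GT/s per lane with
# 128b/130b line encoding yields roughly 984.6 MB/s of usable bandwidth
# per lane, and just under 4 GB/s for a four-lane (x4) device.
GT_PER_S = 8e9                  # 8 gigatransfers per second per lane
ENCODING = 128 / 130            # 128b/130b line-encoding efficiency
per_lane_MBps = GT_PER_S * ENCODING / 8 / 1e6   # bits -> bytes -> MB
x4_GBps = per_lane_MBps * 4 / 1000

print(round(per_lane_MBps))     # ~985 MB/s per lane
print(round(x4_GBps, 2))        # ~3.94 GB/s for an x4 device
```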

NVMe also addresses the I/O queue depth issue and supports up to 65,535 queues, each with a queue depth of 65,535 commands. Internally, NVMe addresses some of the parallelism issues of legacy storage protocols by implementing more efficient interrupt processing and reducing the amount of internal command locking.
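The scale of that change is worth spelling out. Using the queue figures above, the aggregate number of outstanding commands NVMe permits dwarfs AHCI's single 32-command queue:

```python
# Aggregate command-slot comparison, using the figures quoted in the text:
# AHCI exposes one queue of 32 commands; NVMe allows up to 65,535 queues,
# each with a queue depth of 65,535 commands.
ahci_slots = 1 * 32
nvme_slots = 65_535 * 65_535

print(ahci_slots)                  # 32 outstanding commands
print(nvme_slots)                  # 4,294,836,225 outstanding commands
print(nvme_slots // ahci_slots)    # over 134 million times more
```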

The benefits of NVMe can be seen by looking at some device specifications.

Intel’s SSD DC P4600, for example, which uses PCIe NVMe 3.1 x4, offers sequential throughput of 3,270MBps (read) and 2,100MBps (write) with 694,000 (read) and 228,000 (write) IOPS.
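Those numbers hang together. Random IOPS are conventionally quoted at a 4KB transfer size, so multiplying IOPS by 4KB should come in below the sequential throughput ceiling – which it does:

```python
# Sanity check on the P4600 figures: random IOPS are typically quoted at a
# 4KB transfer size, so IOPS x 4KB should land below the sequential ceiling.
BLOCK = 4096                          # 4KB, the conventional random-I/O size
read_GBps = 694_000 * BLOCK / 1e9     # bandwidth implied by random reads
write_GBps = 228_000 * BLOCK / 1e9    # bandwidth implied by random writes

print(round(read_GBps, 2))   # ~2.84 GB/s, under the 3.27 GB/s sequential figure
print(round(write_GBps, 2))  # ~0.93 GB/s, under the 2.10 GB/s sequential figure
```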

Looking past NAND flash to newer technologies such as Intel’s Optane, we can expect to see even better performance figures once they are fully published. Intel already claims 500,000 IOPS with Optane under a 70/30 read/write workload.

NVMe in the server

There are three ways in which NVMe can be implemented to improve storage performance.

The first is to use NVMe drives in the server.

Obviously, the server needs to support the devices physically and at the BIOS level. The operating system also needs to support NVMe, but pretty much all modern OS versions already support NVMe natively. Platforms such as VMware’s Virtual SAN have supported NVMe for more than 18 months and offer a good performance upgrade path.

NVMe in the array

A second option is for storage suppliers to support NVMe drives in their products.

Today, most storage arrays comprise server hardware with some custom components. Replacing SAS drives with NVMe drives can boost performance and throughput. Array suppliers have made such changes many times before – for example, in moves from SATA to SAS and Fibre Channel drives as a way of improving performance, resiliency and simplicity. This has also required an upgrade to internal components, such as storage controllers.

Recently, storage array suppliers have started to announce NVMe support for their products. Pure Storage released the //X range of FlashArray products in April 2017 with NVMe support. HPE has announced that the 3PAR platform will provide NVMe support for back-end connectivity. NetApp provides NVMe flash storage as a read cache (Flash Cache) on its latest hardware platform.

The array controller as a bottleneck to NVMe

But the use of NVMe within storage arrays presents a challenge to storage suppliers, in that the array operating system software starts to become the bottleneck within the system.

When hard drives were slow, software could afford to be relatively inefficient, but we’ve started to see suppliers forced to try to adapt to faster storage hardware.

EMC, for example, had to rewrite the code for VNX in 2013 (codenamed MCx) to support multi-processing and as a foundation for faster devices. Also, DataCore introduced a technology called Parallel I/O in 2016 to enable its software-defined storage solutions to take advantage of the increase in performance of server and storage hardware.

A key challenge for storage array suppliers will be to demonstrate the benefits of NVMe, first in terms of being able to exploit the faster devices and second in translating this improvement into a financial benefit. A small incremental performance uptick won’t be enough to convince customers that a move to faster technology is justified.

NVMe-over-Fabrics

A third option for NVMe deployment is to use NVMe-over-Fabrics (NVMf, officially abbreviated NVMe-oF).

This is a way of carrying motherboard/drive-level NVMe commands over longer distances. It is similar to the way SCSI is wrapped in Fibre Channel or Ethernet for physical transport, through the Fibre Channel and iSCSI protocols respectively.

With NVMf, the NVMe protocol is wrapped in remote direct memory access (RDMA) or Fibre Channel, with the former offering physical connectivity over InfiniBand or RDMA over Converged Ethernet (RoCE).
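Conceptually, the fabric wrapping can be pictured as a "capsule" that carries an unmodified NVMe command across the network, which the target unwraps and executes. The sketch below illustrates that layering only – the field names are simplified and hypothetical, not the real NVMe-oF wire format:

```python
# Loose illustration of the NVMe-oF layering: the same NVMe command is
# wrapped in a transport "capsule" and carried over a fabric. Field names
# here are simplified/hypothetical, NOT the actual wire format.
from dataclasses import dataclass

@dataclass
class NvmeCommand:
    opcode: int         # e.g. 0x02 for a read (illustrative)
    namespace_id: int
    lba: int
    num_blocks: int

@dataclass
class FabricCapsule:
    transport: str      # "rdma" or "fc" in the two standard bindings
    command: NvmeCommand

cmd = NvmeCommand(opcode=0x02, namespace_id=1, lba=2048, num_blocks=8)
capsule = FabricCapsule(transport="rdma", command=cmd)

# The target unwraps the capsule and executes the inner command unchanged.
assert capsule.command == cmd
print(capsule.transport, hex(capsule.command.opcode))
```

The point of the layering is the same as with SCSI over Fibre Channel or iSCSI: the storage command set is untouched, and only the transport around it changes.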

These solutions provide two interesting scenarios.

First, today’s Fibre Channel networks could be upgraded to support NVMe and SCSI as storage protocols, providing customers with the choice to re-use existing technology. This may offer a performance improvement, but could still ultimately be limited by the performance of the back-end storage.

The second option is to use NVMf to build a storage fabric that can act as a logical storage array, or to operate in hyper-converged fashion.

Excelero is a startup that has developed an NVMe fabric technology called NVMesh. The company recently partnered with Micron to release SolidScale, a scale-out storage hardware platform. Meanwhile, Apeiron Data Systems is another startup that offers a scale-out storage appliance that delivers NVMe-over-Ethernet.

But these solutions don’t yet offer all the features – replication, data protection, snapshots and space efficiency – that we see in traditional storage array products.

Can there be storage arrays with true NVMe benefits?

NVMe-over-Fabrics offers a future of high performance and a better use of NVMe technology than can be achieved with traditional storage arrays.

But traditional storage continues to offer the benefits of consolidating resources into a single form factor, with advanced storage features many customers expect in their platforms.

So, currently, for many users, NVMe is perhaps best utilised as direct-attached storage in the server, or as an upgrade to a traditional storage array – as cache, for example.

Meanwhile, IT organisations with large-scale, high-performance needs are the ones most likely to be comfortable with bleeding edge solutions built on NVMf.

The performance benefits of NVMe are clear, as are the advantages that can come from NVMf. The big question is, can the benefits of NVMe be harnessed to the features we have become used to from storage arrays?


This was last published in June 2017


Join the conversation

1 comment

Hi Chris,

This is a nice piece, and I think you've done a good job touching the key areas that are related to NVMe and NVMe over Fabrics (the official acronym, btw, is "NVMe-oF").

I think you may be conflating a few things, however. When we are talking about the "ways of doing things," there is more than one way to skin a cat, but there are only two ways of "doing NVMe" from the standards perspective. The beautiful thing about standards, though, is that you can mix-and-match what you need in order to provide the right solution.

NVMe is - first and foremost - a memory-mapped architecture that uses PCIe to connect the CPU with the memory/storage device. This means that you can have a server that has NVMe storage (natively connected through PCIe), or you could have a server that has NVMe storage that *also* runs software to provide that storage to other clients who request it. Whether the system is used as a server, or as an appliance, or as an "array" becomes dependent upon upper layer software that is independent from NVMe and its storage architecture.

Standards-based NVMe can be extended outside of a server in (so far) only two ways: PCIe extension and NVMe over Fabrics. There are, of course, additional transports being developed to qualify as "Fabrics", and you are correct that the two currently identified are RDMA-based transports and Fibre Channel.

There are solutions available on the market (you've identified a few of them) that have taken some of the standards approach and used their own IP to provide a type of solution. Apeiron, for instance, uses standard NVMe command sets, but they use their own transport solution (so, not NVMe-oF, and not PCIe-based NVMe) to create a direct-attached solution. I'm not sure it's correct to imply that they lack features they never intended to have as a DAS appliance. (I'm only using them as an example.)

Your ultimate question - "Can there be storage arrays with true NVMe benefits?" is an odd one to me. It's exactly the same as asking "can there be storage arrays with true SCSI benefits?" and, when examined in that light, illustrates my point. Any resiliency, robustness, data reduction techniques that are used today are done outside of SCSI, so certainly they would be applied outside of NVMe, yes?

You are absolutely correct, however, that the advances in NVMe-based storage devices (the actual storage drives, I mean) have forced many vendors to rethink the impacts on bandwidth, CPU utilization, software efficiencies, all the way down to the most fundamental and basic components.

To that end, we *want* to rethink those aspects of data center technologies that have been "good enough" as long as storage latency was measured in milliseconds. What's more, we don't want to rush solutions where unintended consequences could mean data loss. That being said, the sheer number of companies with their hands deep in the technology (more than 120 members in the NVM Express group, for example) is a good sign.

Thanks for bringing this up. I think it's a worthwhile endeavor to try to make this clearer for data center professionals who may not be as keyed into storage on a regular basis.

(Full disclosure: I am a member of the boards of directors for NVM Express, SNIA, and FCIA, but am speaking only on my own behalf. Any errors are mine and mine alone :)

J
