
NVMe flash storage: What are the deployment options?

NVMe promises to unleash the performance potential of flash storage, which is held back by spinning disk-era SAS and SATA protocols. We run through the key NVMe deployment options

Persistent storage is an essential requirement of all computer systems. It provides the ability for data to be retained across server reboots where data in memory would normally be lost.

For the past 60 years, we’ve relied on tape and hard drives as the primary location for our data. Neither medium offers the performance of flash storage, however, so we’re seeing a transition to using flash as the storage medium of choice for the datacentre.

Now, we are seeing flash connected by NVMe, which promises to unleash the full potential of solid-state storage. But what are the deployment options currently? And what obstacles lie in the way of full-featured NVMe-powered storage arrays?

Legacy protocols: SATA, SAS, Fibre Channel; SCSI, AHCI

Storage devices connect to the server internally using physical interfaces such as serial-attached SCSI (SAS) or serial-ATA (SATA) and externally to shared storage using Ethernet or Fibre Channel.

These protocols define the physical transport, while at a higher level, SCSI (via SAS or Fibre Channel) and AHCI (for SATA) are used to store and retrieve data.

Both small computer system interfaces (SCSI) and advanced host controller interfaces (AHCI) were designed at a time when standard storage devices (hard drives and tape) were capable of dealing with a single input/output (I/O) request at a time.

Tapes are linear, while hard drives must position a read/write head to access data from a track on disk. As a result, traditional storage protocols were designed to manage a single storage request queue. 

For AHCI this queue has a depth of 32 commands; for SCSI the supported queue depth is much higher but limited by specific protocol implementations, typically anything from 128 to 256.
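To see these limits on a real system, the queue depth the Linux kernel has negotiated for a SAS or SATA device can be read from sysfs. The sketch below assumes a disk exposed as /dev/sda; the device name and the figure reported will vary by system.

```python
# Minimal sketch: read the negotiated queue depth for a SCSI/SATA disk on Linux.
# Assumes a device exposed as sda; adjust the device name for your system.
from pathlib import Path

def scsi_queue_depth(device: str = "sda") -> int:
    """Return the per-device queue depth the kernel is using (e.g. 32 for AHCI/NCQ)."""
    path = Path(f"/sys/block/{device}/device/queue_depth")
    return int(path.read_text().strip())

if __name__ == "__main__":
    print(f"sda queue depth: {scsi_queue_depth('sda')}")
```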

Looking more closely at hard drives, both protocols use techniques such as tagged command queuing (TCQ) and native command queuing (NCQ) to improve throughput within their queues by re-ordering the processing of I/O.

This re-ordering uses on-board cache to buffer requests, while optimising the physical read/write from the physical storage medium. TCQ and NCQ mitigate some of the physical characteristics of accessing a hard drive, generally improving overall throughput, but not latency.

Flash and the protocol bottleneck

As we make the transition to flash, the storage protocol starts to become a bottleneck.

Flash is solid-state storage with no moving parts and can handle many more simultaneous I/O requests than the currently commonplace SAS and SATA connections allow.

But SATA is limited to 6Gbps throughput and SAS to 12Gbps, although some performance improvements are on the horizon.

The biggest issue, however, is that these protocols can process only a relatively small number of queues. And as flash drives increase in size, that bottleneck will become more of a problem.

Enter NVMe

To resolve the storage protocol performance problem, the industry introduced a new device interface specification and protocol known as non-volatile memory express (NVMe) that uses the PCI Express (PCIe) bus.

NVMe flash drives are packaged as PCIe expansion cards, as 2.5in drives that use the U.2 connector, or as small plug-in modules in the M.2 format. The PCIe bus provides very high bandwidth of approximately 985MBps per lane, so a typical PCIe 3.0 x4 (four-lane) device has access to about 3.94GBps of bandwidth.

NVMe also addresses the I/O queue depth issue and supports up to 65,535 queues, each with a queue depth of 65,535 commands. Internally, NVMe addresses some of the parallelism issues of legacy storage protocols by implementing more efficient interrupt processing and reducing the amount of internal command locking.
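A back-of-the-envelope calculation makes the gap concrete. The sketch below simply restates the figures quoted above (roughly 985MBps per PCIe 3.0 lane, a single 32-command AHCI queue, and 65,535 NVMe queues of 65,535 commands each); it is an illustration, not a measurement.

```python
# Back-of-the-envelope comparison of legacy interfaces and PCIe/NVMe,
# using the figures quoted in the text.
PCIE3_PER_LANE_MBPS = 985            # ~985MBps per PCIe 3.0 lane after encoding overhead
LANES = 4

pcie_x4_bandwidth = PCIE3_PER_LANE_MBPS * LANES   # ~3,940MBps, i.e. ~3.94GBps
sata_bandwidth = 6 * 1000 / 10                    # 6Gbps SATA link with 8b/10b encoding ~ 600MBps

ahci_outstanding = 1 * 32                         # one queue of 32 commands
nvme_outstanding = 65_535 * 65_535                # 65,535 queues x 65,535 commands

print(f"PCIe 3.0 x4 bandwidth : ~{pcie_x4_bandwidth} MBps")
print(f"SATA 6Gbps bandwidth  : ~{sata_bandwidth:.0f} MBps")
print(f"AHCI max outstanding commands : {ahci_outstanding}")
print(f"NVMe max outstanding commands : {nvme_outstanding:,}")
```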

The benefits of NVMe can be seen by looking at some device specifications.

Intel’s SSD DC P4600, for example, which uses a PCIe 3.1 x4 NVMe interface, offers sequential throughput of 3,270MBps (read) and 2,100MBps (write), with 694,000 (read) and 228,000 (write) IOPS.
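To put the random I/O figures in context, IOPS can be converted to approximate throughput once a block size is assumed. The sketch below uses a 4KiB block size, which is an assumption for illustration rather than part of the published specification.

```python
# Convert an IOPS figure to approximate throughput, assuming a 4KiB block size.
# The 4KiB assumption is illustrative; vendor IOPS figures may use other sizes.
def iops_to_mbps(iops: int, block_size_bytes: int = 4096) -> float:
    """Approximate throughput in decimal megabytes per second."""
    return iops * block_size_bytes / 1_000_000

print(f"694,000 read IOPS at 4KiB  ~ {iops_to_mbps(694_000):,.0f} MBps")
print(f"228,000 write IOPS at 4KiB ~ {iops_to_mbps(228_000):,.0f} MBps")
```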

Looking past NAND flash to newer technologies such as Intel’s Optane, we can expect to see even better performance figures once they are fully published. Intel already claims 500,000 IOPS with Optane under a 70/30 read/write workload.

NVMe in the server

There are three ways in which NVMe can be implemented to improve storage performance.

The first is to use NVMe drives in the server.

Obviously, the server needs to support the devices physically and at the BIOS level. The operating system also needs to support NVMe, but pretty much all modern operating systems already include native NVMe drivers. Platforms such as VMware’s Virtual SAN have supported NVMe for more than 18 months and offer a good performance upgrade path.
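A quick way to confirm that a Linux server has enumerated NVMe devices natively is to look under /sys/class/nvme, where the kernel exposes one entry per controller. The sketch below simply lists what it finds; paths and device names will differ on other operating systems.

```python
# Minimal sketch: list NVMe controllers the Linux kernel has enumerated.
# Each entry under /sys/class/nvme (nvme0, nvme1, ...) is one NVMe controller.
from pathlib import Path

def list_nvme_controllers() -> list[str]:
    base = Path("/sys/class/nvme")
    if not base.exists():
        return []
    controllers = []
    for ctrl in sorted(base.iterdir()):
        model_file = ctrl / "model"
        model = model_file.read_text().strip() if model_file.exists() else "unknown"
        controllers.append(f"{ctrl.name}: {model}")
    return controllers

if __name__ == "__main__":
    for line in list_nvme_controllers() or ["no NVMe controllers found"]:
        print(line)
```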

NVMe in the array

A second option is for storage suppliers to support NVMe drives in their products.

Today, most storage arrays comprise server hardware with some custom components. Replacing SAS drives with NVMe can provide a boost to performance and throughput. Array suppliers have made such changes many times before, for example in moves from SATA to SAS and Fibre Channel drives as a way of improving performance, resilience and simplicity. This has also required upgrades to internal components, such as storage controllers.

Recently, storage array suppliers have started to announce NVMe support for their products. Pure Storage released the //X range of FlashArray products in April 2017 with NVMe support. HPE has announced that the 3PAR platform will provide NVMe support for back-end connectivity. NetApp provides NVMe flash storage as a read cache (Flash Cache) on its latest hardware platform.

The array controller as a bottleneck to NVMe

But the use of NVMe within storage arrays presents a challenge to storage suppliers, in that the array operating system software starts to become the bottleneck within the system.

When hard drives were slow, software could afford to be relatively inefficient, but suppliers are now being forced to adapt to faster storage hardware.

EMC, for example, had to rewrite the code for VNX in 2013 (a project codenamed MCx) to support multi-processing and to lay a foundation for faster devices. DataCore, meanwhile, introduced a technology called Parallel I/O in 2016 to enable its software-defined storage products to take advantage of increases in server and storage hardware performance.

A key challenge for storage array suppliers will be to demonstrate the benefits of NVMe, first in terms of being able to exploit the faster devices and second in translating this improvement into a financial benefit. A small incremental performance uptick won’t be enough to convince customers that a move to faster technology is justified.

NVMe-over-Fabrics

A third option for NVMe deployment is to use NVMe-over-Fabrics (NVMf).

This is a way of carrying motherboard/drive-level NVMe commands over longer distances. It is similar to the way SCSI is wrapped in Fibre Channel or Ethernet for physical transport, through the Fibre Channel and iSCSI protocols respectively.

With NVMf, the NVMe protocol is wrapped in remote direct memory access (RDMA) or Fibre Channel, with the former offering physical connectivity over InfiniBand or converged Ethernet (RoCE).
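On the host side, a Linux initiator typically establishes an NVMf connection with the nvme-cli tool, giving the transport type, the target address and port, and the NVMe qualified name (NQN) of the remote subsystem. The sketch below wraps that call; the address, port and NQN are placeholder values, and the exact options should be checked against the nvme-cli version in use.

```python
# Minimal sketch: connect a Linux host to an NVMe-over-Fabrics target using nvme-cli.
# The transport, address, port and NQN below are placeholder values for illustration.
import subprocess

def nvmf_connect(transport: str, traddr: str, trsvcid: str, nqn: str) -> None:
    """Invoke 'nvme connect' for an RDMA (or TCP) transport; requires nvme-cli and root."""
    subprocess.run(
        ["nvme", "connect",
         "--transport", transport,   # e.g. "rdma" for RoCE/InfiniBand fabrics
         "--traddr", traddr,         # target IP address
         "--trsvcid", trsvcid,       # target port, 4420 by convention
         "--nqn", nqn],              # NVMe qualified name of the remote subsystem
        check=True,
    )

if __name__ == "__main__":
    nvmf_connect("rdma", "192.0.2.10", "4420", "nqn.2014-08.org.example:subsystem1")
```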

These solutions provide two interesting scenarios.

First, today’s Fibre Channel networks could be upgraded to support NVMe and SCSI as storage protocols, providing customers with the choice to re-use existing technology. This may offer a performance improvement, but could still ultimately be limited by the performance of the back-end storage.

The second option is to use NVMf to build a storage fabric that can act as a logical storage array, or to operate in hyper-converged fashion.

Excelero is a startup that has developed an NVMe fabric technology called NVMesh. The company recently partnered with Micron to release SolidScale, a scale-out storage hardware platform. Meanwhile, Apeiron Data Systems is another startup whose scale-out storage appliance delivers NVMe-over-Ethernet.

But these solutions don’t offer all the replication, data protection, snapshot and space efficiency features we see in traditional storage array products.

Can there be storage arrays with true NVMe benefits?

NVMe-over-Fabrics offers a future of high performance and better use of NVMe technology than can be achieved with traditional storage arrays.

But traditional storage continues to offer the benefits of consolidating resources into a single form factor, with advanced storage features many customers expect in their platforms.

So, currently, for many users, NVMe is perhaps best utilised as direct-attached storage in the server, or as an upgrade to a traditional storage array, as cache, for example.

Meanwhile, IT organisations with large-scale, high-performance needs are the ones most likely to be comfortable with bleeding edge solutions built on NVMf.

The performance benefits of NVMe are clear, as are the advantages that can come from NVMf. The big question is, can the benefits of NVMe be harnessed to the features we have become used to from storage arrays?
