
NVMe over Fabrics: How NVMe-oF revolutionises shared storage

NVMe brought super-fast flash storage to the datacentre, but it takes NVMe-oF to bring its rapid access and low latency to SAN and NAS shared storage

It’s almost a decade since the NVM Express Workgroup released the first version of the NVMe standard. Since then, the technology has become an increasingly common interface for solid state storage.

But on its own, NVMe is somewhat limited, because it is a device-level connection best suited to in-server or direct-attached storage.

What enterprises need is to connect flash storage seamlessly over a network, to unlock its performance advantages and replace conventional disk-focused SAN technology such as iSCSI and Fibre Channel. NVMe over Fabrics (NVMe-oF) aims to do just that.

NVMe uses the PCI Express (PCIe) bus, rather than older interfaces such as SATA and SAS that were designed for spinning disk, and so removes much of the bottleneck between processor and storage. High IOPS, low latency and a parallel architecture, with multiple queues linking the CPU and flash memory, make NVMe the obvious choice for solid-state storage.
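
As a concrete illustration, the PCIe link behind each NVMe controller can be inspected directly from Linux sysfs. The sketch below is a minimal example, assuming a Linux host with local NVMe drives; the sysfs paths are standard, though output formats vary slightly between kernels.

```python
# A minimal sketch: show the PCIe link behind each NVMe controller on a
# Linux host. Assumes local NVMe drives; fabric-attached controllers
# without a PCI device are simply skipped.
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci_dev = os.path.join(ctrl, "device")
    try:
        speed = open(os.path.join(pci_dev, "current_link_speed")).read().strip()
        width = open(os.path.join(pci_dev, "current_link_width")).read().strip()
    except OSError:
        continue  # not a PCIe-attached controller
    print(f"{os.path.basename(ctrl)}: link speed {speed}, width x{width}")
```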

The key, for many enterprise storage deployments, however, is to make that available via shared storage. That’s what NVMe-oF tries to solve.

NVMe-oF describes a wide range of technologies. Each suits different workloads and use cases, and offers different performance benefits.

The practicality of NVMe-oF also depends on an enterprise’s existing infrastructure and protocols, and on whether upgrading either or both is worth it for improved storage performance.


NVMe-oF works by wrapping NVMe commands in one of several network transports. These include Fibre Channel, iWARP, RDMA over Converged Ethernet (RoCE), InfiniBand and, most recently, TCP. Some suppliers claim speeds of up to 100Gbps over InfiniBand and Ethernet; Fibre Channel is typically slower, at up to 32Gbps.
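
By way of illustration, discovering what an NVMe-oF target exports takes a single command on Linux. The sketch below drives the standard nvme-cli tool from Python; the target address is a placeholder, and it assumes nvme-cli is installed and an NVMe/TCP target is reachable. Port 4420 is the IANA-registered NVMe-oF port.

```python
# A minimal sketch: discover NVMe-oF subsystems over TCP using the
# standard Linux nvme-cli tool. Assumes nvme-cli is installed and an
# NVMe/TCP target is reachable; the address below is a placeholder.
import subprocess

TARGET_ADDR = "192.0.2.10"   # placeholder target IP
TARGET_PORT = "4420"         # IANA-registered NVMe-oF port

def discover_subsystems(addr: str, port: str) -> str:
    """Ask the discovery controller at addr:port what subsystems it exports."""
    result = subprocess.run(
        ["nvme", "discover", "--transport=tcp",
         f"--traddr={addr}", f"--trsvcid={port}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(discover_subsystems(TARGET_ADDR, TARGET_PORT))
```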

The principal disadvantages of NVMe-oF are complexity and cost. To benefit from NVMe-oF enterprises must invest in new hardware, and that possibly includes network as well as storage system upgrades.

The technology is also still relatively new, so it is most likely to be found supporting high-performance applications that include AI and machine learning, large-scale business analytics, and time-sensitive, data-rich applications in areas such as financial services.

“Today, enterprise adoption of NVMe-oF is moderate overall,” says Julia Palmer, a research vice-president at analyst firm Gartner. “NVMe-oF complexity and costs are currently barriers to the broad adoption of the technology, and will continue to be so in the near future.”

“A variety of highly performant workloads – AI/ML, high-performance computing, in-memory databases or transaction processing – can leverage NVMe-oF today. But most mainstream workloads are not planning quick transitions to end-to-end NVMe architecture.”

Performance

Performance is the first and foremost reason to adopt NVMe-oF. The arrival of flash storage means that network performance, rather than drive read-write speeds, is now the bottleneck.

NVMe addressed the storage bottleneck by providing low latency, improved IOPS and the ability to read from or write to storage in parallel, which is hard to achieve with spinning disk.

NVMe-oF takes that performance into shared storage, and brings some additional benefits such as the ability to scale up – NVMe-oF systems can support thousands of devices – and to create multiple paths between the NVMe host initiator and the storage, as well as support for multiple hosts.

The result is large, fast and flexible systems that can handle the most demanding compute tasks as long as the network can keep up.
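
The parallel access described above is straightforward to exploit from application code. The sketch below issues concurrent reads against an NVMe namespace; the device path is a placeholder and it assumes a Linux host with read access to the device. Each worker’s I/O can be serviced by a separate NVMe queue pair.

```python
# A minimal sketch of parallel reads against an NVMe namespace.
# /dev/nvme0n1 is a placeholder; run as a user with read access.
import os
from concurrent.futures import ThreadPoolExecutor

DEVICE = "/dev/nvme0n1"   # placeholder NVMe namespace
BLOCK = 4096              # read size in bytes
WORKERS = 8               # parallel readers; NVMe supports many queue pairs

def read_at(offset: int) -> int:
    # O_DIRECT is omitted for simplicity; a real benchmark would bypass
    # the page cache with aligned buffers
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        return len(os.pread(fd, BLOCK, offset))
    finally:
        os.close(fd)

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    offsets = [i * BLOCK for i in range(1024)]
    total = sum(pool.map(read_at, offsets))
print(f"read {total} bytes in parallel from {DEVICE}")
```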

Protocols

NVMe-oF is a flexible standard. Because it supports a range of network architectures, CIOs stand a good chance of being able to reuse some of their existing SAN assets, either directly or via an upgrade. Widespread industry support for NVMe-oF means existing suppliers are also likely to have a path to the technology.

NVMe-oF systems that use remote direct memory access (RDMA) can run over native InfiniBand, or over Ethernet via RDMA over Converged Ethernet (RoCE) or the Internet Wide Area RDMA Protocol (iWARP). RDMA implementations are usually new-build networks, to ensure performance. Meanwhile, Fibre Channel Gen 6 supports NVMe-over-FC, as well as software-defined storage.
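
Whether a host already has RDMA-capable hardware for any of these transports can be checked from sysfs, as in the minimal sketch below; InfiniBand, RoCE and iWARP adapters all register under the same Linux device class once their drivers are loaded.

```python
# A minimal sketch: list RDMA-capable devices on a Linux host.
# InfiniBand, RoCE and iWARP adapters all appear under this sysfs class.
import os

RDMA_SYSFS = "/sys/class/infiniband"

if os.path.isdir(RDMA_SYSFS):
    for dev in sorted(os.listdir(RDMA_SYSFS)):
        print(f"RDMA device found: {dev}")
else:
    print("no RDMA-capable adapters visible (or drivers not loaded)")
```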

The newest option, NVMe-over-TCP, offers the potential to use any sufficiently fast Ethernet network. As such, it is a logical upgrade for organisations that use iSCSI SAN storage.
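
For teams used to iSCSI’s discover-then-login workflow with iscsiadm, NVMe/TCP feels familiar. The sketch below is illustrative only: the subsystem NQN and address are placeholders, and it assumes root privileges, nvme-cli and a kernel with the nvme_tcp module.

```python
# A minimal sketch: connect to an NVMe/TCP subsystem with nvme-cli.
# NQN and address are placeholders; requires root and the nvme_tcp module.
import subprocess

SUBSYS_NQN = "nqn.2014-08.org.example:storage1"  # placeholder subsystem NQN
TARGET_ADDR = "192.0.2.10"                       # placeholder target IP

subprocess.run(["modprobe", "nvme_tcp"], check=True)  # load the TCP transport
subprocess.run(
    ["nvme", "connect", "--transport=tcp",
     f"--nqn={SUBSYS_NQN}", f"--traddr={TARGET_ADDR}", "--trsvcid=4420"],
    check=True,
)
# The remote namespaces now appear as local block devices, e.g. /dev/nvme1n1
subprocess.run(["nvme", "list"], check=True)
```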

According to Gartner’s Palmer, NVMe-over-TCP will remove a barrier to mainstream NVMe-oF adoption. But, she cautions, the technology does not yet have broad vendor support.

Leveraging existing infrastructure

Most RDMA-based NVMe-oF systems will be “new build”, but NVMe’s flexibility means there are other options. Upgrading Fibre Channel to support NVMe storage can be the simpler path: Gen 6 FC systems can coexist with NVMe-oF, provided host bus adapters (HBAs) run at 16Gbps at least, and preferably 32Gbps, and the storage targets support NVMe-oF.
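
On Linux, whether existing HBAs clear that bar can be read straight from sysfs. The following is a minimal check, assuming a host with a Fibre Channel driver loaded; the attribute names are those exposed by the kernel’s FC transport class.

```python
# A minimal sketch: report Fibre Channel HBA link speeds from sysfs.
# NVMe-over-FC wants at least 16Gbps, preferably 32Gbps, per the text above.
import os

FC_SYSFS = "/sys/class/fc_host"

def read_attr(host: str, attr: str) -> str:
    with open(os.path.join(FC_SYSFS, host, attr)) as f:
        return f.read().strip()

if os.path.isdir(FC_SYSFS):
    for host in sorted(os.listdir(FC_SYSFS)):
        speed = read_attr(host, "speed")       # e.g. "32 Gbit"
        state = read_attr(host, "port_state")  # e.g. "Online"
        print(f"{host}: {speed} ({state})")
else:
    print("no Fibre Channel HBAs visible")
```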

The industry body for Fibre Channel, the Fibre Channel Industry Association (FCIA), is pushing vendors to make devices that can support spinning disk, SSDs and NVMe from a single adapter.

NVMe-over-TCP should be even more flexible. It represents the least extensive upgrade, as existing Ethernet LAN infrastructure can be reused if network switches and controllers are sufficiently capable.
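
An equivalent sanity check before an NVMe-over-TCP rollout is the Ethernet link speed on each host. Below is a brief sketch, again reading Linux sysfs; the 25Gbps threshold is an illustrative assumption, not a requirement of the standard.

```python
# A minimal sketch: flag Ethernet interfaces slower than an assumed
# 25Gbps threshold before reusing them for NVMe-over-TCP.
import os

NET_SYSFS = "/sys/class/net"
MIN_MBPS = 25000  # illustrative threshold, not mandated by NVMe/TCP

for iface in sorted(os.listdir(NET_SYSFS)):
    speed_path = os.path.join(NET_SYSFS, iface, "speed")
    try:
        mbps = int(open(speed_path).read().strip())
    except (OSError, ValueError):
        continue  # virtual or down interfaces report no usable speed
    verdict = "ok" if mbps >= MIN_MBPS else "too slow"
    print(f"{iface}: {mbps} Mb/s ({verdict})")
```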

Deployment flexibility

NVMe-oF is flexible because of its wide industry support, and because the building blocks – NVMe-connected flash storage, Fibre Channel, InfiniBand and TCP – are all well understood.

Add in software-defined storage – and with it the ability to use NVMe-based storage in NAS as well as SAN and direct-attached configurations – and IT teams can reduce the number of vendors that supply hardware for storage, and potentially for the network.

Where absolute performance matters, however, CIOs can opt for RDMA, at the possible expense of scalability.

Use cases

Use cases for NVMe-oF are usually only limited by budget. There are few workloads that will not benefit from a move to NVMe.

Cost – including the cost of redesigning systems to support the technology – has, however, limited NVMe-oF to areas where performance is critical. These are primarily high-performance computing (HPC) workloads, AI and machine learning, and analytics, especially analytics running on newer platforms such as Splunk, Tableau and MongoDB. It is also increasingly popular for DevOps.

Performance for NVMe-oF is such that the gap between a network-based system and direct-attached flash storage is narrowing, and is certainly within the performance requirements of enterprise applications. NVMe-oF’s flexibility allows IT teams to start small, with an investment of under US$100,000.

“Most storage array vendors already offer solid-state arrays with internal NVMe storage. During the next 12 months, an increasing number of infrastructure suppliers will offer support for NVMe-oF connectivity to compute hosts,” notes Palmer.

A disadvantage of NVMe-oF is that it shares downsides with the underlying NVMe, flash-based architecture. Software needs to be written, or rewritten, to allow for its characteristics, not least because flash storage performance can degrade over time.

But that is not a specific drawback to NVMe-oF, and with NVMe-based storage now already widespread in enterprise servers, it is a problem that most suppliers and IT teams are already addressing.

