
NVMe over fabrics vs Fibre Channel, Ethernet, InfiniBand

NVMe over fabrics takes the built-for-flash advantages of the PCIe-based protocol and allows NVMe traffic to be carried over Fibre Channel, Ethernet and InfiniBand networks


We recently looked at NVMe, a PCIe-based protocol that allows computers to communicate with storage in a way optimised for flash storage, with huge increases in input/output (I/O) and throughput possible compared with spinning disk-era SAS and SATA protocols.
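As a rough illustration of that difference, the Python sketch below compares the command slots the AHCI (SATA) and NVMe specifications allow a host to keep in flight. The figures are spec maximums, not what any particular drive or controller implements.

# A minimal sketch comparing the theoretical outstanding-command capacity
# of AHCI/SATA versus NVMe, using the limits defined in each specification.

AHCI_QUEUES = 1            # SATA/AHCI: a single command queue
AHCI_QUEUE_DEPTH = 32      # ...of up to 32 outstanding commands

NVME_MAX_QUEUES = 65_535       # NVMe: up to 64K I/O submission queues
NVME_MAX_QUEUE_DEPTH = 65_536  # ...each up to 64K commands deep

ahci_slots = AHCI_QUEUES * AHCI_QUEUE_DEPTH
nvme_slots = NVME_MAX_QUEUES * NVME_MAX_QUEUE_DEPTH

print(f"AHCI/SATA outstanding commands: {ahci_slots:,}")
print(f"NVMe outstanding commands:      {nvme_slots:,}")
print(f"Roughly {nvme_slots // ahci_slots:,}x more parallelism on paper")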

As we saw, currently NVMe is a slot-in for existing PCIe server flash use cases, although storage array suppliers have started to develop arrays that utilise NVMe-connected flash storage, and this is more than likely the future direction of in-array connectivity.

But the I/O chain, obviously, does not end at the storage array. Mostly, datacentre usage sees multiple hosts hooked up to shared storage. So, to preserve the advantages of NVMe between host and array, NVMe over fabrics (NVMf) has been developed.

The key idea behind NVMf is that the advantages of NVMe – high bandwidth and throughput with the ability to handle massive amounts of queues and commands – are preserved end-to-end in the I/O path between server host and storage array, with no translation to protocols such as SCSI in between that would nullify those benefits.

In short, the massive parallelism that NVMe offers is retained across the storage network or fabric.

NVMe is based on PCIe communications protocols, and currently has the performance characteristics of PCIe Gen 3. But traffic can’t travel natively between remote hosts and NVMe storage in an array. There has to be a messaging layer between them.
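To give a sense of the ceiling PCIe Gen 3 sets, here is a back-of-the-envelope Python sketch. Gen 3 signals at 8GT/s per lane with 128b/130b encoding; the x4 link width is an assumption, chosen because it is typical for NVMe SSDs.

# A rough calculation of raw PCIe Gen 3 bandwidth for an assumed x4 NVMe link.

GT_PER_SEC = 8e9          # 8 gigatransfers per second per lane
ENCODING = 128 / 130      # 128b/130b line encoding overhead
LANES = 4                 # x4 link width assumed for a typical NVMe SSD

bytes_per_lane = GT_PER_SEC * ENCODING / 8   # one bit per transfer -> bytes
total_bytes = bytes_per_lane * LANES

print(f"Per lane: {bytes_per_lane / 1e9:.2f} GB/s")
print(f"x{LANES} link: {total_bytes / 1e9:.2f} GB/s of raw PCIe Gen 3 bandwidth")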

That messaging layer is essentially what NVMf comprises. So far, the NVM Express group has devised fabric transports to allow remote direct memory access (RDMA) and Fibre Channel-based traffic with the aim of not increasing latency by more than 10 microseconds, compared with an NVMe device in a PCIe slot. RDMA allows a direct connection from one device to the memory of another, without involving the operating system.
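To put that 10-microsecond target in context, the short Python sketch below adds the fabric budget to a local device read latency. The 90-microsecond local figure is purely an illustrative assumption, not a measured or quoted value.

# A simple sketch of the NVMf latency target: fabric overhead should stay
# under 10us on top of what a local PCIe-slotted NVMe device delivers.

FABRIC_BUDGET_US = 10        # NVM Express group's target for added latency
local_read_latency_us = 90   # hypothetical local NVMe read latency (assumption)

remote_read_latency_us = local_read_latency_us + FABRIC_BUDGET_US
overhead_pct = 100 * FABRIC_BUDGET_US / local_read_latency_us

print(f"Local NVMe read:  {local_read_latency_us} us (assumed)")
print(f"Over the fabric:  {remote_read_latency_us} us within the target")
print(f"Added overhead:   about {overhead_pct:.0f}%")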


RDMA-based transports include RoCE (RDMA over Converged Ethernet); iWARP (internet wide-area RDMA protocol), which is roughly RDMA over TCP/IP; and InfiniBand.

NVMf performance should be governed by that of the networking protocol used, so bandwidth over Ethernet reaches into the hundreds of gigabits per second, InfiniBand runs at tens of gigabits per second per lane, and Fibre Channel hits 128Gbps with Gen 6.
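As a rough sketch of what those figures mean in practice, the Python below converts nominal link rates, 100Gbps Ethernet, four-lane EDR InfiniBand and quad-lane Gen 6 Fibre Channel taken as assumed examples, into bytes per second and the time to move 1TB. It ignores protocol overhead and encoding, so treat the results as upper bounds.

# Nominal fabric line rates (assumed examples) and idealised 1TB transfer times.

link_rates_gbps = {
    "100 Gigabit Ethernet": 100,
    "InfiniBand EDR x4 (4 x 25Gbps lanes)": 100,
    "Gen 6 Fibre Channel (128GFC)": 128,
}

DATASET_BYTES = 1e12  # 1 TB

for name, gbps in link_rates_gbps.items():
    bytes_per_sec = gbps * 1e9 / 8
    seconds = DATASET_BYTES / bytes_per_sec
    print(f"{name}: ~{bytes_per_sec / 1e9:.1f} GB/s, 1 TB in ~{seconds:.0f} s")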

NVMf products

It’s early days, so there isn’t much in the way of products at the time of writing.

NVMf will be supported by default at the hardware level in NVMe cards and drives, as well as NVMe-enabled arrays. But, so far, the products needed to make existing networks and fabrics NVMf-capable, such as host NICs, HBAs and switches, are thin on the ground.

In February 2016, Broadcom released samples of Gen 6 Fibre Channel adapters to manufacturers, but there’s no sign of generally available products yet.

On the RDMA NIC front, there seems to be little available yet, although Mellanox (RoCE) and Chelsio (iWARP) have product pages on their websites. Meanwhile, Mangstor makes arrays with NVMe storage that can be connected to hosts via Mellanox NVMe-capable switches.
