Super-fast NVMe flash storage is “ready for prime time” and use by enterprises, with performance in the order of 10x better than existing Fibre Channel with the use of NVMe-over-fabrics.
That’s the view of Virtual Instruments’ product management director Henry He, who spoke ahead of a planned demo of end-to-end NVMe at Dell EMC World in Las Vegas – which takes place 29 April to 2 May – by Dell EMC, Cisco, Virtual Instruments and SANblaze.
In particular, said He, NVMf will allow users to tackle the problem of so-called “tail latency”, where storage and network infrastructure keeps the bulk of traffic within acceptable limits, but a small number of key input/outputs (I/Os) still miss performance targets.
He cited airline bookings specialist Amadeus as a company that has identified “tail latency” as a key issue. End-to-end NVMe can provide the performance to wipe out this issue, said He.
“NVMe is now cost effective and performant end-to-end, from the host to the storage via server, HBA and the network, with a well-supported ecosystem,” he said.
The demo planned for Dell EMC World will use a single Dell EMC PowerMax 2000, a Cisco 32Gbps MDS switch, plus test loads and monitoring from Virtual Instruments and SANblaze, to achieve throughput of 6GBps at 1 million I/Os per second (IOPS) and latency of 180µs.
That compares very favourably with what is achieved end-to-end in most deployments today, said He: “Today, in Fibre Channel infrastructure, before NVMe, we see latency in the range from 1 to 5 or 8 milliseconds. So, we’re talking end-to-end a 10x improvement here.
“When NVMe first came out, it was local storage in servers. More recently, storage products have had NVMe inside, but that wasn’t true end-to-end. Now we can say, with NVMf, that it is a shared storage technology.”
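A quick sanity check on that claim, using only the figures quoted above (180µs for the NVMe demo versus 1ms to 8ms for pre-NVMe Fibre Channel), puts the improvement at roughly 5x to 44x, consistent with the order-of-magnitude gain He describes:

```python
# Latency figures quoted in the article: 180 µs for the end-to-end NVMe demo,
# 1 ms to 8 ms for typical pre-NVMe Fibre Channel deployments.
nvme_latency_us = 180
fc_low_us, fc_high_us = 1_000, 8_000  # 1 ms and 8 ms, in microseconds

print(f"Improvement vs 1 ms latency: {fc_low_us / nvme_latency_us:.1f}x")   # ~5.6x
print(f"Improvement vs 8 ms latency: {fc_high_us / nvme_latency_us:.1f}x")  # ~44.4x
```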
NVMe is a built-for-flash storage protocol based on PCIe. It massively increases the parallelism possible in storage traffic by hugely boosting the number of queues and the queue depth available in storage I/O.
It allows for up to 64,000 queues, with a queue depth of up to 64,000 in each. That compares with a single queue, with a depth of 254 in SAS and 32 in SATA, the interfaces that carry SCSI, the predominant storage command set since its development for mechanical hard drives.
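A toy dispatch model makes the scale of that difference concrete. The queue and depth figures below are the ones given above; the idea of counting "dispatch rounds" (how many times queues must be refilled to push through a workload) is an illustrative simplification, not a real device model:

```python
import math

def dispatch_rounds(total_ios: int, queues: int, depth_per_queue: int) -> int:
    """Rounds needed to issue total_ios commands if each queue holds
    depth_per_queue outstanding commands and all queues drain in parallel."""
    in_flight = queues * depth_per_queue
    return math.ceil(total_ios / in_flight)

TOTAL_IOS = 1_000_000  # the demo's 1 million IOPS, as a round-number workload

# SATA: one queue, depth 32; SAS: one queue, depth 254 (figures as above)
print(dispatch_rounds(TOTAL_IOS, queues=1, depth_per_queue=32))    # 31,250 rounds
print(dispatch_rounds(TOTAL_IOS, queues=1, depth_per_queue=254))   # 3,938 rounds

# NVMe: up to 64,000 queues, each up to 64,000 deep
print(dispatch_rounds(TOTAL_IOS, queues=64_000, depth_per_queue=64_000))  # 1 round
```

Even this crude model shows why the bottleneck moves: with NVMe, the entire workload fits in flight at once, so the limiting factor is no longer the queue but whatever processes it.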
That boost in performance has, however, shifted the bottleneck in storage I/O to the controller, said Cisco product manager Adarsh Viswanathan.
“It’s true that the storage controller is a bottleneck,” said Viswanathan. “NVMe has achieved massive parallelism in processing I/O, but storage controllers have not achieved this to the same extent.
“The industry will work to resolve this, however, and the bottleneck will eventually move to the application layer, where parallel processing will need to be written into code.”
Read more about flash storage
- Part one of two: All-flash is mainstream, with NVMe also offered. Dell offers NVMe drives while HPE reserves it for use as storage-class memory as a cache layer.
- Part two of two: While all-flash is mainstream, NVMe is an option as disk replacement for most suppliers, while NetApp leads the way with NVMe end-to-end to hosts.