
Storage 101: Queue depth, NVMe and the array controller

We look at queue depth and fan-out and fan-in ratios. With NVMe they could become a thing of the past, but for now there’s still a bottleneck at the storage array controller

NVMe (non-volatile memory express) is set to revolutionise flash storage. It is based on the PCIe interface and allows flash drives to connect via a PCIe slot, with a standardised method of connectivity that replaces proprietary card protocols and the existing SAS and SATA drive stacks.

But the big news is in the nuts and bolts of NVMe: it vastly increases the number of I/O queues possible and the depth of those queues of requests.

Why is that revolutionary? We’ll see. First, let’s look at what queue depth is.

Queue depth as a base concept is fairly self-explanatory. It is the number of I/O requests that can be kept waiting to be serviced in a port queue.
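On Linux you can inspect these limits directly. Below is a minimal sketch, assuming a SCSI-attached device named sda (a hypothetical name – substitute your own); the sysfs files shown are where the kernel exposes the device’s queue depth and the block layer’s request queue size.

    # Read the queue limits Linux exposes for a block device.
    # "sda" is a hypothetical device name -- substitute your own.
    from pathlib import Path

    def queue_depth(device: str) -> int:
        """Queue depth advertised for a SCSI-attached device."""
        return int(Path(f"/sys/block/{device}/device/queue_depth").read_text())

    def nr_requests(device: str) -> int:
        """Block-layer request queue size for the device."""
        return int(Path(f"/sys/block/{device}/queue/nr_requests").read_text())

    dev = "sda"
    print(f"{dev}: queue_depth={queue_depth(dev)}, nr_requests={nr_requests(dev)}")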

SAS and SATA can handle queue depths of 254 and 32, respectively. If the number of I/O requests exceeds the available queue depth, the transaction fails and must be retried some time later. The queue depth limits of SAS and SATA can be reached fairly quickly when you consider that a storage port with a high fan-out ratio could be servicing many hosts.

A key area of storage expertise is to tune the storage infrastructure to ensure queue handling capabilities are matched to host requirements and fan-out and fan-in ratios are set appropriately.
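As a back-of-the-envelope check, the worst-case demand on an array port is hosts × LUNs per host × per-LUN HBA queue depth, which must stay within the port’s own queue depth. A minimal sketch, with all figures assumed for illustration:

    # Fan-out sanity check: does worst-case host demand fit within
    # an array port's queue depth? All figures are illustrative.
    port_queue_depth = 2048     # outstanding I/Os the array port can hold
    hosts_per_port = 16         # fan-out ratio: hosts sharing the port
    luns_per_host = 4
    hba_lun_queue_depth = 64    # per-LUN queue depth set on each host HBA

    demand = hosts_per_port * luns_per_host * hba_lun_queue_depth
    print(f"worst-case outstanding I/Os: {demand} vs port capacity: {port_queue_depth}")
    if demand > port_queue_depth:
        print("over-subscribed: reduce HBA queue depths or the fan-out ratio")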

But that may become a thing of the past, with NVMe able to handle up to 65,535 I/O queues, each with a queue depth of up to 65,536 commands. You can see why the vastly increased queue capacity of NVMe is an important advance.
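To put the difference in numbers – a device’s aggregate command capacity is queues × queue depth:

    # Aggregate outstanding commands per device, by interface:
    # SATA (NCQ): 1 queue x 32; SAS: 1 queue x 254;
    # NVMe: up to 65,535 I/O queues x 65,536 commands each.
    interfaces = {
        "SATA": 1 * 32,
        "SAS": 1 * 254,
        "NVMe": 65_535 * 65_536,
    }
    for name, total in interfaces.items():
        print(f"{name:>4}: {total:,} outstanding commands")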

With SAS and SATA, the number of I/O requests lined up could very easily become a bottleneck. To avoid having I/O requests fail because of exceeded queue depths, you would have to build LUNs from many HDDs, “short stroking” them so that every I/O request has somewhere to go quickly.
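The arithmetic shows why so many spindles were needed. A minimal sketch, using rule-of-thumb IOPS figures rather than measured values:

    import math

    # Rough sizing: drives needed in a LUN to absorb a target I/O load.
    # IOPS-per-drive figures are rules of thumb, not measurements.
    target_iops = 20_000
    iops_per_15k_hdd = 180        # typical 15K rpm drive, random I/O
    iops_per_flash_drive = 50_000

    print("HDDs needed:", math.ceil(target_iops / iops_per_15k_hdd))               # 112
    print("flash drives needed:", math.ceil(target_iops / iops_per_flash_drive))   # 1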


With flash drives operating at tens or hundreds of times the IOPS and throughput of spinning disk HDDs, there is a bigger performance cushion to absorb I/O requests, and the massive queue capacity of NVMe brings drive connectivity into line with this.
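Little’s Law makes the link between drive speed and queue depth explicit: the outstanding I/Os needed to keep a device busy equal its IOPS multiplied by its latency. A minimal sketch, with illustrative figures:

    # Little's Law: outstanding I/Os = throughput (IOPS) x latency (s).
    # A fast, low-latency flash drive needs a deep queue to stay busy.
    def outstanding_ios(iops: float, latency_s: float) -> float:
        return iops * latency_s

    # Illustrative figures, not measurements:
    print(outstanding_ios(iops=200, latency_s=0.005))       # HDD: ~1
    print(outstanding_ios(iops=500_000, latency_s=0.0001))  # flash: ~50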

NVMe’s huge queue-handling capabilities potentially offer a straight pass-through for I/O traffic – a complete removal of the bottleneck. For now and the near future, however, that mostly remains theory.

Unfortunately, however, the bottleneck often remains at the storage array controllers, which for the most part are not yet built to deal with the performance possible with NVMe. There is a mismatch between controller CPU capabilities and the potential performance of NVMe.

So, for now, be sure to keep up your I/O tuning skills – tweaking fan-in and fan-out ratios, queue depths, and so on.
