Flash array maker Tegile is all set to make the most of NVMe flash, but it’ll be some time before that full potential is available to be exploited.
That’s the view of chief technology officer Rajesh Nair, who spoke to Computer Weekly this week.
According to Nair, Tegile’s architecture is in place to make the most of NVMe flash because Tegile storage controllers can be scaled out as a cluster, at least within limits.
Right now the industry is wrestling with how to unleash NVMe flash, which offers blistering performance gains for flash storage by allowing much greater bandwidth and input/output operations per second (IOPS) than existing storage protocols based on SCSI.
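The scale of that IOPS headroom comes largely from queueing: SCSI-era protocols such as AHCI/SATA offer a single shallow command queue, while the NVMe specification allows tens of thousands of deep queues. A minimal back-of-envelope sketch, using the queue limits from the published specs (the comparison is illustrative, not a benchmark):

```python
# Rough queueing numbers behind NVMe's IOPS advantage.
# Queue limits are from the AHCI/SATA and NVMe specifications;
# real-world throughput depends on drives, CPUs and workload.

# AHCI/SATA: one command queue, up to 32 outstanding commands
ahci_queues, ahci_depth = 1, 32

# NVMe: up to 65,535 I/O queues, each up to 65,536 commands deep
nvme_queues, nvme_depth = 65_535, 65_536

ahci_outstanding = ahci_queues * ahci_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"AHCI/SATA max outstanding commands: {ahci_outstanding}")
print(f"NVMe max outstanding commands:      {nvme_outstanding:,}")
```

The gap in theoretical outstanding commands is what lets NVMe keep many CPU cores and flash channels busy in parallel, and also why controller CPUs become the next constraint.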
But the full benefits of NVMe are difficult to achieve and array makers vary in how they think it should be done.
The key obstacle to NVMe is the storage array controller CPU. Controllers must carry out the basics of storage functionality as well as provide enterprise features, and it is currently proving a challenge to supply enough CPU resources to fully exploit NVMe.
So, some array makers have released products with NVMe drives, but left the rest of the input/output (I/O) path intact. Tegile has done this with its IntelliFlash N arrays, as has Pure Storage with its FlashArray//X.
In both of these cases NVMe drives are plugged into the back end. That gives speedier access to the drives, but the I/O path between storage controller and hosts still depends on protocols with spinning disk-era SCSI at their heart.
That still allows performance improvements of a few multiples. “About 4x in terms of IOPS, which is more than the latency improvement. That’s cut by about half,” said Nair.
“The host side is the reason latency is only cut by a half,” he said. “The bottleneck is between the storage and hosts. First-generation NVMe devices are not pushing the envelope in terms of what they can do.”
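Nair’s point can be sketched as a latency budget. All the microsecond figures below are illustrative assumptions, not measurements from Tegile; the shape of the result is the point — swapping only the back end to NVMe roughly halves total latency, and the unchanged SCSI-based front end then dominates what remains.

```python
# Hedged back-of-envelope latency budget for a read through an array.
# Every figure below is an assumed, illustrative value (microseconds).

flash_media_us = 50     # NAND read time (assumed)
backend_sas_us = 150    # SAS/SATA back-end protocol + queueing (assumed)
backend_nvme_us = 10    # NVMe back-end (assumed)
frontend_scsi_us = 100  # SCSI-based host connectivity, e.g. FC/iSCSI (assumed)

before = frontend_scsi_us + backend_sas_us + flash_media_us
after = frontend_scsi_us + backend_nvme_us + flash_media_us

print(f"SCSI back end: {before} µs")
print(f"NVMe back end: {after} µs ({after / before:.0%} of before)")
# After the swap, the front end is the largest remaining component,
# which is why the next gains must come from the host-side path.
```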
Here, Nair differs from many other array makers that are tackling NVMe. He’s saying the issue isn’t particularly at the controller (for now) because current generation NVMe drives are bottlenecked in the I/O path out to hosts.
“Two things need to happen to get better latency,” he said. “The application stack needs to start speaking NVMe. And Ethernet needs to be a little bit better, with higher bandwidth and better lossless capabilities.”
The “ultimate utopia”, said Nair, is host connectivity via RDMA over Converged Ethernet (RoCE), which promises memory-level performance over local area connections along with enterprise features such as data protection, encryption and data deduplication.
But what about the controller in Tegile arrays? How will the company tackle the controller bottleneck?
Other players in the industry seem to be answering that by distributing controller processing. Some have offloaded it to host CPUs (E8 and Datrium), to NVMe cards in the hosts (Excelero), or to HBAs and application functionality (Apeiron).
Meanwhile, Kaminario said the answer will be scale-out clusters of controllers and Scale Computing has done that by incorporating NVMe into hyper-converged infrastructure, as well as eliminating inefficiencies from the data path.
“IntelliFlash is a two-node scale-out cluster but can scale out to four or eight nodes,” said Nair. “We’ll need more than that in future. But we will solve it quicker than others.”
“Scale out isn’t what’s holding us back. It’s NVMe over Ethernet. That’s not yet reality in consumable form with the right economics. When it is, our architecture is already built.”