NVMe solid state array maker Vexata has come out of stealth with a family of NVMe-connected NAND flash and 3D XPoint products it claims offer near bare-metal – i.e., tens of microseconds – storage input/output (I/O) performance.
Its products comprise the VX-100 solid state arrays with either NAND flash or Intel Optane 3D XPoint media; VX-Stack appliances tailored for specific applications such as Oracle, SQL Server and SAS; and VX-Cloud, a software-defined version of the product.
The two enablers for NVMe in Vexata’s approach are its VX-OS controller operating system and the use of central processing units (CPUs) at the controller and at each of the storage media blades.
“VX-OS separates the control and data path,” said CEO and founder Zahid Hussain. “VX-OS Control provides the intelligence to programme the [in-built] VX-OS Router to distribute I/O to the storage modules.”
In this way the dual controllers’ CPUs avoid a lot of I/O processing. This is offloaded to CPUs at the storage blades, which could total between four and 16 per chassis. Back-end connectivity is via Ethernet remote direct memory access (RDMA), which is built for memory speeds.
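The control/data path split described above can be sketched in outline. This is an illustrative model only, not Vexata's code: the class and function names, extent size and striping policy are all hypothetical, standing in for whatever placement logic VX-OS Control actually programs into the router.

```python
# Hypothetical sketch of a control path programming a routing table,
# and a thin data path that uses it to fan I/O out to storage-blade
# queues without further controller-side processing.

NUM_BLADES = 16          # a VX-100 chassis holds between 4 and 16 blades
EXTENT_SIZE = 1 << 20    # 1 MiB logical extents (assumed granularity)

class RouterTable:
    """Maps logical extents to blades; programmed by the control path."""
    def __init__(self, num_blades):
        self.num_blades = num_blades

    def blade_for(self, offset):
        # Simple striping; a real router could use any placement policy.
        return (offset // EXTENT_SIZE) % self.num_blades

def dispatch(table, offset, payload, blade_queues):
    """Data path: one table lookup, then hand-off to the blade's queue.

    The blade's own CPU does the heavy lifting (scheduling, media
    writes), keeping the controller CPUs out of the per-I/O work.
    """
    blade = table.blade_for(offset)
    blade_queues[blade].append((offset, payload))
    return blade

blade_queues = [[] for _ in range(NUM_BLADES)]
table = RouterTable(NUM_BLADES)
dispatch(table, 0, b"a", blade_queues)                  # extent 0 -> blade 0
dispatch(table, 17 * EXTENT_SIZE, b"b", blade_queues)   # extent 17 -> blade 1
```

The point of the split is visible in `dispatch`: the controller's only per-I/O cost is a table lookup, so adding blades adds back-end processing capacity rather than loading the controller CPUs.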
By throwing CPU resources at the required processing and applying the efficiencies of its operating environment, Vexata claims latency of 40µs using Optane, 220µs with flash solid state drives (SSDs), and 7 million random input/output operations per second (IOPS) in a VX-100 appliance.
NVMe is an emerging protocol that potentially enables solid state storage to operate at near its full potential. It does away with small computer system interface (SCSI), a key component of the software stack in most existing storage systems but one that dates from the spinning disk era, with characteristics that tend to slow I/O in solid state media.
Suppliers of NVMe products are currently wrestling with how to exploit NVMe’s blistering speeds while providing storage controller functionality, which tends to drain performance.
Vexata is not NVMe end-to-end, but does offer enterprise storage features. “We provide snapshots, clones, encryption, active-active controllers; all you’d expect from enterprise storage, but just happen to use the latest media,” said Hussain.
The key to achieving this, he said, is that “VX-OS is a distributed OS that runs on numerous CPUs, on the controller and the blades, scheduling I/O, managing queues, scheduling reads and writes, with writes held on a log file and written in large blocks.
“The secret sauce is that we separate the control and data paths. It means we don’t have to keep up with all data going through the controller CPUs,” he said. “Also, VX-OS runs in Linux user space with no kernel switches; it’s a lockless architecture and takes full advantage of CPU cores.”
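The write handling Hussain describes – holding writes in a log and flushing them to media in large blocks – can be sketched minimally. This is not VX-OS code: the class name and the 4 KiB flush threshold are assumptions for illustration, and a real implementation would flush asynchronously and persist the log for crash safety.

```python
# Hypothetical sketch of log-structured write coalescing: small incoming
# writes accumulate in a log, then go to media as one large sequential
# block, amortising per-I/O overhead across many writes.

FLUSH_THRESHOLD = 4 * 1024  # flush once 4 KiB is pending (assumed value)

class WriteLog:
    def __init__(self):
        self.pending = []         # (offset, data) records not yet on media
        self.flushed_blocks = []  # each flush emits one large block

    def append(self, offset, data):
        self.pending.append((offset, data))
        if sum(len(d) for _, d in self.pending) >= FLUSH_THRESHOLD:
            self.flush()

    def flush(self):
        # Write all pending records as a single large block; individual
        # small writes never hit the media one at a time.
        if self.pending:
            self.flushed_blocks.append(list(self.pending))
            self.pending.clear()

log = WriteLog()
for i in range(5):
    log.append(i * 1024, b"x" * 1024)  # five 1 KiB writes
# The first four writes reach the threshold and flush as a single block;
# the fifth remains pending in the log.
```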
Retaining enterprise storage features
Vexata is one of a number of storage array makers trying to exploit the performance gains of NVMe flash (and in some cases 3D XPoint) while retaining enterprise storage features.
The key roadblock has been that the storage array controller must expend CPU resources to carry out basic and advanced storage functionality, and this saps the benefits of NVMe.
The answer players in the industry seem to be coming up with is to distribute that processing. Some have tried offloading it to host CPUs (E8 and Datrium), NVMe cards in the hosts (Excelero) or host bus adapters (HBAs) and app functionality (Apeiron).
Meanwhile, Kaminario said the answer will be scale-out clusters of controllers and Scale Computing has done that by incorporating NVMe into hyper-converged infrastructure, as well as eliminating inefficiencies from the data path.
Vexata’s solution fits largely with these principles, but is a slight variation in that it offloads processing to CPUs in the storage blades.