Vexata doubles capacity in NVMe flash storage VX family

NVMe flash storage maker Vexata ups capacity to just under half a petabyte with 8TB drives, claims near-bare-metal performance, and aims at analytics, trading and simulation workloads

NVMe flash storage maker Vexata has doubled capacity in its VX family of high-performance solid-state arrays by moving to 8TB NVMe flash drives.

The move pushes usable capacity to 435TB in a 6U blade system. Alongside that, NVMe-over-fabrics connectivity has been added and the company claims improved throughput.

Vexata markets a family of NVMe-connected Nand flash and 3D Xpoint products, for which it claims near bare-metal – that is, tens of microseconds – storage input/output (I/O) performance.

Its product offering comprises VX-100 solid-state arrays with Nand flash or Intel Optane 3D Xpoint media. There are also VX-Stack appliances tailored for specific applications – including Oracle, SQL Server and SAS – as well as VX-Cloud, a software-defined product.

Core to Vexata’s approach to NVMe flash is its VX-OS controller operating system, which separates the control path from the data path, plus its use of field programmable gate arrays (FPGAs) – programmable chips dedicated to specific functions – to offload work from controller CPUs.
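As a rough illustration of what separating those paths means in practice, the sketch below – generic Python, not VX-OS code, with all names invented – keeps management commands on one worker and block I/O on another, so that I/O never has to wait behind slower control operations.

    # Conceptual sketch only: generic control path / data path separation.
    # Nothing here is Vexata's implementation; names are illustrative.
    import queue
    import threading

    io_requests = queue.Queue()       # data path: block reads and writes
    control_commands = queue.Queue()  # control path: config, metadata, health

    def data_path_worker():
        """Hot loop that only moves data; no management logic in its way."""
        while True:
            req = io_requests.get()
            if req is None:
                break
            # ... issue the read or write to the flash back end here ...
            io_requests.task_done()

    def control_path_worker():
        """Handles slower management operations out of band."""
        while True:
            cmd = control_commands.get()
            if cmd is None:
                break
            # ... update mappings, policies or health state here ...
            control_commands.task_done()

    threading.Thread(target=data_path_worker, daemon=True).start()
    threading.Thread(target=control_path_worker, daemon=True).start()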

NVMe is a flash-specific protocol built for solid-state drives that does away with the legacy SCSI transports developed for spinning disk media. In doing so, it boosts I/O and throughput hugely, because where SCSI-era interfaces funnel commands through a single, shallow queue, NVMe allows tens of thousands of queues, each able to hold tens of thousands of commands.
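On Linux, that difference in queue parallelism is visible through sysfs. The short Python sketch below counts the hardware queues the kernel’s blk-mq layer exposes for a given block device; the device names are assumptions and will vary by system.

    # Minimal sketch (Linux only): count blk-mq hardware queues per device.
    # Device names below are assumptions -- check `lsblk` on your system.
    from pathlib import Path

    def hw_queue_count(device: str) -> int:
        """Count hardware queue directories under /sys/block/<device>/mq."""
        mq_dir = Path("/sys/block") / device / "mq"
        if not mq_dir.is_dir():
            return 0  # device not present, or driver not using blk-mq
        return sum(1 for entry in mq_dir.iterdir() if entry.is_dir())

    for dev in ("nvme0n1", "sda"):  # hypothetical NVMe and SATA devices
        print(f"{dev}: {hw_queue_count(dev)} hardware queue(s)")

On a typical server, the NVMe drive will report roughly one queue per CPU core, while the SATA drive reports a single queue – the parallelism gap NVMe was designed to exploit.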

Vexata is aimed at users with large amounts of data and at workloads that include analytics, internet of things (IoT) analytics, indexing, financial trading systems and engineering simulations. It currently has around 20 paying customers.

“We’re not aiming at back-office markets,” said founder and CEO Zahid Hussain. “When 16TB drives are available, we will be able to offer nearly 1PB capacity.”

NVMe-over-fabrics (NVMf) connectivity was also announced, adding to existing front-end Fibre Channel and Ethernet connectivity. At the back end, all traffic travels over lossless RDMA Ethernet.

According to Hussain, “customers are dipping their toes into NVMf”, as it extends the benefits of NVMe across the storage network, with the NVMe protocol carried via Fibre Channel or forms of Ethernet.
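For a sense of what an NVMf connection involves at host level, the hedged sketch below builds the kind of comma-separated option string that Linux’s nvme-cli ultimately writes to /dev/nvme-fabrics when attaching a remote subsystem. The transport, address and subsystem NQN shown are illustrative examples, not Vexata values.

    # Sketch only: compose an NVMe-over-fabrics connect string in the form the
    # Linux kernel's fabrics interface expects. All values are illustrative.
    connect_opts = ",".join([
        "transport=rdma",                       # or "tcp"; Fibre Channel uses a different address format
        "traddr=192.0.2.10",                    # example target portal address
        "trsvcid=4420",                         # conventional NVMe-oF port for RDMA/TCP
        "nqn=nqn.2014-08.org.example:subsys1",  # hypothetical subsystem NQN
    ])

    print(connect_opts)

    # Actually connecting needs root and the nvme-fabrics kernel module:
    # with open("/dev/nvme-fabrics", "w") as f:
    #     f.write(connect_opts)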

Companies that offer NVMe products are wrestling with how to exploit NVMe’s blistering speeds, while also offering storage controller functionality that tends to drain performance.

“We do that by slinging CPU power at it, and by offloading tasks to the FPGAs,” said Hussain. “Most people’s controller architectures are only able to light up a few SSDs at a time. Our controller architecture is able to light them all up.”

“We’re pretty close to full utilisation. We’re very good about how we’ve scheduled I/O. If we didn’t find ways to light up all the SSDs, we’d find it a challenge,” he added.

Read more about NVMe flash

  • NVMe could boost flash storage performance, but controller-based storage architectures are a bottleneck. Does hyper-converged infrastructure give a clue to the solution?
  • NVMe over Fabrics – with NVMe carried over Fibre Channel, Ethernet, Infiniband and other protocols – could revolutionise shared storage. We look at how the market is shaping up.
