
Scale Computing to bring NVMe flash to hyper-converged

Hyper-converged infrastructure maker Scale Computing is set to introduce NVMe connectivity to its products in 2017, with multiple tiers of flash also on the roadmap

Hyper-converged infrastructure box maker Scale Computing is set to bring in NVMe flash connectivity in 2017. That’s according to Scale founder and CEO Jeff Ready, who said the blisteringly fast PCIe-based protocol for flash will form one of several possible flash and disk-based tiers of storage in Scale’s products going forward.

“NVMe is on the roadmap for this year. More importantly, we will have flash in multiple tiers – 3D NAND, TLC, MLC – as well as NVMe, all-flash with different characteristics,” he said.

This tiering, said Ready, will be based on Scribe, the company's Scale Computing Reliable Independent Block Engine. Scribe is a clustered block-storage software layer that forms part of Scale's HyperCore KVM-based hypervisor, and is designed to give virtual machines running on the hypervisor direct block-level access to storage.

Scribe aggregates storage media into a single pool, handles traffic to the physical blocks on the drives, allocates data across the different tiers of storage, and provides functions such as data protection.
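As a rough illustration of what pooling mixed media and tier-aware allocation involve in general, the Python sketch below shows a toy block pool. It is not Scale's Scribe code; the class names, tier labels and allocation policy are invented for the example.

from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    tier: str          # illustrative labels, e.g. "nvme", "ssd", "hdd"
    free_blocks: int

@dataclass
class BlockPool:
    devices: list = field(default_factory=list)

    def add(self, device: Device) -> None:
        # Aggregate heterogeneous media into one logical pool.
        self.devices.append(device)

    def allocate(self, blocks: int, preferred_tier: str) -> Device:
        # Try the preferred tier first, then fall back to any device with space.
        candidates = [d for d in self.devices
                      if d.tier == preferred_tier and d.free_blocks >= blocks]
        candidates = candidates or [d for d in self.devices if d.free_blocks >= blocks]
        if not candidates:
            raise RuntimeError("pool exhausted")
        chosen = max(candidates, key=lambda d: d.free_blocks)
        chosen.free_blocks -= blocks
        return chosen

pool = BlockPool()
pool.add(Device("nvme0", "nvme", free_blocks=1_000_000))
pool.add(Device("hdd0", "hdd", free_blocks=10_000_000))
print(pool.allocate(256, preferred_tier="nvme").name)   # -> nvme0

A real implementation would also have to place replicas for data protection and rebalance blocks between tiers as access patterns change; the sketch only shows the basic pool-and-tier idea.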

NVMe promises significant reductions in storage latency and increases in throughput, largely because it supports far more queues, at far greater queue depth, than the SAS and SATA protocols of the spinning disk era.
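For a rough sense of the gap, the snippet below compares nominal command-queue limits, using the AHCI (SATA) and NVMe specification maximums and a commonly cited per-device SAS figure; real drives and drivers typically expose fewer queues than the spec allows.

# Nominal queue limits; spec maximums for AHCI and NVMe, typical figure for SAS.
protocols = {
    "SATA (AHCI)": {"queues": 1, "depth_per_queue": 32},
    "SAS":         {"queues": 1, "depth_per_queue": 254},      # commonly cited per-device value
    "NVMe":        {"queues": 65_535, "depth_per_queue": 65_535},
}
for name, spec in protocols.items():
    outstanding = spec["queues"] * spec["depth_per_queue"]
    print(f"{name}: up to {outstanding:,} outstanding commands")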

NVMe is a PCIe-based protocol developed for flash storage. Physically, NVMe hardware plugs into PCIe slots, but routing traffic through a storage controller, as found in most storage array products, introduces some degree of bottleneck in the input/output (I/O) path; one that NVMe was designed to remove.

According to Scale, Scribe runs out of band from the data path and so does not form a bottleneck. It is therefore well placed to make good use of NVMe.


“Scribe uses parallel paths from the central processing unit to storage, using multi-queuing,” said Ready.

“Most hyper-converged products mimic the storage controller via a virtual machine and in most cases that virtual machine is running a file system. I/O is therefore typically traversing several file systems on one round trip.”
