Super-fast NVMe flash storage on vanilla Ethernet – that’s the claim made by Israeli startup LightBits Labs, which has launched its Super SSD hardware and aims to bring affordable NVMe in an array with the performance of server-attached flash.
LightBits’ Super SSD comes in 2U format nodes, which can house up to 24 NVMe drives of between 4TB and 11TB. Raw maximum capacity is 264TB, which translates to 1PB of effective storage after compression/deduplication. There are two 100Gbps Ethernet ports per node.
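The quoted figures can be sanity-checked with simple arithmetic – 24 drives at the 11TB maximum gives the stated 264TB raw, and the 1PB effective figure implies a data-reduction ratio of roughly 3.8:1 from compression and deduplication:

```python
# Back-of-envelope check of the quoted Super SSD capacity figures.
drives_per_node = 24      # NVMe drives per 2U node
max_drive_tb = 11         # largest supported drive size, in TB

raw_tb = drives_per_node * max_drive_tb
print(raw_tb)             # 264 TB raw, matching the quoted maximum

# 1PB (1,000TB) effective from 264TB raw implies this reduction ratio:
reduction = 1000 / raw_tb
print(round(reduction, 1))  # ~3.8:1 from compression/deduplication
```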
Super SSD can accommodate two LightField cards per node, with each connected to 12 NVMe drives. Internal latency of the drives is lower than 100μs, while disk latency from a 100Gbps-connected server is lower than 200μs. LightBits claims its hardware can achieve 5 million IOPS.
LightBits Labs aims to compete with NVMe-over-fabrics (NVMf) solutions coming to market that are based on more costly transports, such as RDMA over Converged Ethernet (RoCE) and iWARP, or InfiniBand. Its savings come from use of already-deployed standard Ethernet switches, with no need to install any special card in the server.
LightBits’ solution is based on its LightOS operating system, which converts TCP/IP packets to NVMe streams on the fly. The components are an x86 server fitted with NVMe drives and Ethernet ports – the LightBox – which is accessible to Linux servers on the same network that have an NVMe-over-TCP/IP driver installed.
This set-up allows hosts to access storage with a latency of less than 200μs, which is a similar level of performance to flash drives directly attached to servers.
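On the host side, the standard Linux NVMe/TCP initiator (mainline since kernel 5.0) and the nvme-cli tool are enough to reach a target of this kind. A minimal sketch follows – the IP address and NVMe Qualified Name (NQN) are placeholders, not values published by LightBits:

```shell
# Load the NVMe/TCP initiator module (in mainline Linux since 5.0).
modprobe nvme-tcp

# Discover subsystems exported by a target at a hypothetical address,
# on the standard NVMe-oF port 4420.
nvme discover -t tcp -a 192.168.0.10 -s 4420

# Connect to a (hypothetical) subsystem; its namespaces then appear
# as local block devices, e.g. /dev/nvme1n1.
nvme connect -t tcp -a 192.168.0.10 -s 4420 -n nqn.2019-01.example:subsys1
```

Once connected, the remote namespace behaves like a locally attached NVMe drive, which is what lets the array approach direct-attached latency.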
In the LightBox array, the key function of the OS is to route parallel communications between the Ethernet ports and the NVMe drives. But LightOS also brings advanced functionality that includes thin provisioning, compression, RAID, erasure coding, wear levelling that balances writes across media to extend drive life, and multi-tenant management that allows capacity to be divided between different users.
The LightBox can be deployed with a LightField acceleration card that, with the help of an ASIC, compresses and decompresses data at around 20GBps – about four times the rate achieved using the server’s x86 processor alone.
LightBits’ project has drawn attention from Dell EMC, Cisco and Micron, which have already invested $50m. The attraction is that LightBits offers, in effect, an iSCSI for the NVMe generation – shared block storage over standard Ethernet.
Founder and chairman is Avigdor Willenz, who previously developed an Ethernet switch controller chip at startup Galileo Technologies, which was sold to Marvell Semiconductor in 2000. He also co-founded Annapurna Labs, which was bought by Amazon in 2015.
Normally, NVMe and Ethernet-based array-type storage products use adapter cards that support RDMA protocols such as RoCE – in other words, a protocol that sends the contents of server RAM directly over the network.
RoCE reduces the number of protocol layers compared with TCP/IP to maximise the proportion of useful data in traffic. In this way, it compensates for the relative slowness of Ethernet compared with the internal PCIe bus that NVMe relies on.
On the other hand, this technique needs dedicated RoCE-capable cards and switches, which are expensive compared with traditional network switches – a drawback that the LightBits solution avoids.
Last July, Solarflare had the same idea as LightBits – to offer NVMe-over-fabrics that is directly compatible with the 100 million Ethernet ports installed every year in the enterprise. In Solarflare’s case, it supplies its XtremeScale adapters, which appear to the host as NVMe drives.
These then translate storage I/O requests into TCP/IP packets, in theory at least as quickly as RoCE cards do, but with the advantage that the packets sent are routable over standard Ethernet.
Solarflare claims to achieve the same performance as RoCE systems, at around 120μs (read) and 46μs (write). Throughput, however, depends on the Ethernet network spec, with about 1GBps on a 10Gbps network and 10GBps on a 100Gbps one. By comparison, the quickest directly attached NVMe drives reach about 3.5GBps.
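Those throughput figures sit close to the theoretical ceiling of each link – 10Gbps and 100Gbps Ethernet translate to 1.25GBps and 12.5GBps respectively before framing overhead, so the quoted rates represent roughly 80% of line rate in both cases:

```python
# Rough sanity check: quoted throughput vs Ethernet line rate.
def line_rate_gbyte(gbit_per_s):
    """Convert a link speed in Gbit/s to GB/s, ignoring framing overhead."""
    return gbit_per_s / 8

print(line_rate_gbyte(10))    # 1.25 GB/s ceiling; the quoted ~1 GB/s is ~80%
print(line_rate_gbyte(100))   # 12.5 GB/s ceiling; the quoted ~10 GB/s is ~80%
```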
LightBits delivers a similar outcome, but without the need to buy a proprietary card. It works with server-side software that provides the same logic as the Solarflare cards. The lack of hardware offload explains why latency for LightBits is a little higher.
Meanwhile, fellow startup Excelero also supplies a software-based NVMe-over-fabrics solution. Its NVMesh creates shared block storage from multiple servers on the same network via RoCE, Fibre Channel or TCP/IP. However, Excelero acknowledges that its implementation is not specially optimised for TCP/IP.