Server flash has emerged as a popular way to add solid state storage to the compute/storage stack. It puts IOPS- and throughput-boosting flash memory right next to the application, and is often configured as a cache working with the host’s memory.
But putting flash in one server also potentially recreates a set of problems we thought had been left behind with the advent of shared storage – namely, what are effectively instances of direct-attached storage and silos of data.

Now, however, products are emerging that aim to enable the sharing of server flash. In this podcast, I examine the shortcomings of server flash and look at some of the products that enable sharing of PCIe flash between hosts.
Server flash benefits and shortcomings
The server is one of the places you can put flash to speed up storage in your IT environment. Putting PCIe flash in the server boosts IOPS and throughput and cuts latency more than other applications of flash, such as in the storage array, because data is held right next to the processor that is running the application.
Server PCIe flash is most often multi-level cell (MLC), and often comes with caching software that integrates the flash card with the server’s memory and optimises performance by, for example, passing writes straight through to the storage array while holding reads in flash for a rapid response to the application.
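The caching behaviour described above – writes passed straight through to the array, reads held in flash for repeat access – can be sketched as follows. This is a minimal illustration with hypothetical names, not any vendor’s actual API.

```python
# Minimal sketch of a write-through read cache: writes go straight to
# the backing array, reads are cached in "flash" for fast repeat access.
# All names here are illustrative, not a real caching product's API.

class WriteThroughCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store   # stands in for the storage array
        self.flash = {}                      # stands in for the PCIe flash card

    def write(self, block, data):
        # Write-through: the array is updated immediately, so the cache
        # never holds dirty data that could be lost in a server outage.
        self.backing_store[block] = data
        self.flash[block] = data             # keep the cached copy coherent

    def read(self, block):
        # Serve reads from flash when possible; on a miss, fall back to
        # the array and populate the cache for next time.
        if block in self.flash:
            return self.flash[block]
        data = self.backing_store[block]
        self.flash[block] = data
        return data

array = {}
cache = WriteThroughCache(array)
cache.write(7, b"app data")
assert array[7] == b"app data"       # write landed on the array immediately
assert cache.read(7) == b"app data"  # repeat read served from flash
```

The write-through choice matters for the availability discussion below: because the array always holds the latest copy, losing the flash card loses performance, not data.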
Suppliers of server flash include specialists such as Fusion-io and mainstream storage suppliers that have added server flash to their offerings, such as EMC’s XtremSW, NetApp’s Flash Accel and Dell’s PowerEdge PCIe Flash.
Server flash is most often useful for a performance boost for specific apps and their databases – in other words, things that are going to reside on that physical server. Virtualisation applications and clustered apps are less likely to benefit from flash dedicated to one server because of the likelihood in these cases of virtual machines (VMs) being moved around and the need to share physical resources.
The downside of server flash to date has been that once you put a PCIe card in a server, that is where it stays and the only place its benefits are reaped. And if that server goes down, what happens to the data? You would need high-availability mirroring between identical server instances to guard against data loss from a server outage.
Also, there are the shortcomings already mentioned: how do you use clustered apps with server-side flash, for example, or make it suit virtualisation scenarios with workloads shared between hosts?
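The high-availability mirroring mentioned above can be sketched very simply: each write is duplicated to a second server’s flash, so one server outage does not lose the cached data. Again, the class and method names are hypothetical, purely for illustration.

```python
# Minimal sketch of high-availability mirroring between two servers'
# flash caches: every write lands on both copies before completing,
# so either server can fail without losing data. Illustrative only.

class MirroredFlashCache:
    def __init__(self):
        self.primary = {}    # flash on server A
        self.mirror = {}     # identical copy held on server B

    def write(self, block, data):
        # Synchronous mirroring: the write completes only once both
        # copies hold the data.
        self.primary[block] = data
        self.mirror[block] = data

    def read_after_failure(self, block):
        # If server A goes down, the mirror on server B still has it.
        return self.mirror[block]

cache = MirroredFlashCache()
cache.write(42, b"orders table page")
cache.primary.clear()                 # simulate server A failing
assert cache.read_after_failure(42) == b"orders table page"
```

The cost, of course, is doubling the flash hardware and adding a network round-trip to every write – one reason shared-flash products are emerging instead.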
Sharing server flash via hardware and software
Sharing PCIe server flash is now becoming possible through hardware and software products.
Most recently announced was QLogic’s FabricCache QLE10000 card, which combines the functions of a PCIe SLC flash card with a Fibre Channel host bus adapter (HBA) and connects separate instances of flash for use by all servers in a cluster, as well as with drives in the array. It currently comes only with 8Gbps Fibre Channel connectivity, but 10Gbps Ethernet for iSCSI storage is promised.
The QLogic product is targeted at customers that want to run clustered enterprise applications or virtualised environments where workloads are shared across servers and provides one shared instance of cache per cluster.
Currently, QLogic’s appears to be the only hardware solution to the challenge of sharing server flash.
There are some software products, however.
Sanbolic’s Melio attacks the problem from a different direction, as software that runs on Windows and virtualises PCIe flash and SSDs in servers, all in a software-defined storage layer in the hypervisor, to make them act as a single shared file system.
It is currently available only for Windows, with Linux support planned, and works over InfiniBand and Ethernet.
Meanwhile, Virident Systems has released a beta of its FlashMax Connect software, which includes three modules: synchronous high-availability mirroring between server flash instances; sharing of server flash with other hosts; and PCIe server flash caching.
FlashMax Connect currently works only with Virident PCIe cards. It is scheduled for general release in April, and future versions will work with other suppliers’ cards. There are also plans to enable cache pooling between clustered servers.
Violin Memory recently got into the PCIe server flash market with its Velocity cards. Currently the cards provide only standalone capacity, but there are plans to add replication and mirroring later this year using technology gained when Violin bought Gridiron Systems in January.
Look out for server flash sharing capabilities from PCIe flash pioneer Fusion-io. The company recently bought UK open source storage and SCSI specialist ID7 and says it will develop products using this expertise that create shared pools from multiple server-based instances of storage.
Also, Dell has its Project Hermes, which resulted from its acquisition of RNA Networks in 2011 and includes sharing of server flash in its roadmap, although no products have yet emerged.
These are early days for this product category, and the market is uneven. But sharing of server flash should come to maturity in the next year or so.