How big storage vendors can win the server-side flash wars

With Fusion-io up against EMC’s Project Lightning, it’s becoming clear how the big array vendors can win the server-side flash wars in the battle to speed storage I/O.

EMC’s Project Lightning server-side flash will put the company in direct competition with Fusion-io, the fast-growing startup that is punting its ioDrives into Apple and Facebook, among others. Suddenly, server-side flash is on the verge of becoming mainstream and affecting networked disk drive storage arrays. Why is that?

Fusion-io strikes at SAN and NAS arrays with a three-bladed axe. Blade 1 strikes at disk latency; disk seek time is a burden customers no longer have to bear when accessing primary data, the argument goes. Flash responds to I/O requests roughly 1,000 times faster than disk, and server CPU cycles need no longer be wasted while a disk head moves across a disk platter's surface at glacial speed compared with a processor's clock ticks.
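The scale of that gap is easy to see with some back-of-envelope arithmetic. The sketch below uses illustrative round figures (a 3 GHz clock, a 10 ms disk access, a 10 µs flash read), chosen only to match the article's rough 1,000x claim, not vendor measurements:

```python
# Back-of-envelope: CPU cycles idled per storage access.
# All figures are illustrative assumptions, not vendor measurements.
CPU_HZ = 3_000_000_000       # 3 GHz processor clock
DISK_ACCESS_S = 10e-3        # ~10 ms seek + rotational latency for disk
FLASH_ACCESS_S = 10e-6       # ~10 us read latency for NAND flash

disk_cycles = int(CPU_HZ * DISK_ACCESS_S)    # cycles the CPU could tick during one disk access
flash_cycles = int(CPU_HZ * FLASH_ACCESS_S)  # cycles during one flash access

print(f"disk access  ≈ {disk_cycles:,} cycles")   # 30,000,000
print(f"flash access ≈ {flash_cycles:,} cycles")  # 30,000
print(f"flash is ~{disk_cycles // flash_cycles}x faster")  # ~1000x
```

Thirty million wasted clock ticks per disk seek is what "glacial" means in practice.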

The second blade hacks away at network latency. Put your storage media next door to the processor cores and their DRAM, with the flash on the same PCI Express (PCIe) bus, and there is no need to wait for data to be put into Fibre Channel or NFS/CIFS packets, traverse a host bus adapter (HBA) or NIC, jump across a network link to a switch and maybe another switch, enter the storage array controller, pass through its layers of code, and then make another array-side network hop to the disk drives. All that overhead is cut clean away.
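Summing a latency budget for that path makes the point concrete. The per-hop figures below are rough assumptions for the sake of the arithmetic, not measured numbers from any particular SAN:

```python
# Illustrative latency budget: one read over a SAN vs local PCIe flash.
# Per-hop figures are rough assumptions, not measurements.
san_path_us = {
    "HBA / NIC processing": 10,
    "link to first switch": 5,
    "second switch hop": 5,
    "array controller code layers": 100,
    "array-side hop to disk shelf": 5,
    "disk seek + rotation": 10_000,
}
local_flash_us = {"PCIe transfer + NAND read": 50}

print(sum(san_path_us.values()), "us over the SAN")  # 10125 us
print(sum(local_flash_us.values()), "us locally")    # 50 us
```

Even with the disk seek removed, the network and controller hops alone add more latency than the entire local flash access.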

The third blade strikes at the server operating system's I/O substructure. If you treat the flash as part of the server's memory so that it is in an application's address space, you can transfer data to and from it without making a journey through the operating system's disk I/O subsystem and its thousands of lines of code. This also saves time, not as much as avoiding network latency and disk seek time, but it’s time nonetheless. A Fusion-io Auto Commit Memory (ACM) demonstration using this technique provided a 16x increase in the IOPS rate from its ioDrive flash cards.
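The general idea of putting storage in an application's address space can be seen with the standard mmap(2) mechanism, shown here via Python's `mmap` module. This is only a sketch of the principle; Fusion-io's ACM is a proprietary mechanism, and this is not its API:

```python
import mmap
import os
import tempfile

# Illustrative only: map a file into the address space so loads and stores
# hit the mapping directly, with no read()/write() system call per access
# and so no per-access trip through the block I/O stack.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)          # create a 4 KiB backing file

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"            # a plain memory store, not a write() call
        assert m[0:5] == b"hello"    # and a plain memory load to read it back

with open(path, "rb") as f:
    print(f.read(5))                 # b'hello' -- the store reached the file
```

Once the mapping is set up, the kernel's involvement drops to page-fault handling and write-back, which is where the time saving comes from.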

Add the three flash strikes together and servers can get a tremendous boost to the speed at which they execute I/O-bound applications. Basically, every disk I/O becomes an in-memory transfer. How on earth can EMC, largely a shared disk drive array vendor, compete with that?

Well, the two companies’ visions of how server-side flash works differ. EMC's idea is that server-side flash has to be loaded with data and that loading is best done by a shared storage array. Sure, Fusion-io's ioDrive can accelerate every server to which it is fitted, but that soon becomes expensive. Most applications are not I/O-bound, and most will not benefit that much from having their entire working set of data in memory. Most will, however, benefit from caching the data they need.

With Project Lightning, also known as VFCache, many servers will have EMC flash cards managed by a connected VNX or VMAX array, which maintains the cache of data in the flash. The scene is set for PCIe flash cards to be used as cache by many servers with a back-end EMC array pulling the strings. Not having them as a memory tier also means they don’t take responsibility for primary data storage away from storage arrays.
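The principle of such a cache can be sketched in a few lines. VFCache's actual caching policies and interfaces are EMC's own; this is a generic least-recently-used (LRU) read cache in front of a slower backing store, with hypothetical names throughout:

```python
from collections import OrderedDict

# Minimal sketch of a read cache in front of a slower backing store,
# illustrating the principle behind array-managed server flash caches.
# Generic LRU; not EMC's implementation. All names are hypothetical.
class ReadCache:
    def __init__(self, backing_store, capacity):
        self.backing = backing_store   # dict standing in for the array
        self.capacity = capacity       # flash card capacity, in blocks
        self.cache = OrderedDict()     # insertion order tracks recency
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)      # mark as recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]             # slow path: fetch from the array
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

array = {n: f"block-{n}" for n in range(100)}
cache = ReadCache(array, capacity=8)
for n in [1, 2, 3, 1, 2, 3, 4]:                # hot working set stays cached
    cache.read(n)
print(cache.hits, cache.misses)                # 3 4
```

The win is exactly the one EMC is counting on: the hot working set is served at flash speed from the server, while the array remains the authoritative home of the data.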

For EMC VNX and VMAX customers, the appeal of using their storage infrastructure to boost server performance through array-managed server flash caches could well be a strong one. It remains to be seen whether the Project Lightning caches will cache writes as well as reads and how that affects vMotion of applications in virtualised servers, but EMC will surely have a response to this issue.

Looking ahead, it seems highly likely that flash will become a permanent fixture in servers. Dell, Cisco, HP, IBM and others will spur one another on to add flash and make their servers more capable than their competitors’ at running applications faster and supporting more virtual machines. Cisco has already made a virtue out of supporting very large memory configurations with its UCS servers.

LSI has announced that Cisco will be using its WarpDrive solid-state drives in coming blade servers. The scene is surely set for Vblocks -- the integrated server, storage and networking hardware running VMware from the VCE Alliance (VMware, Cisco, EMC) -- to include flash-filled Cisco blade servers.
