Flash storage and what to do with it have been among the hottest topics in the world of storage for a couple of years now.
That trend has been driven by a small set of startup companies offering all-flash array products, which have met the needs of organisations looking to supercharge desktop and server virtualisation projects, as well as business-critical transactional workloads.
The mainstream top six storage vendors have had to respond to these customer needs and to the efforts of the startups that have begun to take market share from them.
The challenge for the incumbent suppliers has been to meet the performance demands of flash in a dedicated array. These include the need for faster backplanes and greater processing capability in the hardware controller than was required with spinning disk, as well as operating system (OS) software optimised to handle the multiple voltage levels, garbage collection, wear levelling and other housekeeping that flash memory requires.
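To make the wear-levelling idea concrete, the sketch below shows a toy write allocator that always places new data on the erase block with the fewest program/erase cycles, so no single block wears out early. This is illustrative only, with hypothetical names; it is not any vendor's controller firmware.

```python
import heapq

class WearLeveller:
    """Toy wear-levelling allocator: each write goes to the block
    with the lowest erase count (names hypothetical, illustrative)."""

    def __init__(self, num_blocks):
        # min-heap of (erase_count, block_id)
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def allocate(self):
        # pick the least-worn block, record one more cycle on it
        erases, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (erases + 1, block))
        return block

wl = WearLeveller(4)
writes = [wl.allocate() for _ in range(8)]
# wear spreads evenly: each of the 4 blocks takes 2 of the 8 writes
counts = {b: writes.count(b) for b in range(4)}
```

A real flash translation layer also remaps logical addresses and migrates static data off low-wear blocks, but the even-spread principle is the same.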
Broadly speaking, the storage giants have responded in one of two ways.
They have either bought a flash array startup (EMC, IBM) and rebranded its products to fit their roadmap or they have adapted existing storage products to accommodate flash (HP, NetApp, Dell). The one exception is Hitachi Data Systems (HDS), which has developed a flash-optimised hardware/software module for an existing SAN system.
Clearly, the different approaches have their pros and cons. While those that have bought startups mostly have the edge in terms of performance, they buy this at the price of having flash arrays that don’t integrate with their other storage products and management frameworks.
In this article we look at how the big six storage suppliers' efforts towards an all-flash array have shaken down to date. It is a snapshot of where each vendor's efforts stand so far, and it will evolve, as some are clearly ahead of the game while others lag.
EMC bought in its flash array offering with the purchase of Israeli startup XtremIO in May 2012. Earlier this year it rebranded all its flash offerings (ie PCIe server flash and flash caching software too) around the Xtrem prefix and announced the XtremIO flash array would be generally available from this July.
The XtremIO flash array is an iSCSI- and Fibre Channel-connected all-MLC array that comes in 10TB X-Bricks. Customers can add capacity and I/O performance as they add X-Bricks, from 250,000 4k random reads with one unit, up to 1 million IOPS with a four X-Brick cluster.
The XIOS OS has inline data deduplication to boost actual capacities.
While XtremIO is clearly an array developed from the ground up for flash memory, it is effectively a standalone product with its own OS that cannot be managed alongside the rest of the EMC storage family. EMC says this will be addressed in due course.
HP first announced an all-flash version of its 3Par P10000 array in July 2012, but recently added to that the 3Par StoreServ 7450, which upgrades the controller hardware with faster Intel Sandy Bridge CPUs.
Despite not being a ground-up flash design, the 3Par OS and controller ASIC possess some flash-friendly features, such as data striping that helps reduce wear, and fine levels of granularity suited to flash cell block sizes.
HP claims I/O performance of 548,000 IOPS with 48 400GB MLC drives. SLC drives are also available in 100GB and 200GB capacities despite SLC falling from favour somewhat with enterprise storage makers.
NetApp has to date released only a flash-equipped retrofit of an existing array: the EF540, based on the (formerly Engenio) E5424, with either 9.6TB or 19.2TB of eMLC SSD in 12 or 24 drive slots, delivering around 300,000 IOPS.
Despite being something of a laggard in the flash array stakes, NetApp has given hints of a forthcoming new clustered OS that will be tailored to flash. Called FlashRay, it is being developed by an R&D team led by former NetApp CTO Brian Pawlowski.
Solid facts are scarce, but NetApp said it will deliver: low latency; high IOPS; premium features such as data deduplication, compression, snapshots and replication; object data management; and scale-out clustering capability.
A beta version of FlashRay is due in mid-2013.
IBM took the acquisition route and bought Texas Memory Systems in 2012. In April this year, it rebranded the TMS RamSan systems as IBM FlashSystem and integrated them with its SAN Volume Controller (SVC) storage virtualisation device.
FlashSystem 820 and 810 use MLC flash, while the 710 and 720 use SLC, despite IBM's stated plans to drop SLC.
IBM has promised to invest $1bn in its flash products over the next three years.
HDS announced the availability of the Hitachi Accelerated Flash Storage (HAFS) module for its enterprise VSP SAN platforms last November. The HAFS module comprises a controller with software developed around MLC flash.
It can scale from 6.4TB up to 76.8TB, and HDS claims it can provide up to 1 million IOPS. Up to four flash enclosures can be housed in a VSP array to provide more than 300TB of flash, and VSP can also treat HAFS as a distinct tier of storage using Hitachi Dynamic Tiering.
Dell has taken the route of modifying existing products to meet the need for flash performance. Namely, it has upgraded its Dell Compellent Storage Center OS to version 6.4 and previewed the availability of an all-flash Compellent array, the Flash Optimised Solution.
The Flash Optimised Solution will be powered by Storage Center 6.4 and comprise SLC and MLC drives in one of its SC220 expansion enclosures. Dell says this mixed SLC/MLC bundle will achieve 300,000 IOPS, which is quite low, especially with SLC on board, and suggests Dell has not optimised Storage Center for the back-end tasks of managing flash memory.
Instead, Dell focuses on the use of auto-tiering to try to get the most from flash. Here, the company places the emphasis on Storage Center's ability to move data at sub-LUN level, with different parts of the same LUN living on different classes of storage media.
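The sub-LUN idea can be sketched simply: divide a LUN into extents, count the I/O each extent receives over a monitoring window, and promote only the hottest extents to flash. The sketch below is illustrative, with made-up heat data, and is not Dell's Storage Center algorithm.

```python
from collections import Counter

def place_extents(access_counts, flash_slots):
    """Toy sub-LUN auto-tiering: the extents with the highest I/O
    counts go to flash; the rest stay on spinning disk."""
    hot = [extent for extent, _ in
           Counter(access_counts).most_common(flash_slots)]
    return {extent: ("flash" if extent in hot else "disk")
            for extent in access_counts}

# extent id -> I/O count over the last monitoring window (made-up)
heat = {0: 120, 1: 3, 2: 85, 3: 1, 4: 40}
placement = place_extents(heat, flash_slots=2)
```

With only two flash slots, extents 0 and 2 land on flash while the cold extents of the same LUN stay on disk, which is the point of tiering below LUN granularity: a small amount of flash serves most of the I/O.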