The flash storage revolution of the past couple of years has been all about the startups, but maybe the tide has started to turn.
They have been the companies unencumbered by existing product lines and architectures designed for the era of spinning disk and its technical requirements.
So it has overwhelmingly been market entrants that have made the greatest strides, with all-flash array products designed from the ground up to make the most of flash storage and meet the needs of desktop and server virtualisation, as well as high-performance transactional workloads.
Likewise, it has been the startups that brought this innovation to bear in creating the hybrid flash array, a product category that filled a simple gap in the market the big incumbents had failed to exploit.
Hybrid flash startups married hardware designed from scratch for the rapid access times and low latency of flash with the bulk storage capabilities of spinning disk, and used software intelligence – automated tiering and data deduplication – to place data on the media most suited to it.
It is a combination of flash and traditional disk technologies that makes perfect sense – match data to the performance and cost profile of the media – but it is a trick the big six suppliers have been slow to get on to.
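The tiering idea behind these products can be reduced to a simple rule: keep the most frequently accessed blocks on flash and everything else on disk. The sketch below is purely illustrative, assuming a crude read-count heat metric and a fixed flash budget; it is not any supplier's actual algorithm.

```python
# Illustrative automated-tiering sketch: promote hot blocks to flash,
# leave cold blocks on disk. The heat metric (raw read counts) and the
# flash budget are hypothetical simplifications.
from collections import Counter

class TieringEngine:
    def __init__(self, flash_capacity: int):
        self.flash_capacity = flash_capacity  # blocks of flash available
        self.access_counts = Counter()

    def record_read(self, block_id: str) -> None:
        self.access_counts[block_id] += 1

    def placement(self) -> dict:
        """Return {block_id: tier} with the hottest blocks on flash."""
        ranked = [b for b, _ in self.access_counts.most_common()]
        hot = set(ranked[:self.flash_capacity])
        return {b: ("flash" if b in hot else "disk") for b in ranked}

engine = TieringEngine(flash_capacity=2)
for block in ["a", "a", "a", "b", "b", "c", "d", "d", "d", "d"]:
    engine.record_read(block)
tiers = engine.placement()
# The two most-read blocks ("d" and "a") land on flash; "b" and "c" stay on disk
```

Real arrays refine this with decay over time, write coalescing and sub-volume granularity, but the cost/performance matching described above is the core of it.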
In this article, Computer Weekly surveys the key hybrid flash storage products of the pioneering startups in the field. A look at the big six, meanwhile, reveals little response in kind – with a couple of exceptions that mark a turning of the tide, as the big suppliers begin to regain the technological high ground from the startups.
Hybrid flash array market – the startups
The creation of former Data Domain employees, Nimble Storage came out of stealth in 2010.
Its iSCSI hybrid flash arrays come in two key product lines – the CS200 and CS400 series – that marry bulk storage capacity on Sata spinning disk with multi-level cell (MLC) flash cache. Nimble uses compression and data deduplication to pack the maximum amount of data into disk bulk storage, while speeding access times on flash cache.
The CS200 entry-level models, for example, house 8TB total raw capacity (expandable to 76TB) with 160GB of flash. These span several iterations of performance and capacity to the top-end CS460 with 36TB of capacity (expandable to 249TB) and 2,400GB of flash.
Nimble aims to let customers scale capacity, compute and cache, and since the advent of Nimble OS 2.0 in August 2012 its hardware can be linked in a scale-out architecture.
It also uses data deduplication and tiering with flash and Sata drives to ensure that the vast bulk of I/O hits solid state storage.
The dual controller Tintri VMstore is an iSCSI (1Gbps Ethernet and 10GbE ports) array that comes in a 3U form factor with eight 300GB MLC drives and eight 3TB Sata drives with data deduplication and compression between the two.
Tintri specifically targets virtual machine environments. To do this it does away with volumes, LUNs and Raid groups and maps I/O requests directly to the virtual disk. This tight virtual machine (VM) integration lets VMstore control I/O performance for each virtual disk.
Tegile uses a combination of DRAM cache, MLC and SLC storage tiers and SAS HDDs plus a ZFS-based operating system (OS) adapted by Tegile to provide data deduplication, compression, Raid enhancements and a performance-boosting feature called Metadata Accelerated Storage System (Mass).
Mass allows data, once ingested, to be dealt with via just its metadata headers rather than the full copy, and these are kept in cache or solid state drive (SSD) tiers.
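The benefit of working from metadata alone can be sketched with a content-addressed store: once a block is fingerprinted, a duplicate write touches only the small fingerprint table held in cache, never the bulk data. The names and structure below are illustrative, not Tegile's actual Mass design.

```python
# Hypothetical sketch of metadata-driven deduplication: each ingested
# block is fingerprinted, and duplicate writes update only the metadata
# table (the part kept in cache/SSD tiers), not the bulk data tier.
import hashlib

class DedupeStore:
    def __init__(self):
        self.metadata = {}  # fingerprint -> reference count (cache/SSD tier)
        self.blocks = {}    # fingerprint -> data (bulk tier)

    def ingest(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        if fp in self.metadata:
            self.metadata[fp] += 1   # duplicate: metadata-only update
        else:
            self.metadata[fp] = 1
            self.blocks[fp] = data   # only unique data hits bulk storage
        return fp

store = DedupeStore()
for payload in [b"vm-image", b"vm-image", b"logs", b"vm-image"]:
    store.ingest(payload)
# Three writes of b"vm-image" are stored once, tracked by a reference count
```

Because the fingerprint table is tiny relative to the data, it fits comfortably in DRAM or SSD, which is what makes the metadata-only fast path pay off.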
These smarts allow Tegile to claim enterprise-level capacity and performance at around 10% of the cost of some of the big six suppliers’ arrays.
Its Zebi HA series hardware ranges from the entry-level HA2100, with 600GB of flash and total capacity of 22TB raw (up to 5x that with dedupe and compression), up to the HA2800, with 4,400GB of flash and 44TB of raw capacity (also up to 5x that figure with dedupe and compression).
Tegile arrays provide iSCSI, Fibre Channel and NAS connectivity.
NexGen, recently bought by PCIe flash pioneer Fusion-io and rebranded ioControl, emerged from stealth in 2011 with its n5 hybrid flash array that combines Fusion-io PCIe flash cards, RAM and SAS spinning disk in x86 server platforms.
Like other hybrid flash players it uses automated tiering functionality and data deduplication to move data between solid state and spinning media.
The special sauce is the use of so-called performance quality-of-service (QOS) in its operating system that allows customers to provision data according to the performance they need from it.
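Per-volume performance provisioning of this kind is commonly built on a rate limiter such as a token bucket, sketched below. This is a generic mechanism, in the spirit of the feature described, and not ioControl's actual implementation.

```python
# Illustrative token-bucket sketch of per-volume performance QoS:
# each volume is provisioned an IOPS target, and I/Os are admitted
# only while the volume is within its provisioned rate.
class VolumeQos:
    def __init__(self, iops_target: int):
        self.iops_target = iops_target
        self.tokens = float(iops_target)

    def refill(self, elapsed_seconds: float) -> None:
        # Replenish tokens at the provisioned rate, capped at one
        # second's worth of burst
        self.tokens = min(float(self.iops_target),
                          self.tokens + self.iops_target * elapsed_seconds)

    def admit(self) -> bool:
        """Admit one I/O if the volume is within its provisioned rate."""
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

gold = VolumeQos(iops_target=5)
admitted = sum(gold.admit() for _ in range(10))
# Only 5 of 10 back-to-back I/Os are admitted before the bucket empties
```

In a real array the unadmitted I/Os would be queued rather than dropped, and separate buckets would govern throughput as well as IOPS.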
The n5 arrays come in three models that range from the n5-50 with up to 1,460GB of flash and between 16TB and 160TB of disk to the n5-150 with up to 4,800GB of flash and between 48TB and 192TB of disk. ioControl arrays are iSCSI with 10GbE and 1GbE ports.
Nutanix emerged from stealth in 2011, offering the Complete Cluster, a datacentre-in-a-box that combines Intel processors with PCIe and traditional format flash in a VMware-ready node that could scale out with other nodes via its Scale Out Converged Storage controller software.
That was the NX-2000 family. In 2012 Nutanix launched the beefed-up NX-3000 hardware platform with Intel Sandy Bridge processors and 128GB or 256GB of RAM per node. Support now also extends to KVM and Microsoft Hyper-V hypervisors.
Storage capacity is provided by 400GB of PCIe SSD, 300GB of Sata SSD and 5TB of Sata HDD. A “starter kit” of four nodes (combined together to create a “block” in a 2U 19” form-factor chassis) is capable of supporting around 400 virtual servers or 900 to 1,200 virtual desktops.
A cluster uses the Nutanix Distributed File System (NDFS) to store data across all nodes, which are connected using Gigabit Ethernet. This manages data striping and replication as well as more advanced functions such as auto-tiering between solid state and Sata storage.
Hybrid flash array market – the big six
None of the big six storage suppliers have come to market with anything like the hybrid flash array products listed above, with flash and HDD combined in tiered configurations with data reduction techniques applied.
Most of the incumbents allow flash to be combined with spinning disk in what is technically a hybrid array.
But these are not products designed from the ground up for flash and its requirements. Such products comprise controller hardware that can cope with the access times and throughput of flash, plus a storage operating system optimised for high-performance processors and able to manage the multiple voltages, garbage collection, wear levelling and other housekeeping that flash memory requires.
When you put these together you get flash performance in terms of IOPS in the mid-hundreds of thousands to around one million.
The two big suppliers that have achieved this with hybrid flash arrays are EMC and Hitachi Data Systems (HDS), and in both cases this has involved substantial re-architecting of existing products.
Hitachi Data Systems announced the availability of its Hitachi Accelerated Flash Storage (HAFS) module for its enterprise SAN VSP platforms in November 2012.
The HAFS module comprises a controller with software developed around MLC flash that can scale from 6.4TB up to 76.8TB and provide up to one million IOPS. Up to four flash enclosures can be housed in a VSP array that can treat HAFS as a distinct tier of storage using its Hitachi Dynamic Tiering.
EMC announced a refresh to its VNX range of mid-range unified storage arrays in September that centred on an upgrade to array operating software to take advantage of Intel multi-core Xeon 5600 CPUs plus optimisation of array hardware for flash storage.
The existing VNX arrays’ Flare OS could not take advantage of multi-core CPUs and suffered a processing bottleneck when using flash drives. In the technology refresh, Flare has been rebranded MCx and optimised to spread the VNX workload across up to 32 cores in the Xeon processors. Caching algorithms have also been improved in MCx.
The entry-level VNX devices are the VNXe 3150 and the higher-specification VNXe 3300, with maximum capacities of 144TB/288TB and 450TB respectively. The mid-range arrays are the VNX 5400, 5600 and 5800, with theoretical maximum capacities of between 750TB and 2.2PB. The high-end VNX 7600 and 8000 house 1,000 drives, although the 8000 will increase to 1,500 drives, for maximum capacities of 3PB and 4.5PB respectively.