All-flash arrays have been the hot topic of the storage world for the past couple of years. As customers have virtualised servers and sought ever-increasing performance from business-critical transactional systems, storage has had to keep up. Flash is how it has done it.
While flash is most often found alongside spinning disk in hybrid arrays, the all-flash array has been a driving force in the market.
In turn, the all-flash array has been driven in terms of development and acquisition by the startups that designed flash arrays from the ground up, optimised to the needs of solid-state storage.
It is those flash array startups we study here.
What they have in common is that their products have been designed with flash in mind. This means performance that is often measured in sub-millisecond latency and IOPS in the hundreds of thousands and into the millions at higher storage capacities.
It means controllers with high-performance CPUs, plus operating systems (OS) and on-board firmware built to allow for the speed and peculiarities of flash: its variable voltages, garbage collection, wear levelling and other housekeeping operations that need to be carried out.
Where the array makers differ is in terms of capacities offered, type of access (file or block), connectivity to other storage nodes and to hosts, scale-up or scale-out capability, as well as in advanced storage features such as thin provisioning, replication and snapshots.
This Computer Weekly guide to flash array startups is a starting point for those considering investment in all-flash systems to see the key suppliers and their products compared.
Kaminario K2 uses commodity hardware and its Spear (scale-out performance and resilient architecture) OS to build its all-flash product. Kaminario says it spreads writes around and uses a write buffer to prevent hotspots, claiming a seven-year flash lifespan.
In the Kaminario scheme, up to eight K-Block nodes form a cluster. Each node has a 2U controller, and customers can add expansion shelves that each house 24 400GB or 800GB multi-level cell (MLC) flash drives.
Earlier this year, K2 version 5 added thin provisioning, inline deduplication and inline compression. Meanwhile, scale-up capability was added with the potential to add storage media without adding compute, and capacities go up to 720TB with IOPS in the millions at higher volumes of storage.
K-Raid is Kaminario's version of Raid 6 that can tolerate up to three drive failures per shelf without data loss. Connectivity between nodes is InfiniBand, with host connectivity via 16Gbps Fibre Channel and 10 Gigabit Ethernet (GbE) ports for iSCSI.
Kaminario plans to add replication and data encryption by the end of 2014.
Nimbus launched its unified storage (block and file access) Gemini F400 and F600 in August 2013. These are 2U boxes that hold up to 48TB of raw MLC flash.
Connectivity options distinguish the two lines. The F400 has eight small form-factor pluggable (SFP) ports, along with 16Gbps Fibre Channel and 10GbE connections. The F600 has eight quad SFP ports to support 56Gbps InfiniBand and 40GbE.
The F400 and F600 arrays are available with single or dual active-active hot swappable controllers.
Nimbus tackles the flash wear issue with software that turns random small writes into sequential writes to reduce write amplification. It also uses cell-level wear-levelling algorithms to eliminate hotspots.
In February 2014, Nimbus added the Gemini X series of arrays with 96TB of raw capacity in one box that can scale to around 1PB in a cluster, making it the largest-capacity system on the market built for flash.
Nimbus Gemini X uses the same HALO operating system as the F series, but uses a different architecture. F series products comprise single independent nodes, while the X series is made up of flash directors that manage storage across clusters of flash nodes.
Gemini X96 and X48 flash nodes support 96TB or 48TB of capacity respectively. Customers can cluster up to 10 arrays for 960TB of flash capacity. Each node adds 400,000 IOPS to the cluster.
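Gemini X capacity and performance scale linearly with node count. As a rough sketch of that arithmetic, using the per-node figures quoted above (note the 4 million IOPS cluster total is implied by linear scaling, not a vendor claim):

```python
# Illustrative scale-out arithmetic only; per-node figures are those
# quoted for the Gemini X96 (96TB and 400,000 IOPS per flash node).

def cluster_totals(nodes: int, tb_per_node: int, iops_per_node: int):
    """Capacity and IOPS grow linearly as flash nodes are added."""
    return nodes * tb_per_node, nodes * iops_per_node

capacity_tb, iops = cluster_totals(nodes=10, tb_per_node=96,
                                   iops_per_node=400_000)
print(capacity_tb, iops)  # 960 4000000
```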
Pure Storage has had a good couple of years, with $150m Series E funding in 2013 and $225m earlier in 2014. It appears to be following a strategy of holding off from an initial public offering (IPO) until it can hold its own against competition from the big six storage suppliers, such as EMC's XtremIO array.
The Pure Storage product family comprises the entry-level FA-405, high-end FA-450 and the FA-420 in between. All use inline deduplication to boost effective capacity, claiming customers can get up to 125TB of usable capacity from the FA-420’s raw 23TB, for example.
The FA-405 is aimed at SMEs and customers that want flash for limited projects such as virtual desktop infrastructure (VDI) pilots. It has a raw capacity of 11TB, which Pure says provides usable capacity of around 40TB, while the FA-420 scales to 125TB. The FA-450, built on Intel Ivy Bridge CPUs, has 70TB of raw capacity and up to 250TB usable, and also offers 16Gbps Fibre Channel connectivity.
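The raw-to-usable claims above are simple multiplication by a data-reduction ratio. As an illustration (the ratios below are derived from Pure's quoted figures, not published by the vendor):

```python
# Illustrative arithmetic only: raw-to-usable figures are vendor claims,
# and the implied data-reduction ratios are derived from them.

def effective_capacity_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Usable capacity after inline deduplication/compression."""
    return raw_tb * reduction_ratio

def implied_ratio(raw_tb: float, usable_tb: float) -> float:
    """Data-reduction ratio implied by a raw/usable claim."""
    return usable_tb / raw_tb

# Pure Storage claims quoted in the article:
print(round(implied_ratio(11, 40), 1))   # FA-405: ~3.6:1
print(round(implied_ratio(23, 125), 1))  # FA-420: ~5.4:1
print(round(implied_ratio(70, 250), 1))  # FA-450: ~3.6:1
```

The spread of implied ratios is a reminder that effective capacity depends heavily on how compressible and duplicated the customer's data actually is.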
Customers can scale the devices, upgrading an FA-405 into an FA-420, and an FA-420 into an FA-450.
In 2014, all Pure products gained a controller hardware upgrade with replication included in the Purity OS, version 4.0.
Startup Skyera entered the flash array market in 2012 with a series of iSCSI- and NFS-connected arrays that use MLC flash in conjunction with data deduplication and compression.
Its skyHawk products comprise three models, with 12TB, 22TB or 44TB of raw capacity, and all coming in a 1U half-height form factor.
SkyHawk arrays use a custom Asic (application-specific integrated circuit) driven by Skyera's SCOS operating system. That sits above so-called storage blades comprising commodity MLC flash dies.
Later in 2014, Skyera plans to launch skyEagle, which will scale to 650TB and include Fibre Channel access. SkyEagle will also include snapshots, replication and high availability, features that are missing from skyHawk.
Tegile Systems is primarily a hybrid flash array maker with some data acceleration smarts, but the company has added an all-flash model – the T3800 – to its line-up of HA-prefixed hybrid flash multiprotocol (block and file access) storage arrays.
The T3800 is a 2U box that starts with 48TB of raw flash in SanDisk 2TB eMLC drives. It offers 350,000 IOPS and scales to an effective (ie, with five times data reduction) capacity of nearly 1.7PB when two Tegile 4U expansion boxes are added (336TB raw).
Tegile uses flash as cache and an OS based on ZFS that has been tweaked to provide data deduplication; compression; Raid enhancements; and a performance-boosting feature called Metadata Accelerated Storage System (Mass). Mass deals with data via metadata headers rather than the full copy, and these are kept in cache or SSD tiers.
Advanced storage services include data deduplication, compression, thin provisioning, snapshots and remote replication.
SolidFire targets cloud service providers with its Fibre Channel and iSCSI block storage. With cloud in mind, it has automation and multi-tenancy functionality. Administrators can assign storage volumes with different characteristics to different customers.
SolidFire arrays are designed to accommodate a range of workloads, not just those that require high performance. Data deduplication, compression and thin provisioning are built in to help lower the cost per gigabyte for operations outside of Tier 1 or Tier 0.
SolidFire uses MLC flash drives and scales in 1U nodes that go from four up to 100, with I/O performance reaching up to five million IOPS.
SolidFire’s all-flash array product lines are differentiated by drive size and capacity. The SF3010 uses 300GB flash drives for 3TB of capacity, the SF6010 uses 600GB drives for 6TB, and the SF9010 uses 960GB drives for about 9.6TB.
In September 2014, SolidFire added the 2405 and 4805 arrays in a move aimed at enterprise customers and away from its cloud provider focus. The 2405 and 4805 both come in 1U form factor nodes with 50,000 IOPS per node, like most of its other arrays (the 75,000 IOPS 9010 being the only exception).
The 3010 and 6010 are set to be phased out by the end of 2014.
SolidFire’s Element operating system version 6 – codenamed Carbon – added heterogeneous clustering between SolidFire arrays, real-time replication and backup and restore integrated with cloud management platforms.
In September 2013, Violin was, according to Gartner, the market leader in all-flash arrays, with a 19% share. Computer Weekly has no up-to-date figures on that, but the year since has been a rough one for Violin.
The trigger was its IPO in September 2013, in which the company raised $162m, selling shares for $9 initially, but seeing them fall to $7 on the same day and to $3 by the end of November 2013. Investors are thought to have been scared off by how quickly Violin had used its cash reserves.
But since the start of 2014, Violin has gained a new CEO and sold off its PCIe flash card business. In June 2014, it upgraded its Concerto all-flash arrays – re-branding the product the Concerto 7000 series – with the addition of advanced storage functionality.
It added synchronous replication to 100km; asynchronous replication; WAN optimisation, including data deduplication, compression and bandwidth throttling; snapshots; LUN mirroring; continuous data protection (CDP); backup app integration; and thin provisioning.
Violin’s 6000 series flash arrays, which form the hardware base of the Concerto 7000, come with MLC or SLC (single-level cell) options. Usable capacities range from 11TB to 44TB in 3U units, and claimed 4K 70/30 read/write IOPS span 600,000 to 900,000 (MLC models) and 450,000 to 1,000,000 (SLC). Connectivity is Fibre Channel, iSCSI or InfiniBand.
Violin also makes the Windows Flash Array NAS flash-powered box with Microsoft Windows Storage Server NAS software.
Whiptail was bought by Cisco in late 2013 for $415m and speculation was rife about how the networking behemoth would slot the all-flash array into its product ranges.
One line of thought was that the acquisition of Whiptail would create friction with its storage partner EMC in the vBlock alliance that sells converged storage/server/networking products, as well as with NetApp, with which it partners to sell FlexPod converged stacks.
And that seems to be coming to pass, with Cisco sales teams selling Whiptail as standalone storage and bundled with Cisco UCS servers.
The 2U Accela can scale to 24TB and achieve 250,000 write IOPS. Invicta scales up to 144TB, and is made up of 1U storage routers plus 2U storage nodes. Capacity and performance scale by adding storage nodes. Invicta Infinity uses Racerunner OS version 5 (the other products run v4.5) and scales via 30 nodes to 720TB and up to four million IOPS.
Connectivity is Fibre Channel, Ethernet or InfiniBand, and Whiptail nodes connect to each other through InfiniBand for high availability.