
IBM DeepFlash 150 latest in all-flash aimed at file, object and analytics use cases

IBM’s DeepFlash 150 adds to the emerging trend of all-flash storage aimed at file, object and analytics use cases with option for petabyte-scale storage using IBM Spectrum Scale

IBM has launched its latest all-flash array – the IBM DeepFlash 150 – aimed at unstructured data, big data analytics and social media and mobile content.

The DeepFlash 150 comes in a base unit of 3U with capacities of 128TB, 256TB or 512TB. It doesn’t come cheap, however, with prices ranging from $550,000 to $2m for those capacity points.

The product can come as SAN-like shared storage with no bells or whistles – dubbed a JBOF (Just a Bunch of Flash) – or can be deployed in conjunction with IBM’s Spectrum Scale (formerly GPFS) parallel file system.

This provides scale-out NAS file access plus snapshots, replication, compression and encryption. When deployed with Spectrum Scale, cluster capacities can scale to petabytes.

Each node comes with four 12Gbps SAS connections as control and host server interfaces.

To date, all-flash array products have largely been aimed at accelerating structured data for applications and databases. DeepFlash 150 adds to a more recent trend of making the performance of all-flash available to file and object workloads, such as with Pure Storage’s FlashBlade.

IBM said DeepFlash is well-suited to in-memory analytics use cases such as SAP Hana.

As with FlashBlade, the DeepFlash 150 uses 8TB custom flash cards, with 16, 32 or 64 cards making up the three capacity points. The cards come from SanDisk and use MLC flash, although IBM indicated it may move to lower-cost TLC in future.

DeepFlash is not expected to compete with the low-latency performance of IBM’s all-flash FlashSystem arrays, but it is designed to be a step up from spinning disk HDDs for unstructured data and analytics use cases.
