Nand flash as a technology has been around since the late 1980s, but it was only widely adopted in enterprise storage in the past eight to 10 years.
The main modes of flash storage deployment have been to implement the technology either as traditional drive-format solid-state disks (SSDs) or as PCIe add-in cards.
But, as the market for flash storage continues to develop, we have started to see a range of technologies and methods of deployment outside this mainstream – many of which could represent an interesting direction for storage in the coming years.
DSSD and Mangstor
The initial selling point of flash was raw speed, measured either as throughput (MBps) or IOPS (input/output operations per second), both delivered at low latency. The key numbers have typically been up to 1,000,000 IOPS and less than 1 millisecond of latency.
These performance figures seemed incredible when flash was introduced to the enterprise. There was a lot of talk in the industry about “over-delivering” input/output (I/O) capacity that couldn’t be fully exploited.
However, times change, and with the ongoing march of Moore’s Law, we have continued to see performance gains in processor and memory speeds. This has resulted in the need for a new breed of storage system – the ultra-fast appliance.
The ultra-fast systems, as developed by EMC DSSD and Mangstor, represent a niche market in which extremely high throughput is delivered at very low latency.
As an example, Mangstor claims latency figures as low as 110 microseconds (read) and 30 microseconds (write), at least an order of magnitude faster than today’s all-flash arrays. A single appliance can deliver up to 3 million IOPS.
EMC recently launched the so-called rack-scale flash DSSD D5, which offers 10 million IOPS with average latencies of around 100 microseconds and 144TB in 5U of rack space. A DSSD D5 supports up to 48 servers over a mesh of NVMe PCIe Gen 3 connectivity.
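To put such figures in perspective, Little's law links throughput and latency to the number of I/O requests that must be outstanding at any moment. The short sketch below applies it to the quoted DSSD D5 numbers; the derived concurrency is our own back-of-envelope arithmetic, not a vendor claim.

```python
# Little's law: in-flight requests = IOPS x latency (latency in seconds).
def concurrency(iops, latency_us):
    """Average number of outstanding I/Os needed to sustain this rate."""
    return iops * latency_us / 1_000_000

# Quoted DSSD D5 figures: 10 million IOPS at ~100 microseconds.
dssd_d5 = concurrency(10_000_000, 100)
print(dssd_d5)  # 1000.0 -- roughly 1,000 I/Os in flight across all hosts
```

The point of the calculation is that such an appliance is only fully utilised when many servers drive it in parallel, which is consistent with the 48-server mesh design.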
The ability to achieve these figures requires both a bespoke architecture (DSSD has custom flash modules, Mangstor uses custom PCIe SSDs) and a new mode of connectivity.
These appliances don’t connect over Fibre Channel or Ethernet, but use low latency protocols such as NVMe over RDMA and PCIe. The sacrifice here is in flexibility and scalability.
These protocols have restrictions on cable distances and implement a more dedicated point-to-point network, much like with SCSI 20 years ago. In addition, the advanced features of traditional arrays are typically missing (including data protection) to achieve such high performance.
Ultra-fast appliances will see uses in finance and analytics – anywhere the latency of an I/O request is critical. This makes them unlikely to be deployed by the typical IT organisation.
An alternative to building a faster array is to look at moving storage closer to the processor and cut the distance an I/O request needs to travel to persistent storage.
Companies such as Diablo Technologies and Netlist have introduced what is termed either Memory Channel Storage or Storage Class Memory, but is essentially Nand flash deployed in a Dimm form factor identical to that used by server Dram.
Diablo partners with SanDisk to produce ULLtraDimm, which comes in either 200GB or 400GB capacities. Each device is capable of delivering up to 140,000 IOPS (read) and 44,000 IOPS (write) with write latencies as low as 5 microseconds. This is an order of magnitude faster than the ultra-fast appliances.
Unfortunately, NVDimm technology can't simply be dropped into any server. The hardware Bios needs to be modified to support it, and drivers are required in the operating system so the difference between volatile and non-volatile memory can be exposed to the application.
The potential of the technology could be enormous. Dram is relatively expensive and NVDimm bridges the price performance gap between Dram and traditional flash. As an enabling technology, in-memory databases and hyper-convergence seem the most obvious workloads for NVDimms, where the server/node is treated as a unit of resilience (or failure).
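As a rough illustration of what byte-addressable persistence looks like to an application, the sketch below memory-maps a file and stores data with ordinary memory writes. A real NVDimm deployment would expose the device through the Bios and driver support described above (on Linux, typically a DAX-capable filesystem or a library such as libpmem); the ordinary file used here is a stand-in, so the sketch shows the programming model rather than true persistence guarantees.

```python
import mmap
import os

path = "nvdimm_demo.bin"  # stand-in for a file on persistent memory

# Reserve one page of backing store.
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Map it and write with plain memory operations -- no read()/write() calls.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    mm[0:5] = b"hello"  # a simple store into the mapped region
    mm.flush()          # push the dirty page towards the medium
    mm.close()

# The data survives the mapping being torn down.
with open(path, "rb") as f:
    assert f.read(5) == b"hello"
os.remove(path)
```

The attraction of the model is that loads and stores replace the block I/O stack entirely, which is where the microsecond-class latencies quoted above come from.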
The outliers in the use of flash aren’t all focused on high performance or low latency.
Cost is one determining factor in flash adoption and, in particular, the $/GB ratio that still determines many product purchases. One way in which costs have been reduced is by storing more pieces of information in each flash cell.
Initially, SLC technology stored one bit of data per cell, MLC stored two and TLC three. Toshiba is forecasting that QLC (quad-level, or four bits per cell) technology will be available in 12 to 24 months, with the prospect of drives that can store up to 88TB of data each.
The downside to QLC and the progression to higher bit density per cell is the relative lifetime of the media, which worsens as more bits are stored in each cell.
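The density/endurance trade-off follows directly from the electronics: a cell holding n bits must distinguish 2^n charge states, so the margin between states shrinks as n grows. A minimal sketch of the progression, using only the cell types named above (the endurance consequence is stated qualitatively, as real figures vary by process and vendor):

```python
# Each extra bit per cell doubles the number of charge states the cell
# must distinguish, shrinking the voltage margin between states -- the
# root cause of falling endurance as density rises.
cell_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

for name, bits in cell_types.items():
    states = 2 ** bits
    print(f"{name}: {bits} bit(s)/cell, {states} charge states, "
          f"{bits}x SLC capacity on the same die area")
```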
However, with the right controller algorithms and the use of other flash products for write-intensive workloads, QLC could get to a point where the humble hard disk drive is replaced in all but the most long-term archive requirements. In fact, many IT departments may move to simply having flash and tape in their datacentre.
We’ve talked about performance and capacity in flash. Another area where there is outlier activity is in product form factor.
SSDs initially emulated the hard drive and were packaged in 3.5in and 2.5in formats. Alternatively, they were delivered as add-in cards that plugged into the PCIe bus.
M.2 is a different form factor altogether, developed from the laptop market. It provides up to 512GB of flash on a device that can be as little as 1.5mm thick, 22mm wide and up to 110mm long.
These products support PCIe and NVMe standards and might offer the ability to build a highly dense storage array that consumes relatively low power.
The challenge to flash
Of course not all future developments of storage are based around flash technology.
The imminent arrival of 3D-XPoint from Micron/Intel could change the dynamics of the storage industry entirely, offering 1,000 times the endurance and performance of flash with a density 10 times that of Dram.
If 3D-XPoint delivers at an acceptable price point, some of these outliers may be short lived or not see the light of day at all.