As businesses develop their infrastructure and applications, they find that storage is frequently the main bottleneck affecting the customer experience.
Traditional storage input/output (I/O) is insufficient for the fast processors and networks being deployed. Flash storage has become popular because it is typically three to 10 times faster than disk, so the most demanding applications now overwhelmingly use flash storage.
Storage remains a rapidly growing market, fuelled by explosive growth in data and a flood of new applications that analyse these massive data collections. These applications are straining under the intense processing burden. As demands continue to increase, even flash is proving too slow.
We need new technologies and new architectures to keep ahead of this unrelenting surge.
Storage-class memory is the next ‘arms race’
With demand for speed rising quickly, storage systems are under pressure to become faster, denser and cheaper while consuming less power. In-memory processing – with its apparently insatiable appetite for memory – is also causing a collision between the desire for ever-increasing memory configurations and the cost and power requirements of dynamic random access memory (DRAM).
These developments combine to form a near-perfect storm for the adoption of storage-class memory (SCM) – the new buzzword for various flavours of next-generation non-volatile memory (NVM).
Early movers delivering products that feature the next generation of persistent memory technology will earn billions of pounds per year, starting a few years from now, and the stakes will rise over time. SCM technology will have multiple insertion points into the ecosystem, all with high value propositions.
In rough order of their disruptive impact, the three major use cases for this technology are:
- The next generation of flash replacement technology will revolutionise external storage.
- NVM extensions to DRAM capacity will catalyse larger in-memory processing.
- Persistent memory semantics can revolutionise system architectures.
As good as current flash technology is for performance – and, increasingly, for price too – it is still about four decimal orders of magnitude slower than DRAM. All of the potential emerging SCM technologies offer the opportunity to lop off at least a couple of those decimal places.
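The size of that gap can be sketched with a few illustrative latency figures. The numbers below are rough assumptions chosen only to match the orders of magnitude described above, not measurements of any particular device:

```python
import math

# Assumed, illustrative read latencies in nanoseconds (not measurements):
# DRAM ~100 ns, NAND flash ~1 ms, and an SCM candidate sitting 10-100x
# faster than flash, as the text suggests.
latency_ns = {
    "DRAM": 100,
    "NAND flash": 1_000_000,
    "SCM (assumed)": 10_000,
}

def orders_slower_than_dram(device):
    """Decimal orders of magnitude between a device's latency and DRAM's."""
    return math.log10(latency_ns[device] / latency_ns["DRAM"])

print(orders_slower_than_dram("NAND flash"))     # 4.0
print(orders_slower_than_dram("SCM (assumed)"))  # 2.0
```

On these assumed figures, flash sits four decimal places behind DRAM and the SCM candidate claws back two of them.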
A new class of solid-state memory that offers 10 to 100 times the performance of current flash will have a ready market for those wishing to keep their traditional storage architectures and simply boost their performance. Since most flash development is focused on improving density and cost rather than speed, these new technologies threaten many future flash prospects as well. With such a performance boost, new SCM can command a price premium over flash, but it must be cheaper than DRAM. Intel’s 3D XPoint, for example, will almost certainly see its first incarnation as a faster substitute for current flash storage in SSD, NVMe or PCIe-connected form factors.
A more exciting application is the mounting of SCM on system memory dual in-line memory modules (DIMMs) and accessing it with memory semantics through the onboard system memory controller, thereby expanding the amount of addressable memory in a system. This obviously implies changes in the DIMMs and additional firmware and possibly operating system-level software.
While difficult, the reward here is truly transformational, and multiple vendors are pursuing this technology. Today’s systems get very expensive and consume excessive power when they are configured with large amounts of DRAM. Current analytics workloads – and especially future applications of cognitive computing and an exploding internet of things – require huge amounts of in-memory processing. With its promised combination of high density and low power, SCM will answer these demands nicely.
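A back-of-envelope comparison shows why the economics are attractive. The prices and capacities below are purely illustrative assumptions, not vendor figures:

```python
# Assumed, illustrative memory prices in $/GB (not vendor figures).
PRICE_PER_GB = {"DRAM": 8.0, "SCM": 2.0}

def config_cost(dram_gb, scm_gb=0):
    """Total memory cost for a node with the given DRAM/SCM capacity split."""
    return dram_gb * PRICE_PER_GB["DRAM"] + scm_gb * PRICE_PER_GB["SCM"]

# A 6 TB in-memory node built two ways:
all_dram = config_cost(6144)    # DRAM only
mixed = config_cost(512, 5632)  # small DRAM cache fronting bulk SCM

print(all_dram, mixed)  # 49152.0 15360.0
```

Even with these generous assumptions, shifting the bulk of capacity from DRAM to a cheaper SCM tier cuts the memory bill severalfold, which is the heart of the in-memory value proposition.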
However, this use case presents some notable challenges that the industry must overcome, and soon. The required system-level changes, which only major players in the ecosystem can address, remain a barrier for some SCM technologies, but these changes are coming. Intel is widely rumoured to be preparing to introduce 3D XPoint on DIMMs with its Purley server platform in 2017, and it is likely that other players will try to introduce compatible competitive products. Samsung’s offering is likely to be based on an evolution of conventional flash, as is the first iteration of SanDisk’s technology.
The long and twisty highway that led us to today’s computer industry is littered with the wreckage of companies offering truly breakthrough technology that required “only a slight software change” to work. None of the current players will make that mistake. Solutions coming to market will have semantics identical to DRAM and will require no application code changes – at most, a system-level driver equivalent shipped with the system firmware package. First to market will likely be Diablo Technologies’ Memory1, which uses system-level firmware to allow on-DIMM flash to be accessed with memory semantics.
The end game for an SCM DRAM replacement would be a technology at least as fast as DRAM, one that does not require the intermediation of firmware and a DRAM cache to achieve DRAM performance. Right now, the only viable technologies in this class on any credible trajectory to real-world SCM products are Nantero’s carbon nanotube memory and several flavours of resistive RAM, which have been demonstrated at various stages of prototype development. Memristors would also be a good match if they can ever be produced in volume.
Merging memory and storage tiers
The horizon use case for SCM is to allow persistent memory semantics in the main system address space. The ability to let software treat a portion of the system address space as persistent has the potential to totally change the architectures supporting a wide range of problems. The eventual result will be computing systems that have only a single tier of storage – merged DRAM and SCM for both dynamic and persistent data – with the file or object-level tier completely disappearing. The resultant software will be greatly simplified and costs reduced as a whole layer of architecture – the external storage layer and its attendant network, complexity, and most of its management cost – vanishes.
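The programming model can already be sketched with ordinary memory-mapped files, which operating systems support today. The file below is a stand-in assumption: real SCM would be exposed as directly addressable memory (for example via a DAX mapping) rather than through the page cache, but the load/store-plus-flush pattern is the same:

```python
import mmap
import os
import tempfile

# Ordinary file standing in for a persistent-memory region. On real SCM
# hardware this would be a direct mapping of the device itself; this is a
# sketch of the programming model, not of the hardware path.
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")
size = 4096

with open(path, "wb") as f:
    f.write(b"\x00" * size)          # pre-size the backing region

with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), size)
    buf[0:5] = b"hello"              # a plain store into mapped memory
    buf.flush()                      # durability point (msync analogue)
    buf.close()

# The bytes survive the mapping: the "storage" write was just a store.
with open(path, "rb") as f:
    print(f.read(5))                 # b'hello'
```

Under this model the persistence boundary collapses into a store plus a flush, which is exactly what makes the external storage layer – and its network, protocol stack and management overhead – removable.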
This transition will probably be an evolution over a decade in the making, involving changes in systems design, applications, operating systems and, most importantly, the architectural thinking of generations of practitioners.
This is an edited extract from the Forrester report, “Brief: Storage wars will deliver extreme speed sooner than expected”, by Richard Fichera.