Upcoming, faster solid-state storage technologies will change IT hardware architectures and force software to be rewritten to take advantage of them.
Technologies such as Intel/Micron’s 3D XPoint and HPE’s Memristor – 1,000 times faster than the current generation of NAND flash – will be used as cache memory. That means they will reside between RAM and flash storage and be accessed at byte level by applications, which will need to be rewritten to do so where they currently access back-end storage.
“We are talking two Olympics away, so seven or eight years until commercial products are available,” said McDonald.
McDonald said flash as it is often sold today – in a form factor that mimics traditional hard drives and requires storage traffic to traverse the SCSI software stack – is an inappropriate model for these upcoming persistent memory technologies.
“You put in SSDs and its software stack and find SCSI communication takes up microseconds of compute time,” he said. “So, if the software stack takes up half the latency, you need to get rid of it, but you can’t with SSD technology.”
Persistent memory would be byte-addressable – able, for example, to change just the bytes representing characters in a database record – rather than requiring kilobyte-sized blocks to be addressed in storage via an I/O software stack.
Also, operating systems currently use context switching to move between tasks while one task waits on I/O to or from storage. But if sufficient persistent memory sat in a layer above storage, with latencies measured in nanoseconds, context switching would not be required.
For all these reasons, applications and operating systems would need to be rewritten to deal with the new architecture, said McDonald. But he added that persistent memory would need to be used as cache, close to the CPU, because “more than a few feet away starts to place serious latency limits”.
The SNIA and NetApp man said a future datacentre of, say, 50PB would comprise 5PB of flash and 0.5PB of persistent memory, in a tiered architecture of DRAM, persistent memory, flash, spinning disk and Glacier/tape.