
3D XPoint et al to force architecture and software changes

New solid state technologies such as 3D XPoint will form a second tier of memory, bringing changes to hardware architectures and forcing software to be rewritten, says SNIA solid state group co-chair

Upcoming, faster solid state storage technologies will bring changes to IT hardware architectures and force a rewriting of software to take advantage of them.

Technologies such as Intel/Micron’s 3D XPoint and HPE’s Memristor – 1,000 times faster than the current generation of NAND flash – will be used as cache memory. They will sit between RAM and flash storage and be accessed at byte level, so applications that currently talk to back-end storage will need to be rewritten to address them directly.

Those are the views of Alex McDonald of the NetApp CTO’s office, who is co-chair of the Storage Networking Industry Association (SNIA) solid state storage initiative.

“We are talking two Olympics away, so seven or eight years until commercial products are available,” he said.

“What we’re talking about is ‘persistent memory’, 1,000x slower than DRAM but 1,000x faster than flash, which, in turn, is 1,000x faster than spinning disk.”

McDonald said flash as it is often sold today – in a form factor that mimics traditional hard drives and forces data through the SCSI software stack – is inappropriate for these upcoming persistent memory technologies.

“You put in SSDs and their software stack and find SCSI communication takes up microseconds of compute time,” he said. “So, if the software stack takes up half the latency, you need to get rid of it, but you can’t with SSD technology.”
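McDonald’s point about the stack eating half the latency can be seen with some back-of-envelope arithmetic. The figures below are illustrative assumptions, not measurements: a fixed software-stack cost is negligible against a spinning disk, but dominates once the medium itself responds in nanoseconds.

```python
# Illustrative latency budget (assumed, order-of-magnitude figures) showing how
# a fixed SCSI-style software stack cost dominates as storage media get faster.
STACK_US = 25  # assumed software/SCSI stack overhead per I/O, in microseconds
MEDIA_US = {
    "spinning disk":     10_000,  # ~10 ms seek + rotate
    "NAND flash":        100,     # ~100 us read
    "persistent memory": 0.1,     # ~100 ns access
}

for medium, media_us in MEDIA_US.items():
    total = STACK_US + media_us
    share = STACK_US / total * 100  # stack's share of end-to-end latency
    print(f"{medium:>17}: stack is {share:.0f}% of {total:.1f} us total")
```

On these assumed numbers the stack is a rounding error for disk, a fifth of the latency for flash, and effectively all of it for persistent memory – which is why the article argues the stack has to go.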


Persistent memory would be byte addressable – able, for example, to change just the bytes representing a few characters in a database record – rather than forcing applications to address kilobyte-sized blocks of storage through an I/O software stack.

Also, operating systems currently use context switching to move between tasks while one task waits on I/O to or from storage. But if enough persistent memory sat in a layer above storage, with latencies measured in nanoseconds, that context switching would not be required.
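The reasoning can be reduced to a simple rule of thumb: yielding the CPU only pays off if the device will keep the task waiting for longer than the switch itself costs. The context-switch cost below is an assumed figure for illustration, and the helper function is hypothetical, not from any OS API.

```python
# Back-of-envelope version of the article's argument: block (and context
# switch) only when the expected wait exceeds the switch overhead.
CSWITCH_US = 5  # assumed cost of a context switch, in microseconds

def should_context_switch(device_latency_us):
    """Yield the CPU only if the wait clearly outlasts the switch cost."""
    return device_latency_us > 2 * CSWITCH_US

print(should_context_switch(10_000))  # spinning disk: True -- yield the CPU
print(should_context_switch(100))     # NAND flash: True
print(should_context_switch(0.1))     # persistent memory: False -- just wait
```

At nanosecond latencies the wait is cheaper than the switch, which is why a persistent memory tier removes the need for the context switching described above.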

For all these reasons, applications and operating systems would need to be rewritten to deal with the new architecture, said McDonald. But he added that persistent memory would need to be used as cache and near to the CPU because “more than a few feet away starts to place serious latency limits”.

The SNIA and NetApp man said a future datacentre of, say, 50PB would comprise 5PB of flash and 0.5PB of persistent memory, in a tiered architecture of DRAM, persistent memory, flash, spinning disk and Glacier/tape.
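The capacities McDonald quotes follow a roughly 10:1 ratio per tier. Sketching the pyramid makes that explicit; the disk/tape and persistent memory figures come from the article, while the DRAM figure is an assumption continuing the same ratio.

```python
# The tiered capacity pyramid for the article's hypothetical 50PB datacentre.
# Flash and persistent memory figures are from the article; DRAM is assumed,
# continuing the 10:1 per-tier ratio.
tiers_pb = {
    "disk + tape":       50.0,
    "flash":             5.0,
    "persistent memory": 0.5,
    "DRAM":              0.05,  # assumption
}

for name, capacity in tiers_pb.items():
    print(f"{name:>17}: {capacity:g} PB")
```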
