Poor applications; they're being held back by slow storage and by volatile, capacity-limited memory. If only you could have DRAM with the capacity of storage, and storage with the speed of DRAM. IBM says you can, or rather you will, and the technology is called racetrack memory.
A server would have access to terabytes of nonvolatile and low-latency memory that could store entire working sets of data for applications. Disk drive arrays would become vast repositories for longer-term data storage, or maybe not; IBM reckons its storage-class memory would have the same cost per gigabyte as disk storage.
That's a huge assertion. How does IBM justify it?
IBM's Almaden research centre has been working on racetrack memory, which is built from nanowires: U-shaped permalloy wires that IBM says are 1,000 times thinner than a human hair. Magnetic “domains” sit along these wires, their positions defined by so-called pinning sites on the wire, and precisely controlled electrical currents can both set the magnetic polarity of a domain's walls and move domains predictably along the nanowire.
The predictability is needed because the domains, which carry the binary 1 or 0 information via their magnetic polarity, don't stay in one place; they move along the wire like racing cars along a track. To set or read such a mobile bit, you need to know where it will be along the wire. It's fiendishly complicated technology.
The researchers think they have this problem licked and are set to build a racetrack memory device that could be portable and hold a year's worth of movies, whatever that means. Anyway, suppose they can do it; suppose they can build racetrack memory modules that could store, say, 30 TB in a 3U rack unit.
Well, gee whiz; you can do that now with NAND flash. It costs a fortune, but it can be done. NAND is slower, in the latency sense, than DRAM, so let's give the Almaden people the benefit of any doubt and agree they could produce a 1U, 10 TB racetrack memory box that has DRAM latency. How could they do this at the same cost per gigabyte as disk drive arrays?
The build process would be a semiconductor one, generally like an existing DRAM or NAND fab, meaning it would cost several billion dollars to build the fab and it would need to produce millions of wafers with racetrack dies on them.
Bear that in mind while we note that there are competing technologies aiming to combine DRAM speed, flash nonvolatility and disk array capacity. Samsung is working on spin-transfer torque RAM (STT-RAM). Hynix and others are working on phase-change memory, and HP has its memory resistor (memristor) technology.
Bear that in mind while we ask a question: Why is NAND so expensive? Flash costs a lot to buy because it is not made in large enough quantities and because the number of bits per flash cell isn't high enough. Each flash cell currently has two bits at most, with three-bit flash coming next year.
How many bits will a racetrack nanowire hold? We don't know. It could be that a 3U racetrack device could store 150 TB, five times the flash figure above.
Are these numbers nonsense? IBM itself has said that an average transaction-driven data centre today uses approximately 1,250 racks of disk drive storage. It asserts that, in 2020, that data could be held in one rack: a staggering, breathtaking 1,250-to-1 reduction in space and increase in storage density. By that reckoning, a 150 TB, 3U rack shelf seems almost modest.
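IBM's ratio can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, in which the per-rack disk capacity and the shelf count are our own illustrative assumptions rather than IBM figures:

```python
# Back-of-envelope check of IBM's 1,250-to-1 consolidation claim.
# The first and third figures are illustrative assumptions, not IBM's.

DISK_RACK_TB = 500       # assumed capacity of one current-era disk drive rack
RACKS_REPLACED = 1_250   # IBM's claimed number of racks consolidated into one
SHELVES_PER_RACK = 14    # a standard 42U rack divided into 3U shelves

total_tb = DISK_RACK_TB * RACKS_REPLACED       # data the single rack must hold
per_shelf_tb = total_tb / SHELVES_PER_RACK     # implied capacity of each 3U shelf

print(f"Total to consolidate: {total_tb / 1000:.0f} PB")
print(f"Implied per-3U-shelf capacity: {per_shelf_tb / 1000:.1f} PB")
```

On those assumptions the single rack must hold some 625 PB, or roughly 45 PB per 3U shelf, which is why a 150 TB shelf looks almost modest next to IBM's own claim.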
Such a dense nanowire would certainly lower the cost per gigabyte, but would IBM sell this stuff at the same cost per gigabyte as disk drive arrays? Surely not if its fabled characteristics come true and it does provide DRAM speed and flash nonvolatility.
IBM will surely price it above flash initially; you are getting more capacity for your money, right? And more speed, so why on earth would IBM's price economists consider it worth less than flash? Also, unless IBM licenses its racetrack IP, Big Blue will be the only supplier. Only its servers will have racetrack memory with its vast capacity, so customers would face an element of lock-in to IBM and another of monopolistic supply. Oops, we didn't anticipate that.
Will IBM license its racetrack IP to existing memory and NAND fab operators, to Hynix, Samsung, Micron, Toshiba and others? That would provide twin assurances of capacity and competition, meaning lower prices and openness. It would also, though, enable IBM competitors Cisco, Dell, HP, Intel and others, the server computer suppliers, to use the stuff as well and so compete with IBM's own servers.
If racetrack memory technology delivers the goods, IBM has some interesting strategic questions ahead of it. Reaching disk drive array cost-per-gigabyte levels looks unattainable unless IBM licenses its racetrack IP and the semiconductor industry spends tens of billions of dollars building the foundries needed to produce the gazillions of nanowire bits that would keep prices per module low.
It also requires that racetrack memory win out over rival technologies such as STT-RAM, phase-change memory and the memristor. The list of things that must go right between now and 2020 to deliver a 1,250-to-1 increase in storage density at DRAM speed and disk drive array economics keeps growing. Racetrack memory has lots of gee-whiz technology charisma, but don't bet your data centre farm on IBM delivering the goods on this one.
Chris Mellor is storage editor with The Register.