Just how far can the areal density of hard disk drives be increased?
To answer that question we must first imagine our way onto the surface of a hard disk drive and examine the fundamental features that determine its areal density.
The first concept to understand is that hard disk drive technology is all about edges and transitions. On the storage media there are areas where there is no stored data and areas where there is. All storage devices rely on the edges, the transitions between these areas, to work.
As an exercise in examining what determines hard drive areal density, let us imagine a device that stores two bits and so has four areas on its media surface. First of all, there is an empty area which never stores data. Then there is the first data bit, then another empty area and finally the second data bit.
We can envisage media that passes under a head or a head that passes over media or both. It doesn't matter for this discussion; we'll imagine moving media and a motionless head.
The head needs to detect stored data; binary ones or zeros are stored in the media in some way. How does it do that?
Irrespective of the recording technology and the data encoding method, the head detects edges, transitions, where an empty region starts and where stored data begins.
Let's pass the media we have imagined under our head and see what it detects. First it detects nothing, then a data bit, then nothing, then a data bit and then nothing again. How does it detect a data bit? It looks for some signal from the recording medium and, as the medium passes beneath the head, that signal state changes from nothing to something. The edge is what counts: the nothing-to-something edge. How big does that edge have to be, and how sharp?
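The idea can be sketched in a few lines of Python. This is purely illustrative – the function name and the 0/1 signal model are invented for this sketch, not how real drive electronics work – but it shows that what the detector cares about is where the signal changes, not how long each region lasts.

```python
# Illustrative sketch only: model the head's readback as a sequence of
# signal levels and recover information by looking for edges (level
# changes), not by measuring the extent of each region.

def detect_edges(signal):
    """Return the positions where the signal level changes."""
    edges = []
    for i in range(1, len(signal)):
        if signal[i] != signal[i - 1]:
            edges.append(i)  # nothing-to-something, or the reverse
    return edges

# 0 = empty region, 1 = stored-data region passing under the head
trace = [0, 0, 1, 1, 1, 0, 0, 1, 0]
print(detect_edges(trace))  # → [2, 5, 7, 8]
```

Note that the long run of 1s in the middle produces exactly the same pair of edges as the single 1 near the end: the extent of the region adds nothing once its onset has been seen.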
Imagine the recording medium as physical terrain: a hill and then a valley, with the binary one the hill and the binary zero the valley bottom. The medium passes under the head, which detects nothing, and then either a dip (a cliff or slope down to a valley bottom) or a rise (a cliff or slope up to a hill). Once this change in state is detected, there is no need for any more valley or hill. What's important is not the size or extent of the valley or hill but its onset: the edge.
Advances in storage technology – storing more data in less space, i.e., increasing areal density – are all about shrinking the extent of the conceptual valley or hill and cramming in more edges. With longitudinal recording methods on hard disk drives, the bits were laid out like little oblongs: coffins strung out end-to-end along a disk track. The brilliance of perpendicular magnetic recording (PMR) was to flip those oblongs up on their ends and bury them in the recording surface, such that only one end was detectable by the read head.
But that was all that was needed, because it's the edge that matters. The buried mass of each PMR bit provided enough magnetic material – enough grains – to keep the recorded bit stable.
The only way to advance areal density again is to shrink the bits while retaining the edges: make the buried PMR coffins smaller. But then they become susceptible to interference from neighbouring bits, which are now closer, so bit values could flip or edge definition could degrade. Random thermal fluctuations could have the same effect. PMR will probably cease to be a usable technology as areal density approaches and passes 1Tbit/sq in.
Ways to combat this and preserve edge stability include changing the medium so that bits need a high temperature before their magnetic state can be flipped. This is heat-assisted magnetic recording (HAMR). But the technology obstacles are large: a new recording-media formulation and manufacturing process, plus new read/write head technology with a laser added to heat the bits. In addition, the bits must be located more precisely than ever before, and the laser's firing and heating effect has to be near-instantaneous so as not to prolong the data bit write process.
Another way to preserve bit edge definition is to cease relying on the random distribution of grains in the recording medium and instead pattern it, placing a set amount of magnetic material per bit with some kind of nano-scale guided media-building process. Such smaller bits might then need an insulating ring to prevent unwanted interference from neighbouring bits. It's easy to see that such bit-patterned media (BPM) is also going to be very difficult to devise and manufacture.
At least the read/write head designers could then concentrate on reading and writing the smaller bits without having to add heating technology to the head.
Once we get to the post-PMR technology that will take us far beyond today's 625Gbit/sq in areal density, up to 5Tbit/sq in and beyond, another problem rears its head. Each read/write head has to look for signals in an ever-greater disk area. It is as if what was once a football pitch is now an airfield, with vastly more data in it.
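To put those areal-density figures in perspective, a back-of-the-envelope calculation gives the size of a one-bit cell. This sketch assumes square bit cells, which real drives do not use (tracks are much wider than bits are long), so treat the result as a rough average only.

```python
# Rough sketch: side length of a square cell holding one bit
# at a given areal density. Illustrative arithmetic only.

NM_PER_INCH = 2.54e7  # nanometres in one inch

def bit_cell_side_nm(bits_per_sq_inch):
    """Side, in nm, of a square cell holding one bit."""
    area_nm2 = NM_PER_INCH ** 2 / bits_per_sq_inch
    return area_nm2 ** 0.5

print(round(bit_cell_side_nm(1e12)))  # ~25 nm at 1 Tbit/sq in
print(round(bit_cell_side_nm(5e12)))  # ~11 nm at 5 Tbit/sq in
```

At 5Tbit/sq in, the average bit cell is only a few dozen atoms across, which is why thermal stability and edge definition become such pressing concerns.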
The problem will then be a surfeit of edges, and detecting them fast enough to keep disk I/O rates up.
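The shape of that problem can be sketched numerically. A drive's random-I/O budget is roughly fixed by its mechanics (seek time plus rotational latency), so as capacity per head grows, the I/O operations available per gigabyte shrink. The seek and latency figures below are illustrative assumptions, not measured values for any particular drive.

```python
# Rough sketch of the access-density problem: mechanics cap the IOPS,
# so bigger drives serve fewer IOPS per gigabyte stored.
# The default timing figures are illustrative assumptions.

def iops_per_gb(capacity_gb, avg_seek_ms=8.5, avg_rot_latency_ms=4.2):
    """Random IOPS the drive can serve, divided by its capacity."""
    iops = 1000.0 / (avg_seek_ms + avg_rot_latency_ms)  # ops/second
    return iops / capacity_gb

# Same mechanics, quadruple the capacity: access density drops 4x.
print(iops_per_gb(1000))  # 1TB drive
print(iops_per_gb(4000))  # 4TB drive
```

Quadrupling capacity without changing the mechanics cuts the IOPS available per gigabyte to a quarter, which is exactly the squeeze the article describes.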
Can more read/write heads be added to drives, more than the one-head-per-platter surface we have at present? It would add expense, and mechanical reliability would most likely suffer as the component count went up. The traditional answer to this I/O density problem is to shrink the media size and increase the hard disk drive count in storage arrays.
So, 5.25-inch drives gave way to 3.5-inch ones, and we're currently in a transition to 2.5-inch disk drives. Toshiba thinks we might see 4TB, two-platter, 2.5-inch drives in 2016. But this reduction in physical HDD size cannot continue. Flash solid state drives (SSDs) will likely stop Toshiba's 1.8-inch disk drives in their tracks and threaten 2.5-inch disks longer term. How will the HDD industry retain its edge then? No one knows.
Chris Mellor is storage editor with The Register.
This was first published in July 2011