Part One: Solid state disk technology explained.
The reliability of solid state disks has improved markedly in recent years.
Flash memory has historically had a frustratingly finite lifespan: each cell can only be written to so many times before the semiconductors lose the physical properties that allow them to store data.
Improvements to the technology are extending the devices’ working lives to levels that compare comfortably with those of other drive types. Modern SSDs feature a technology called “wear levelling”, which ensures the same cells are not written to all the time. “The thing we do on the SSDs is spread the writes across the disk,” says Intel’s Casey.
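The idea behind wear levelling can be sketched in a few lines. The simulator below is illustrative only (real SSD firmware is far more sophisticated): on every write, a hypothetical flash translation layer redirects the logical block to the least-worn available physical block, so no single cell wears out ahead of the rest.

```python
class WearLevellingFTL:
    """Toy flash translation layer that spreads writes evenly."""

    def __init__(self, num_physical_blocks):
        self.erase_counts = [0] * num_physical_blocks
        self.mapping = {}  # logical block -> physical block

    def write(self, logical_block):
        # Pick the least-worn physical block not currently mapped.
        in_use = set(self.mapping.values())
        candidates = [p for p in range(len(self.erase_counts))
                      if p not in in_use]
        target = min(candidates, key=lambda p: self.erase_counts[p])
        self.erase_counts[target] += 1
        self.mapping[logical_block] = target
        return target

ftl = WearLevellingFTL(num_physical_blocks=8)
for _ in range(800):
    ftl.write(0)  # hammer a single logical block

# The 800 writes are spread evenly instead of landing on one block.
print(ftl.erase_counts)
```

Without the remapping step, all 800 erases would hit one physical block; with it, each of the eight blocks absorbs roughly a hundred.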
“We are starting to see two million hours mean time between failures,” Casey says. “We guarantee a million cycles of the cell – but a drive could produce five million. That translates into 350GB/day for five years for enterprise class SSDs and 20GB/day for five years in drives for consumers.”
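The back-of-the-envelope arithmetic behind such claims is straightforward: capacity multiplied by rated program/erase cycles gives the total volume that can be written, and dividing by the warranty period gives a daily write budget. The figures below are hypothetical, chosen purely to illustrate the calculation, not vendor specifications.

```python
def daily_write_budget_gb(capacity_gb, pe_cycles, years):
    """Daily write budget implied by capacity and rated P/E cycles."""
    total_writes_gb = capacity_gb * pe_cycles  # lifetime write volume
    return total_writes_gb / (years * 365)

# A hypothetical 64 GB drive rated for 10,000 cycles over five years:
budget = daily_write_budget_gb(capacity_gb=64, pe_cycles=10_000, years=5)
print(f"{budget:.0f} GB/day")  # → 351 GB/day
```

Wear levelling is what makes this arithmetic valid: only if writes are spread evenly can the whole drive's cycle budget be treated as one pool.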
SSDs’ most visible role to date has been in laptop computers like the MacBook Air and some Lenovo, ASUS and Toshiba models. Used in this way, SSDs lower laptops’ weight and boost their battery life, two desirable outcomes. SSDs are expected to become standard issue on laptops as soon as pricing permits.
In the enterprise, SSDs are expected to speed up applications by taking on the role currently performed by Fibre Channel drives in enterprise arrays.
“The problem with physical disks is that the head has to fly over the platter, find the data, then read it off,” says EMC Australia’s Clive Gold. SSDs have no such physical chore to perform, and therefore deliver data to their I/O bus more quickly than even the fastest conventional drives.
“SATA I/O is a third the speed of Fibre Channel,” Gold says. “Flash is thirty times as fast as Fibre Channel.”
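Taking Gold's two ratios at face value implies an even starker comparison against commodity drives, which is worth spelling out:

```python
# If SATA runs at a third of Fibre Channel speed, and flash at thirty
# times Fibre Channel speed, flash works out to ninety times SATA.
fibre_channel = 1.0           # normalised baseline
sata = fibre_channel / 3
flash = 30 * fibre_channel
print(flash / sata)           # → 90.0
```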
SSDs will therefore, he believes, become the natural home for data that businesses know must be accessed quickly. The overwhelming majority of data will remain on conventional disk, which is still a viable technology for data, such as video, that does not ask a hard disk to do a lot of physical work. But the data that applications request most often, and whose response times users are most sensitive to, will be placed on a tier of SSDs that sits in the same array as conventional disk, so that applications can retrieve it and present it to users (or to other computing devices) at pleasing speed.
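The tiering policy described above can be sketched as a simple classifier. Everything here is hypothetical (the datasets, the access log, and the threshold are invented for illustration): data accessed frequently is assigned to the SSD tier, everything else stays on conventional disk.

```python
from collections import Counter

def assign_tiers(access_log, hot_threshold=100):
    """Return {dataset: 'ssd' | 'hdd'} based on raw access counts.

    hot_threshold is an arbitrary illustrative cut-off; a real array
    would weigh recency, I/O size and quality-of-service policy too.
    """
    counts = Counter(access_log)
    return {name: ("ssd" if counts[name] >= hot_threshold else "hdd")
            for name in counts}

# A busy transactional database, a cold video archive, a warm index:
log = ["orders_db"] * 500 + ["video_archive"] * 3 + ["mail_index"] * 150
print(assign_tiers(log))
# → {'orders_db': 'ssd', 'video_archive': 'hdd', 'mail_index': 'ssd'}
```

This is the placement decision the article attributes to the array: hot transactional data earns the SSD tier, while sequential workloads like video stay on conventional disk.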
Bringing this scenario to reality is not, however, as simple as connecting SSDs to an enterprise array and sitting back to enjoy the speed boost. Vendors of enterprise arrays carefully vet the drives they say will work in their machines, as different SSD manufacturers have different ways of processing I/O.
Sun Microsystems Asia Pacific CTO Angus MacDonald believes SSDs have another role: replacing direct-attach storage in servers, to much the same end as the scenario that sees SSDs placed in storage arrays.
But in either scenario, SSDs are likely to strain arrays, servers and networks.
“SSD is so fast, it keeps responding and hogs resources. This creates a challenge for the way things like caching get done,” says EMC’s Gold.
“You can only get eight to ten SSDs into a chassis before you saturate the cache,” adds Sun’s MacDonald.
Gold believes that EMC has applied its experience building large arrays to the problem of coping with SSDs’ massive output and that its quality of service and other controls make SSDs usable today.
Sun’s MacDonald says the company has similar tools in the short-term pipeline, but is also working to help would-be SSD users manage the challenge of automatically figuring out which data belongs on the new, fast SSD tier. MacDonald says his company is working to make this classification a function of its ZFS file system. “A definite direction for ZFS is getting an inherent understanding of what tier data belongs on,” he says.
Other vendors are taking similar steps to make sure that SSDs can deliver for the enterprise.
And EMC’s Gold believes that soon, those technologies will be pressed into service in the mainstream.
“We have two or three years until Fibre Channel becomes obsolete,” he says.