IBM fleshes out flash and looks to the next solid state technologies

Feature

Contents

  • IBM's flash roadmap
  • TMS arrays
  • TLC technology
  • NAND flash
  • New technologies

IBM recently announced its roadmap for flash and a $1bn investment in its flash and solid state business.

The roadmap includes building more features into the all-flash array devices acquired from Texas Memory Systems (TMS), as well as ditching the single level cell (SLC) flash variants. It also includes the addition of flash to IBM’s existing V7000, DS and XIV arrays.

But the company is already looking beyond flash to the technologies that will come after. Indeed, flash is something of a “flash in the pan”, according to IBM virtual storage performance architect and "master inventor" Barry Whyte.

IBM’s flash roadmap

IBM set out the main planks of its flash roadmap in early April, when it revealed that the TMS RamSan flash array family would be rebranded IBM FlashSystem. It also announced that flash would be built into hardware products such as its V7000, DS and XIV, which will be offered as hybrid flash arrays, as well as its SONAS clustered NAS and ProtecTier data deduplication hardware, along with a planned big data analytics appliance.

Big Blue also announced that it would open 12 flash centres of competency, where customers can build proof-of-concept configurations.

TMS arrays will get more functionality and lose SLC

This week, IBM’s Barry Whyte put some more flesh on the bones of those announcements.

Chief among these is that the ex-TMS arrays will be reworked to provide advanced storage functionality, he said.

“The TMS arrays are pure boxes of flash so they lack functionality. At the moment, the main thing is to add SVC [SAN Volume Controller – IBM’s storage virtualisation box] to give snapshots, replication, tiering, etc. So, moving forward, we will see the integration of software intelligence into the hardware of the former TMS boxes.”
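
To give a flavour of what that sort of software intelligence does, the sketch below shows a minimal hot/cold tiering policy in Python. It is purely illustrative: the thresholds, extent map and method names are invented for the example and do not reflect how SVC itself is implemented.

```python
# Illustrative sketch of a hot/cold tiering policy of the kind a storage
# virtualisation layer adds on top of a plain box of flash. The thresholds
# and data structures are invented for the example; this is not SVC code.
from collections import Counter

class SimpleTieringPolicy:
    def __init__(self, promote_threshold=100, demote_threshold=10):
        self.access_counts = Counter()   # extent id -> I/Os in the current cycle
        self.tier = {}                   # extent id -> "flash" or "disk"
        self.promote_threshold = promote_threshold
        self.demote_threshold = demote_threshold

    def record_io(self, extent_id):
        """Called on every I/O to an extent."""
        self.tier.setdefault(extent_id, "disk")   # new extents start on disk
        self.access_counts[extent_id] += 1

    def rebalance(self):
        """Run periodically: promote hot extents to flash, demote cold ones."""
        moves = []
        for extent_id, count in self.access_counts.items():
            if count >= self.promote_threshold and self.tier[extent_id] == "disk":
                self.tier[extent_id] = "flash"
                moves.append((extent_id, "disk -> flash"))
            elif count <= self.demote_threshold and self.tier[extent_id] == "flash":
                self.tier[extent_id] = "disk"
                moves.append((extent_id, "flash -> disk"))
        self.access_counts.clear()       # start a fresh measurement cycle
        return moves

# Example: extent 7 gets hammered, extent 3 is barely touched
policy = SimpleTieringPolicy()
for _ in range(150):
    policy.record_io(7)
policy.record_io(3)
print(policy.rebalance())   # [(7, 'disk -> flash')]
```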

Whyte also indicated that IBM would drop the former-TMS single level cell (SLC) flash boxes. SLC is the best-performing of the flash technologies. Because it stores just one bit per memory cell, it offers the best endurance and is the least complex to manage in the system software. It is also the most expensive, however, and is being superseded by multi-level cell (MLC) flash combined with software that manages erase/write issues and therefore endurance.

“When we acquired TMS, it had the 700 SLC range and the 800 MLC range – we will see the SLC products dropped as we move forward," he said. "We could have developed our own [flash array] products, but TMS was floating itself for acquisition and we inherited what it was doing. We will see new versions of the TMS products.”

What about TLC?

While MLC is superseding SLC in the market, another flash technology is edging into view, namely triple level cell (TLC). As the name suggests, this packs three bits into each cell and promises to bring down the price of flash per GB even further, towards that of disk.
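
The trade-off behind these cell types is simple arithmetic: each extra bit per cell doubles the number of voltage states the controller must distinguish, which is why endurance falls as density rises. The short sketch below illustrates that relationship; the endurance figures are ballpark numbers of the sort commonly quoted at the time, not vendor specifications.

```python
# Rough comparison of NAND cell types. The endurance figures are ballpark
# ranges commonly quoted at the time, not any vendor's specification.
CELL_TYPES = {
    # name: (bits per cell, illustrative programme/erase cycle endurance)
    "SLC": (1, 100_000),
    "MLC": (2, 3_000),
    "TLC": (3, 1_000),
}

for name, (bits, pe_cycles) in CELL_TYPES.items():
    states = 2 ** bits   # voltage levels the controller must distinguish
    print(f"{name}: {bits} bit(s)/cell, {states} voltage states, "
          f"~{pe_cycles:,} P/E cycles")
```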

Last year, Samsung Semiconductor started selling TLC flash for enterprise use, but specified that it is best suited to read-heavy workloads such as streaming media. Will IBM go down the TLC route? Whyte is not yet convinced.

“The problem there is that as you try to cram more voltage ranges into a cell you get less endurance. So to meet the type of three to five-year warranties being offered, you’d need to monitor the write workloads, and that’s just going to create problems with customers," he said.

"So, at the moment, two-level cell [such as MLC] is where we’re at. But if the manufacturers can guarantee write performance, we’ll move that way,” he said.

The problem with NAND flash

Despite hitching its wagon to flash, the company is already looking beyond NAND flash. For Whyte, NAND flash has inbuilt issues that make it less than fully suitable for enterprise use.

Key among these are the effects of the economics of flash chip production. The vast bulk of flash production is for the mobile device market, which has two chief characteristics. The first is the usage profile in such devices, which is effectively write once, read many (WORM). That means much less work has to be done to orchestrate the writing process than is the case in storage arrays.

“It’s written to infrequently and mostly read from then onwards. It’s not a problem for the consumer product market, but it’s the opposite of what you want in an enterprise-class drive,” said Whyte.

The second is the trend towards ever smaller flash chip geometries for mobile devices, which means fabrication plants are geared towards producing these. That is not good for the enterprise storage market, said Whyte, because it means array software must work even harder to overcome the inherent problems of switching voltages and the background work needed to ensure clean operation at such small geometries.

“The chip fabrication plants follow the consumer market and move towards smaller and smaller chip form factors. We’d actually be better with, for example, 45nm chips for enterprise use than the sub-20nm ones the industry is moving to. To use them for enterprise-class workloads you have to do more work in software to ensure durability and a 300GB flash drive actually has to have around 600GB of capacity to compensate for this,” he said.
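
The 300GB/600GB example corresponds to roughly 100% over-provisioning, as the quick calculation below shows; the definition used here (spare NAND as a fraction of advertised capacity) is the conventional one, applied to the figures in the quote.

```python
# Over-provisioning implied by the 300GB-usable / 600GB-raw figure above.
def overprovisioning_ratio(raw_gb, usable_gb):
    """Spare NAND as a fraction of the advertised (usable) capacity."""
    return (raw_gb - usable_gb) / usable_gb

print(f"{overprovisioning_ratio(raw_gb=600, usable_gb=300):.0%}")  # -> 100%
```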

“NAND flash is sub-optimal. We have to work around its problems, so I don’t see flash having the lifespan of the hard disk drive (HDD), which has been with us around 50 years so far. It will be replaced by, for example, phase-change memory, memristor and the like.”

New solid state technologies years away

While pointing out the shortcomings of flash, Whyte looks towards the technologies that will supersede it, but they are some way off.

There are IBM’s phase-change memory and Racetrack memory, as well as the memristor, which is being worked on by Hewlett-Packard, among others.

There is also atomic-scale memory, which promises a byte of storage in just 96 atoms, compared with the half a billion used by a contemporary hard drive, but which has to operate at -268°C.
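
Taking those figures at face value, the density gap works out as follows; both numbers are simply the ones quoted above.

```python
# Density gap implied by the figures quoted above.
atoms_per_byte_atomic = 96            # i.e. 12 atoms per bit x 8 bits
atoms_per_byte_hdd = 500_000_000      # "half a billion", per the article
ratio = atoms_per_byte_hdd / atoms_per_byte_atomic
print(f"Atomic-scale memory uses about {ratio:,.0f} times fewer atoms per byte")
```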

So, these technologies are a long way off – five to ten years for the likes of phase change, memristor and racetrack memory; more than a decade for atomic-scale storage.

For now, we are stuck with flash and its shortcomings, and as the big storage suppliers bring flash further into the fold, those hurdles will be overcome to some extent – you can bet on it. IBM has, with a $1bn stake.



This was first published in April 2013

 
