“Back in the good old days they tried to figure out how to pool disk to make bigger disks and do some disaster recovery,” says Ideas International Analyst Chris Ober. “They came up with a concept of various RAID levels, one through five.”
Some of those levels, he says, quickly proved impractical.
“RAID 2 got dropped straight away,” Ober says. “Then it turned out that RAID 3 and RAID 4 were quite difficult to implement. You had to match the speeds of all the drives in the array.” Ober says storage administrators therefore settled on RAID levels that offered the protection they needed, but also imposed fewer dull chores and costs.
“RAID 0, 1 and 5 took off,” he says.
While those RAID levels remain popular, Ober feels storage professionals’ ardour for them may be waning.
“All of those original levels of RAID gave you one drive protection,” he says. “But now that disks are getting larger and larger, the time needed to rebuild a drive has increased. In the world of one terabyte drives, you are looking at tens of hours to rebuild a disk.”
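The rebuild-time arithmetic behind Ober's point can be sketched with a back-of-the-envelope calculation. The rebuild rates below are illustrative assumptions, not measurements; a rebuild competing with production I/O often runs well below a drive's raw sequential speed:

```python
def rebuild_hours(capacity_tb: float, rate_mb_s: float) -> float:
    """Estimate time to rebuild one failed drive: the drive's whole
    capacity must be re-read from the survivors and re-written."""
    bytes_total = capacity_tb * 1e12
    return bytes_total / (rate_mb_s * 1e6) / 3600

# Hypothetical throttled rebuild rates:
print(round(rebuild_hours(1.0, 100), 1))  # idle array, ~100 MB/s: ~2.8 hours
print(round(rebuild_hours(1.0, 10), 1))   # busy array, ~10 MB/s: ~27.8 hours
```

At a heavily throttled rate the one-terabyte rebuild lands in the "tens of hours" range Ober describes, and the array is exposed to a second failure for that whole window.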
“That is where RAID 6 comes in,” he says, as this RAID level allows a system to continue operating even if two disks fail at once, making long rebuild times less critical.
Neil Cameron, a Field Service Engineer at RAID card vendor Adaptec, also sees RAID 6 on the rise.
“RAID 6 is in demand because of the confirmed unreliability of SATA drives,” he says. “It was too expensive to implement with SAS or SCSI drives.” The low cost of SATA drives, however, means RAID 6’s ability to survive a double failure makes it more viable than it was in the past.
John Martin, a Consulting Systems Engineer at NetApp, agrees that the fragility of SATA drives is boosting RAID 6.
“We found SATA drives failed early when doing lots of random reads and writes, therefore you have to be using double parity RAID,” he says. RAID 6 provides that double parity, which makes it viable for applications in which data is accessed infrequently, such as virtual tape libraries and arrays used as medium-term archives.
Another uncommon RAID level that is increasingly popular is RAID 4, which is the mainstay of NetApp’s products.
“The reason most people do not use RAID 4 is that it needs a dedicated parity disk,” says the company’s Martin. “If you use RAID 4, the parity disk gets hot and degrades the performance of the RAIDset. But our file system can handle it – the parity disk is the least busy disk in the system.”
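The hotspot Martin describes can be illustrated with a toy simulation. The disk counts and write mix below are assumptions for illustration only; the point is that with a dedicated parity disk, every small write touches that one disk:

```python
import random

def write_counts(n_data_disks: int, n_writes: int, seed: int = 0):
    """Simulate small random writes on a RAID 4 set: each write
    updates one data disk plus the single dedicated parity disk."""
    random.seed(seed)
    counts = [0] * (n_data_disks + 1)            # last slot = parity disk
    for _ in range(n_writes):
        counts[random.randrange(n_data_disks)] += 1  # the data block
        counts[-1] += 1                              # the parity update
    return counts

counts = write_counts(7, 10_000)
# The parity disk absorbs all 10,000 writes; each data disk
# sees only about 1/7 of them.
```

RAID 5 sidesteps this by rotating parity across all drives; NetApp's approach, as Martin describes it, is instead to batch writes in the file system so the dedicated parity disk never becomes the bottleneck.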
Other uncommon RAID levels, or RAID-like ideas, are also beginning to emerge.
“IBM XIV is implementing proprietary mirroring,” says Ideas International’s Ober. “Instead of mirroring a small set of disks it mirrors across every other disk in the other half of the RAID box. They will put 1% of drive one on drive three and drive four and the other drives in an array. Instead of having to build one new drive, you can copy 50 drives at 1% of capacity each. IBM is quoting rebuild times of less than an hour.”
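The rebuild speed-up Ober quotes follows from parallelism. A rough sketch, extending the earlier single-drive arithmetic with an assumed helper count and an illustrative per-drive rate (neither figure is from IBM):

```python
def rebuild_time_hours(capacity_tb: float, rate_mb_s: float,
                       helpers: int = 1) -> float:
    """Hours to reconstruct one drive's worth of data when `helpers`
    surviving drives each rebuild an equal slice in parallel."""
    bytes_total = capacity_tb * 1e12
    return bytes_total / helpers / (rate_mb_s * 1e6) / 3600

# Classic RAID: a single spare does all the rebuild work.
classic = rebuild_time_hours(1.0, 10)          # ~27.8 hours
# Distributed mirroring: 50 drives each copy a small slice in parallel.
distributed = rebuild_time_hours(1.0, 10, 50)  # ~0.56 hours
```

Even at the same throttled per-drive rate, spreading the work across dozens of drives brings the rebuild under an hour, consistent with the figure Ober cites for XIV.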
Such schemes, he adds, are needed as storage scales.
“When RAID was invented it was for systems with eight or 16 drives. Now we need to look at how RAID works when there are hundreds or thousands of drives in a system.”
Ober is uncertain if current RAID levels and ideas are applicable to the reality of these systems, or the current trend for clustered storage systems. He therefore feels that change is coming to RAID.
“It is really going to come down to the type of users,” he says. “Most will want adequate performance but there is always the high end which wants solutions for different types of data.”
Those different solutions, he believes, will see users mix solid state disks with fibre channel, SAS and SATA drives, then assign data to each technology to meet their business needs. RAID, he argues, cannot at present offer much help to organisations undertaking that process.
“Where RAID needs to go is to get the balance right between the technologies available and users’ needs.”