With the notable exception of RAID 0 -- which uses striping to improve performance but offers no redundancy -- RAID configurations either add parity data on one or more drives, mirror data to a second drive, or both.
RAID 4 implements parity with a dedicated drive that houses all parity data, while RAID 5 distributes parity across all drives in the RAID group. RAID 4 was the first of the two to gain market traction and is the simpler to implement.
In RAID 4 technology, a parity bit -- a zero or one -- is written to the dedicated parity drive at the end of every write operation so that the stripe conforms to the chosen parity (even or odd). Over time, vendors noticed that parity drives failed far more often than data drives. The reason: every write to any data drive also required a write to the single parity drive, so the parity drive absorbed far more wear and tear than any individual data drive.
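The parity calculation itself is a byte-wise XOR across the data blocks in a stripe, which is what makes single-drive recovery possible. Below is a minimal sketch of that idea; the function name and the sample stripe are illustrative, not any vendor's actual implementation.

```python
from functools import reduce

def parity_block(blocks: list[bytes]) -> bytes:
    """XOR equal-sized blocks byte-wise, giving even parity at each bit position."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# A hypothetical three-block stripe; the controller writes p to the parity drive.
stripe = [b"\x0f\xf0", b"\x33\x33", b"\x55\xaa"]
p = parity_block(stripe)

# If one data drive is lost, XOR-ing the parity with the surviving blocks
# reconstructs the missing block.
recovered = parity_block([p, stripe[1], stripe[2]])
assert recovered == stripe[0]
```

The same XOR math underlies both RAID 4 and RAID 5; the two levels differ only in where the parity block is stored.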
To reduce the wear and tear that parity operations placed on a single drive, vendors introduced RAID 5 -- or distributed parity. RAID 5 still devotes the equivalent of a single drive's capacity to parity, but it spreads the parity data among all drives in the RAID set. So if parity for the first write is placed on the first drive, parity for the second write goes on the second drive, the third on the third, and so on until the rotation wraps back around to the first drive.
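The rotation described above can be sketched in a few lines. This is a simplified model of the article's forward-rotating placement (real arrays use several standardized rotation layouts); the function name and drive count are assumptions for illustration.

```python
def parity_drive_for_stripe(stripe: int, n_drives: int) -> int:
    """Return the index of the drive that holds parity for a given stripe,
    rotating forward one drive per stripe as the article describes."""
    return stripe % n_drives

# With four drives, stripes 0..5 place parity on drives 0, 1, 2, 3,
# then wrap back around to drives 0 and 1.
layout = [parity_drive_for_stripe(s, 4) for s in range(6)]
print(layout)  # [0, 1, 2, 3, 0, 1]
```

Because every drive takes its turn holding parity, the extra write per stripe is spread evenly across the set instead of landing on one dedicated drive.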
Vendors found that RAID 5 produced far fewer parity-related drive failures and began to phase out RAID 4 in their arrays.
Both technologies cost the same to implement if all other elements in the RAID set are equal. If you implement RAID 4, however, cost may be governed more by the limited number of array vendors that still offer it than by the number of parity drives required.
This was first published in December 2010