Tune Raid performance to get the most from your HDDs

Tune Raid performance at the storage controller to match application I/O demands to optimal hard disk drive (HDD) performance

Tuning Raid performance has the air of a black art to storage administrators, with the perception that it can do more harm than good. Most consider the job done once the Raid level is selected. But it is possible to fine-tune some Raid parameters to anticipate application storage demands and improve performance.

Array controllers are factory-configured with the best all-round settings for general-use environments, and manual Raid performance tuning can sometimes have mixed results. But where the environment is tightly defined and I/O behaviour well understood, there are definite performance benefits to be gained.

So how can storage administrators carry out Raid performance tuning to improve read and write performance on enterprise storage controllers that offer a full range of configurable options?

Raid performance tuning fundamentals

In Raid tuning, infrastructure knowledge is power; you need to understand how applications utilise disks to be able to improve Raid performance.

For example, general file-serving accesses disks in a completely different way to virtualisation applications, so approaches to Raid performance tuning for each are quite specific, although governed by the same general principles. Namely, the key to selecting the correct Raid level and setting stripe size for a volume is to determine the average I/O block size generated by host applications.

Then you need to determine whether I/O demand is read- or write-biased, and how bursty and random that I/O will be. From these results you can set the Raid level and stripe size of the array accordingly. Getting Raid level selection wrong means the house is built on sand and no amount of tuning will help.
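As a rough sketch of that profiling step – the counter figures below are hypothetical, though on Linux comparable numbers can be read from /proc/diskstats or iostat over a sampling interval – the averages fall out of simple arithmetic:

# Rough sketch of profiling a workload's I/O pattern from iostat-style
# counters. The values below are hypothetical, not a real workload.

SECTOR_SIZE = 512  # bytes; /proc/diskstats counts in 512-byte sectors

reads_completed = 120_000
sectors_read = 15_360_000
writes_completed = 48_000
sectors_written = 3_072_000

avg_read_kb = sectors_read * SECTOR_SIZE / reads_completed / 1024
avg_write_kb = sectors_written * SECTOR_SIZE / writes_completed / 1024
read_share = reads_completed / (reads_completed + writes_completed)

print(f"average read size:  {avg_read_kb:.0f}k")   # 64k with these figures
print(f"average write size: {avg_write_kb:.0f}k")  # 32k with these figures
print(f"read share of I/O:  {read_share:.0%}")     # roughly 71% reads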

Cache and Raid performance

One of the key limitations of spinning disk hardware is its reliance on physical read/write heads mounted on an actuator arm, and the time this arm takes to move across the surface of a disk’s platter to the required data blocks.

The time penalty this incurs is known as seek time, and the less movement needed by the arm the better. The way to reduce it is to write cached data in sequential blocks across a whole Raid stripe – that is, across several disks – wherever possible, and to the most efficient parts of those disks.

To effect that distribution of data in a Raid array you need to set an optimal cache write block size allocation in relation to application I/O requirements and Raid stripe size.

For example, if the Raid 5 data stripe across four drives on an array is set at 64k and the cache write block size on the controller is set to a much smaller value, such as 4k, then the cache performs 16 writes spread across multiple Raid stripes, creating non-sequentially written data blocks. If, however, the Raid 5 stripe is 64k and a cache write block allocation of 16k is set, this generates four writes and achieves a complete stripe and parity write in one action.
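The arithmetic is easy to sanity-check. A minimal sketch, treating the 64k figure as the full data stripe across the four drives as above:

# Illustrative arithmetic only, using the figures from the example above.

stripe_data_kb = 64  # data written across the four drives in one stripe

for cache_block_kb in (4, 16):
    writes = stripe_data_kb // cache_block_kb
    print(f"{cache_block_kb}k cache write blocks -> "
          f"{writes} writes to fill one 64k stripe")

# With 16k blocks the controller lays down a complete stripe, and its
# parity, in one action; with 4k blocks the same data arrives as 16
# small writes that end up scattered rather than written sequentially.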

Sequentially written data blocks are more efficiently read than those randomly placed. This sounds straightforward but it can be tricky to achieve because higher performance through this route depends on a sound knowledge of the I/O demands of the host application and the most efficient cache data block and Raid segment size in a specific scenario.

In mixed application environments, only the average values across the different applications can be used to generate these figures, which is especially limiting in highly random I/O scenarios such as hosting virtual machines. But in more predictable application environments, such as database volumes, performance gains can be more easily achieved.

Short-stroking

Another feature of disk hardware is that, because platters are circular and spin at a constant rate, the outer tracks pass under the heads faster than the inner ones, so more data can be read or written per revolution at the outer edge. Disks also suffer a second physical time penalty, rotational latency – the wait for the required sector to rotate under the head – which, combined with seek time, accounts for most of the time taken by a disk operation.

By a practice called “short stroking” it is possible to limit data storage to the outer third of a physical disk platter to exploit its higher data transfer rate and at the same time minimise head movement.

The general rule is that, if data is kept below a third of array capacity, it will be written to the fastest outer third of each disk platter equally across the array. If the data volume exceeds this value, performance will fall off in line with the slowest portion of platter in use.
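As a hedged illustration – the drive count and capacities below are made up – the usable working set of a short-stroked array falls straight out of the numbers:

# Illustrative only: the working set an array can hold if data is kept
# to the outer third of each platter. Drive count and sizes are made up.

drives = 8
drive_capacity_tb = 2.0
parity_drives = 1                       # Raid 5: one drive's worth of parity

data_capacity_tb = (drives - parity_drives) * drive_capacity_tb
short_stroked_tb = data_capacity_tb / 3  # keep data below a third of capacity

print(f"Raid 5 data capacity:      {data_capacity_tb:.0f} TB")
print(f"short-stroked working set: {short_stroked_tb:.1f} TB")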

Managing spare capacity efficiently and not overloading arrays with data can therefore be seen as a simple form of tuning.

LUNs, parity checking and Raid performance

The way LUNs are defined can also be manually configured to boost performance. A LUN element is the size of data block written to one disk in a Raid stripe. In general, having a lower element value for the LUN makes the distribution of data on the volume more efficient, but setting it at too low a value can result in forced writes to multiple Raid stripes.

If the I/O operation size consistently exceeds the LUN element size, the result is continual forced writes across multiple disks, so playing with this value can be risky and can hurt performance. But in situations where I/O is more predictable, it is possible to optimise this value and achieve higher efficiency.
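One way to picture the trade-off (the sizes below are purely illustrative) is to count how many stripe elements, and therefore disks, a single host I/O touches:

import math

# Illustrative only: how many stripe elements (and so disks) one host
# I/O touches for a given LUN element size, ignoring alignment offsets.

io_size_kb = 64

for element_kb in (8, 16, 64, 128):
    elements = math.ceil(io_size_kb / element_kb)
    print(f"{element_kb}k element: a {io_size_kb}k I/O spans {elements} element(s)")

# When the typical I/O consistently exceeds the element size, every write
# is forced across several disks and potentially several stripes; when the
# element is far larger than the I/O, data distribution across the volume
# suffers instead.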

Parity checking also imposes a load on Raid environments as it consumes the same array resources as I/O operations. And, if a disk fails, considerable system resources are allocated to rebuilding the array onto spare disks and performance degrades significantly during that process.

These settings are usually defined by the rebuild and verify priority of a particular LUN and making adjustments here can be beneficial. 

For example, a high I/O application environment with large I/O bursts might be best set to a low priority for array rebuilds to maintain acceptable performance in case of drive failure, but a higher level of parity checking to maintain data integrity across the array.

Large capacity drives

With the advent of large-capacity drives, the physical limitations of Raid are becoming more apparent. While capacities have greatly increased, the data throughput rates of physical media have remained largely static, and this has several effects in Raid arrays.

Failed drive rebuild times increase with capacity, so protection against a further failure during a rebuild becomes more important. Raid 6, with its double parity, offers the greatest protection here, and that probably makes it the optimal Raid level to use with 2TB drives in an array, but its extra parity writes make it inefficient for highly random I/O environments.
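As a back-of-envelope illustration – the 120 MB/s rebuild rate is an assumption, not a measured figure – the minimum rebuild time scales directly with drive capacity:

# Back-of-envelope only: the best-case time to rewrite one replacement
# drive, assuming a sustained 120 MB/s rebuild rate (an assumed figure)
# and no competing host I/O. Real rebuilds under load take far longer.

def rebuild_hours(capacity_tb, rebuild_mb_per_s=120):
    return capacity_tb * 1_000_000 / rebuild_mb_per_s / 3600

for capacity_tb in (0.5, 1.0, 2.0):
    print(f"{capacity_tb} TB drive: ~{rebuild_hours(capacity_tb):.1f} hours minimum")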

Martin Taylor is support team leader at Capita Financial Systems.
