More SAN capacity is not always better

Focusing on the storage capacity an environment needs, rather than on how you might go about presenting that capacity or how the application will use it, is a recipe for disaster.

The exponential rise in storage array capabilities has brought with it an increased potential for misconfiguration, which can not only drag performance down but, in some cases, bring systems to a grinding halt. Just consider these two facts:

  • Just five years ago, the most common drive size for enterprise disk arrays was 36 GB.
  • Only 10 years ago, the most common form of disk array was the SCSI RAID JBOD. Fibre Channel networks were but a twinkle in the industry's eye.

Imagine buying an Aston Martin Vanquish, the engine growling under its polished British Racing Green bonnet (surely the only colour to get an Aston in!), waiting to unleash its 460 BHP fury on the tarmac. Now imagine fitting a roof rack on it, strapping several concrete blocks to the rack, filling all discernible space inside the car with heavy steel weights, then driving away with the handbrake on.

This is how many SAN systems have been treated: the provision of raw storage capacity has overridden any thought for the most efficient way to use it.

Some time ago I performed an environmental audit. The business had decided to take advantage of larger drive sizes and had created a RAID 5 group of fifteen 500 GB SATA drives, presenting a single volume of around 7 TB to their Windows file share environment.

Masses of capacity – huge pitfalls. How long might it take the OS to run chkdsk against a single 7 TB volume? How long might it take to restore 7 TB to a single set of spindles? Again, focusing on the storage capacity an environment needs, and not on how you might present it or how the application will use it, is a recipe for disaster.
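To put rough numbers on that risk, here is a back-of-the-envelope sketch. The drive count and size come from the audit above; the sustained restore rate of 100 MB/s is purely an illustrative assumption, not a measured figure for any particular array.

```python
# Back-of-the-envelope figures for the 7 TB RAID 5 volume described above.
# The 100 MB/s restore rate is an illustrative assumption.

def raid5_usable_gb(drive_count: int, drive_size_gb: int) -> int:
    """RAID 5 consumes one drive's worth of capacity for parity,
    so usable capacity is (n - 1) * drive size."""
    return (drive_count - 1) * drive_size_gb

usable_gb = raid5_usable_gb(15, 500)       # fifteen 500 GB SATA drives
print(f"Usable capacity: {usable_gb} GB")  # 7000 GB, roughly 7 TB

# Assume a sustained restore rate of 100 MB/s to this single set of
# SATA spindles, which also bear the RAID 5 write penalty.
restore_rate_mb_s = 100
restore_hours = usable_gb * 1000 / restore_rate_mb_s / 3600
print(f"Full restore: about {restore_hours:.0f} hours")  # about 19 hours
```

Even under that generous assumption, the business is looking at the better part of a day of downtime for a full restore of one volume.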

I had another client who was suffering severe storage performance problems. At certain times during the working day, some of their applications, served by a storage array with massive theoretical I/O capability, went from slow to a near snail's pace.

A closer inspection of the array provided some clues as to what may have happened. Whilst the RAID groups looked tidy (a good mix of the right RAID type for each application, on drive types suited to the performance requirements), the number of different I/O-heavy applications sharing the same physical spindles was horrendous.

Four heavy-duty SQL databases were sharing the same transaction log spindles with each other, in addition to one of the Exchange cluster's log files. A large virtual environment was vying for control of another RAID 5 group, alongside many smaller individual application servers, each with its own little chunk of capacity carved from the same hard-working spindles. A case of "too many cooks"?
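The contention above can be sketched as a simple I/O budget. Every number below is an illustrative assumption (per-drive IOPS, group size, and workload peaks were not published in the audit); in practice the figures would come from the array's own performance statistics.

```python
# Rough spindle-sharing budget. All figures are illustrative
# assumptions, not measurements from the client's array.

DRIVE_IOPS = 150          # a typical figure for an enterprise spinning disk
SPINDLES_IN_GROUP = 8     # assumed size of the shared RAID group

# Hypothetical peak demand from the workloads sharing those spindles
workloads = {
    "sql_tx_logs_x4": 4 * 400,   # four busy SQL transaction logs
    "exchange_logs":  300,       # Exchange cluster log files
    "misc_app_luns":  500,       # many small application servers
}

capacity = DRIVE_IOPS * SPINDLES_IN_GROUP
demand = sum(workloads.values())
print(f"Spindle capacity: {capacity} IOPS, peak demand: {demand} IOPS")
print(f"Oversubscribed by {demand / capacity:.1f}x")  # 2.0x in this sketch
```

The point is not the exact numbers but the shape of the problem: each workload looked reasonable in isolation, yet their combined peaks exceeded what the shared spindles could physically deliver.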

Bottom line: A storage array with the potential to perform like an Aston Martin shouldn't be driven like a bus.

About the author: Allaster Finke is a senior consultant for GlassHouse Technologies (UK), a global provider of IT infrastructure services. He has more than seven years' experience in the design and delivery of IT solutions, with a focus on SAN, storage and backup technologies.
