Storage sizing mistakes you should avoid: Part I

Storage sizing mistakes can adversely impact IT setups. This tip explains how to weigh capacity and performance requirements during storage sizing.

Storage sizing plays an important role in meeting the overall business objectives of an organization. Improper storage sizing may lead to unsatisfactory application response times, causing dissatisfaction among end users. This can impact revenue growth in the long run.

It is vital that storage is right-sized at the outset in order to avoid operational pain later on. Steer clear of the following storage sizing mistakes to ensure steady revenue growth.

1)      Don’t neglect management tools

Management software is crucial for monitoring storage infrastructure and reporting the usage patterns in your environment. If you do not have a storage management tool, you cannot monitor storage; if you cannot monitor it, you cannot measure it; and if you cannot measure storage, you cannot arrive at the ideal storage sizing for your application’s requirement.

Monitoring tools are integral to the overall business requirement, as they help measure the capacity and performance requirement of the business and highlight challenges that might be faced in your current environment.
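To show how monitoring output feeds into sizing, here is a minimal sketch that condenses a series of monitored IOPS samples into the figures a sizing exercise typically needs. The sample values and the choice of the 95th percentile are illustrative assumptions, not figures from any particular tool.

```python
# Hypothetical sketch: summarize monitored IOPS samples into sizing inputs.

def sizing_inputs(iops_samples):
    """Reduce raw IOPS samples to average, 95th percentile and peak."""
    ordered = sorted(iops_samples)
    peak = ordered[-1]
    # Sizing to the 95th percentile avoids over-provisioning for a one-off spike
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    avg = sum(iops_samples) / len(iops_samples)
    return {"average": avg, "p95": p95, "peak": peak}

# Illustrative hourly samples from a monitoring tool
samples = [1200, 1500, 1100, 4200, 1800, 1600, 1400, 2100, 1900, 1700]
print(sizing_inputs(samples))  # {'average': 1850.0, 'p95': 2100, 'peak': 4200}
```

Whether you size to the percentile or the absolute peak is a business decision; the point is that without monitored data there is nothing defensible to size against.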

2)      Avoid upgrading mistakes

While carrying out storage sizing for capacity or performance, you have to plan for non-disruptive upgrades to your infrastructure to ensure there is no impact on business applications. Storage infrastructure should be organized so that capacity can be added and upgrades performed efficiently to meet any additional performance requirements.

To emphasize the importance of non-disruptive upgrades, consider the example of ATMs that are in continuous use. Nobody likes it when the ATMs are down, so the architecture should be such that non-disruptive data movement, migration and storage upgrade are easily achieved.

3)      Don’t overlook application response time

It’s important to understand disk performance with regard to input/output operations per second (IOPS) and to measure bandwidth during storage sizing. For typical OLTP applications, IOPS play a vital role. The bandwidth required for smooth application performance must be established, and the response time of applications cannot be ignored. In banking applications, IOPS may be adequate, but if response time is a slow 15 milliseconds, it may have a cascading effect, slowing down activities down the line.
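The relationship between IOPS, I/O size and bandwidth described above can be sketched as a simple calculation. The workload figures (5,000 IOPS, 8 KB I/Os) and the 10 ms response-time target are hypothetical values chosen for illustration.

```python
# Hypothetical sizing check: derive bandwidth from IOPS and I/O size,
# and flag a response time that misses a target.

def required_bandwidth_mbps(iops, io_size_kb):
    """Bandwidth (MB/s) needed to sustain a given IOPS at a given I/O size."""
    return iops * io_size_kb / 1024.0

def response_time_ok(measured_ms, target_ms=10):
    """True if measured response time meets an assumed SLA target."""
    return measured_ms <= target_ms

# An OLTP workload doing 5,000 IOPS with 8 KB I/Os needs ~39 MB/s
print(required_bandwidth_mbps(5000, 8))
# A 15 ms response time fails a 10 ms target, even if IOPS look healthy
print(response_time_ok(15))
```

This is why the article stresses checking response time separately: the bandwidth figure alone says nothing about how long each individual I/O takes.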

4)      Avoid mistakes when determining the right protocol for the host interface

For SANs, iSCSI is the most prevalent connectivity option for low-I/O workloads in the SMB market. While independent evaluations predict that growth in iSCSI SANs will continue at the lower end, the Fibre Channel (FC) protocol holds sway for high-performance applications. However, disk technology is advancing and gradually moving away from FC disks to SAS disk drive technology, which provides higher performance and flexibility in configurations.

Solid state drives (SSDs) currently have a significantly higher capital expenditure cost than conventional disk technology, and this is unlikely to fall dramatically. However, SSDs benefit organizations that require high read performance of specific data sets in business-critical applications. They require less power and cooling, and are quiet during operation.

During storage sizing exercises, remember that flash drives are beneficial for the following requirements:

  • High transaction databases for indexes, roll-back segments and frequently accessed tables.
  • Frequently accessed Web content.
  • Applications with high random read requirements.
  • Business-critical applications impacted by low cache read hit rates.

5)      Don’t place capacity over performance

During assessment, solution providers often base storage sizing on the capacity required. Today, with higher-density disk drive technology, capacity is rarely the constraint when designing a storage solution; application performance should be the primary parameter instead. Understand the application's input/output profile during storage sizing, and size the disk infrastructure to meet the performance requirement.
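The point can be made concrete by sizing the same array both ways and taking the larger disk count. The per-disk figures below (2 TB capacity, 180 IOPS per spindle) are hypothetical and vary by drive model; the structure of the calculation is what matters.

```python
# A minimal sketch: size by capacity AND by performance, then take the max.
import math

def disks_needed(capacity_tb, iops_required,
                 disk_capacity_tb=2.0, disk_iops=180):
    """Disk count satisfying both the capacity and the IOPS requirement."""
    by_capacity = math.ceil(capacity_tb / disk_capacity_tb)
    by_performance = math.ceil(iops_required / disk_iops)
    return max(by_capacity, by_performance)

# 10 TB of data needs only 5 disks for capacity, but 9,000 IOPS needs 50:
# performance, not capacity, sets the spindle count here
print(disks_needed(10, 9000))  # 50
```

Sizing on capacity alone would have delivered one-tenth of the spindles this workload needs.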

6)      Don’t forget about the overall architecture before sizing

Solution providers may take performance levels into consideration and provide enough spindles to meet the requirement, but the architecture in which these disk drives are hosted may not be robust enough to deliver that performance to the application.

The infrastructure in which the storage devices are placed is yet another parameter you must consider during storage sizing. An efficient storage architecture is one that delivers the required overall IOPS to the application.

7)      Don’t disregard the importance of tiering of data

Tiering of the data based on the applications is critical during the storage sizing exercise. You must understand the different types of applications (Tier 1, 2, 3) and their capacity requirements.

In general, all operational data is considered Tier 1, all application data Tier 2, and all reference data Tier 3. Organizations should develop a catalog of tiers by carrying out an assessment against pre-defined characteristics, and subsequently use that catalog to allocate storage to applications. You should also identify the performance SLAs that need to be delivered and size storage accordingly.
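A tier catalog of the kind described above can be as simple as a lookup table. The media types and latency targets below are hypothetical entries invented for illustration; a real catalog would come out of the assessment exercise.

```python
# A hypothetical tier catalog following the Tier 1/2/3 split described above.

TIER_CATALOG = {
    1: {"data": "operational", "media": "SSD/FC", "latency_ms": 5},
    2: {"data": "application", "media": "SAS",    "latency_ms": 15},
    3: {"data": "reference",   "media": "SATA",   "latency_ms": 30},
}

def tier_for(data_type):
    """Return the tier whose catalog entry matches the given data type."""
    for tier, spec in TIER_CATALOG.items():
        if spec["data"] == data_type:
            return tier
    raise ValueError(f"no tier defined for {data_type!r}")

print(tier_for("operational"))  # 1
print(TIER_CATALOG[tier_for("reference")]["media"])  # SATA
```

Once applications are classified this way, the capacity and SLA requirements of each tier can be sized independently instead of provisioning everything as if it were Tier 1.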


About the author:  Srinivas Rao is the director, pre-sales and solutions, at Hitachi Data Systems, providing pre-sales support for file and content services solutions across India. With 14 years of technical experience, he holds an electronics degree in engineering from the University of Mysore.

(As told to Mitchelle R Jansen.)
