
Everyone seems to want a tiered storage architecture, but choosing the right approach is essential to business efficiency, says Geoff Hough

Today the rush is on to implement tiered storage architectures, but do they deliver a good return? A well-tiered environment makes it easy to optimise storage service levels that were previously over- or under-provisioned (for example, a mission-critical application may benefit from being migrated to a more available platform, or incremental savings can be gained by transferring data to lower-cost storage).

This upside, however, must be balanced against the costs of maintaining a well-tiered environment. These stem from the time to manage data movement, and the expense of owning and managing incremental migration tools and storage platforms.

For example, proper planning and testing of migrations reduces risk, but consumes administrative time, usually weeks; target platform provisioning and verification also require time to ensure suitable performance, interoperability and integration with the application and the surrounding infrastructure. Professional services may also be needed before, during or after migrations if expertise in the relevant procedures or systems is not available in house.

So, clearly defining "success" is important. Can you clearly identify the gains that will be achieved, and can you identify and assign costs? After selecting the target data for potential optimisation, you must consider the best way to move and track that data; the approach chosen will significantly influence cost.
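As a rough illustration of that cost accounting, the sketch below weighs the annual saving from moving a dataset to a cheaper tier against the one-off cost of the migration itself. Every figure is a hypothetical placeholder, not a benchmark.

```python
# Hypothetical break-even arithmetic for a single tiering decision.
# All figures below are illustrative placeholders, not real prices.

data_tb = 20                   # size of the candidate dataset, in TB
cost_per_tb_current = 12_000   # annual cost on the current tier, per TB
cost_per_tb_target = 4_000     # annual cost on the lower tier, per TB

admin_days = 10                # planning, testing and migration effort
day_rate = 600                 # loaded cost of one administrator day
tooling_cost = 15_000          # incremental migration tools and services

annual_saving = data_tb * (cost_per_tb_current - cost_per_tb_target)
migration_cost = admin_days * day_rate + tooling_cost

print(f"Annual saving:  {annual_saving:,}")
print(f"Migration cost: {migration_cost:,}")
print(f"Break-even:     {migration_cost / annual_saving:.1f} years")
```

If the break-even period stretches beyond the expected life of the data on its new tier, the migration is hard to justify however attractive the target platform looks.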

Although intolerance for application downtime is widespread, downtime is tolerable for many applications. It may even be preferred because known migration tools and procedures (that typically require downtime) can reduce risk and cost. These methods include backing up and restoring data to a target platform or using the data movement and compression facilities found in host operating systems, databases or freeware.
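A minimal sketch of a downtime-based move of that kind is shown below: copy the data, then verify it by checksum before the application is brought back up. It assumes the application is quiesced and that both tiers appear as ordinary mounted file paths; the paths shown are hypothetical.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file in 1 MiB chunks so large files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def migrate(source: Path, target: Path) -> None:
    """Copy every file from source to target, then verify each copy."""
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = target / src.relative_to(source)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)                 # copies data and timestamps
        if sha256(src) != sha256(dst):
            raise RuntimeError(f"Verification failed for {src}")

# Example (hypothetical mount points):
# migrate(Path("/mnt/tier1/appdata"), Path("/mnt/tier3/appdata"))
```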

When traditional methods result in unacceptable downtime, other solutions deserve attention. Host-based, value-added volume management approaches enable migrations to be made online and flexibly between supported heterogeneous storage platforms. SAN or fabric-based approaches are perhaps the most complex, because they introduce a platform that sits logically between host servers and arrays. These appliances or "smart switches" promise the same advantages as host-based approaches but transfer migration workloads from the hosts to themselves while consolidating some migration management. In both cases the migration facilities become permanently managed elements of the infrastructure if the online migration capabilities are to be maintained.
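To make the underlying pattern concrete, the toy model below sketches the mirror-and-cut-over logic that both host-based and fabric-based online migrations broadly rely on: new writes go to both volumes while a background copy brings the target into step, and reads switch over only once the target is in sync. It is a conceptual illustration only, not a description of any particular product.

```python
class OnlineMigration:
    """Toy model of mirror-then-cut-over, with volumes modelled as
    block-number -> data dictionaries."""

    def __init__(self, source: dict, target: dict):
        self.source, self.target = source, target
        self.active = source                      # volume currently serving reads

    def write(self, block: int, data: bytes) -> None:
        self.source[block] = data                 # the application keeps writing...
        self.target[block] = data                 # ...and the mirror stays in step

    def background_copy(self) -> None:
        for block, data in self.source.items():
            self.target.setdefault(block, data)   # copy blocks not yet mirrored

    def cut_over(self) -> None:
        assert self.target.keys() >= self.source.keys(), "target not yet in sync"
        self.active = self.target                 # reads now come from the new tier

    def read(self, block: int) -> bytes:
        return self.active[block]
```

In practice the mirroring lives in the volume manager or appliance itself, which is why those components then have to be owned and managed for as long as the online capability is wanted.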

Some suppliers offer array-based software to migrate data online within an array. Host and network resources are unaffected with this approach, although the performance characteristics of a given array should be understood to anticipate any disruption to service levels during migrations.

Although this approach offers enhanced simplicity and risk avoidance, its value depends on the degree to which a user has consolidated or can consolidate multiple service levels on a given array.
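One common way of containing the service-level disruption mentioned above, wherever the migration engine sits, is to throttle the copy so migration traffic leaves headroom for production I/O. A minimal sketch of such a rate-limited copy loop follows; the rate and paths are hypothetical.

```python
import time
from pathlib import Path

def throttled_copy(source: Path, target: Path, mb_per_sec: float = 50.0) -> None:
    """Copy one file while capping throughput so migration traffic
    leaves headroom for production I/O."""
    chunk_size = 1 << 20                      # read 1 MiB at a time
    delay = 1.0 / mb_per_sec                  # pause per chunk to hit the target rate
    with source.open("rb") as src, target.open("wb") as dst:
        while data := src.read(chunk_size):
            dst.write(data)
            time.sleep(delay)                 # crude pacing; real tools adapt to load

# Example (hypothetical paths):
# throttled_copy(Path("/mnt/fast/db.dat"), Path("/mnt/nearline/db.dat"), mb_per_sec=25)
```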

A storage environment that enables dynamic service level optimisation is attractive, but before deploying one, organisations should define what optimisation means to them and weigh its merits and feasibility relative to other means of meeting their goals.

Geoff Hough is director of product marketing at 3Par, which is exhibiting at Storage Expo


This was first published in October 2005
