
How to make the right storage infrastructure decisions

The total cost of ownership for a storage hardware infrastructure often runs significantly higher than its purchase price, so how can you make the right decisions and lower costs?

Data is growing at an exponential rate, and storing, retrieving and archiving it efficiently and cost-effectively is critical. Because the total cost of ownership for a storage hardware infrastructure often runs significantly higher than its purchase price, the key to making the right decisions and lowering costs lies in three areas: maintenance, labour and power.

Many storage infrastructure upgrade plans fail to calculate the actual requirements of a project. Too often, a large quantity of a specific kind of hardware is purchased without first deciding which server components are needed to drive that storage hardware efficiently. The result is poor performance and a frustrated customer and/or storage team, and in many cases the purchased hardware can't be used effectively and sits idle.

When making new storage infrastructure decisions, it's crucial to fully understand the application I/O profile for storage-area network (SAN) disk deployments and the amount of data to be backed up and cloned in your environment. Only then should you consider buying anything for your new infrastructure.

The backup infrastructure makes a good baseline for illustrating how planning should be done, and the same principles apply to any infrastructure. It's a very common mistake to assume that buying more tape drives automatically means more backup throughput to your tape library, and thus a shorter backup window and faster cloning.

I've seen the following scenario several times: a backup environment with 12 LTO-4 drives in the library, but with only two host bus adapters (HBAs) to drive them. This is a classic example of misunderstanding how to make infrastructure purchases and where capital expenditures can be reduced.

If you look at the theoretical throughputs of an LTO-4 tape drive and a 4 Gbps HBA, it becomes clear that two HBAs won't cut the mustard, and additional ones need to be installed into the backup servers to prevent server-to-tape bottlenecks and shoe-shining on the drives. Each LTO-4 tape drive can write at a native rate of 120 MBps, and data needs to be fed to the drives at this rate from the backup server for maximum efficiency. A 4 Gbps Fibre Channel HBA delivers roughly 400 MBps of usable throughput, so no more than three LTO-4 drives should be presented per HBA: three drives at full speed consume 360 MBps, close to the HBA's limit. Any more and the drives will be starved and operate less efficiently, which causes more wear on the drives and tape cartridges, as well as an increase in maintenance and labour costs.
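The sizing arithmetic above can be sketched in a few lines of code. This is a minimal illustration, not a vendor tool: the 400 MBps figure assumes the usable throughput of a 4 Gbps Fibre Channel HBA, and the function names are the author's own.

```python
LTO4_NATIVE_MBPS = 120   # native write rate of one LTO-4 tape drive
HBA_USABLE_MBPS = 400    # approximate usable throughput of a 4 Gbps FC HBA

def max_drives_per_hba(drive_mbps: float, hba_mbps: float) -> int:
    """Largest number of drives one HBA can feed at full native rate."""
    return int(hba_mbps // drive_mbps)

def hbas_needed(num_drives: int, drive_mbps: float, hba_mbps: float) -> int:
    """HBAs required so that no drive is starved below its native rate."""
    per_hba = max_drives_per_hba(drive_mbps, hba_mbps)
    return -(-num_drives // per_hba)  # ceiling division

# Three drives fully saturate one 4 Gbps HBA (3 x 120 = 360 MBps of ~400 MBps)
print(max_drives_per_hba(LTO4_NATIVE_MBPS, HBA_USABLE_MBPS))  # 3
# The 12-drive library from the scenario above needs four HBAs, not two
print(hbas_needed(12, LTO4_NATIVE_MBPS, HBA_USABLE_MBPS))     # 4
```

Running the same function with two HBAs in mind shows the problem in the earlier scenario: two HBAs can feed only six drives at full rate, leaving the other six shoe-shining.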

A better solution would be to purchase fewer tape drives and more HBAs, which are cheaper. Doing these upfront calculations brings an added bonus: maintenance costs may fall, because drives that aren't shoe-shining put less wear on themselves and on the tape cartridges.

The above example can be applied not only to the backup solution, but to any storage and server infrastructure component. Once these principles are applied throughout the organization, potential cost savings can be realised in terms of CAPEX and, perhaps more importantly, OPEX.

About the author: Spencer Huckstepp is a technical consultant at GlassHouse Technologies (U.K.), a global provider of IT infrastructure services. He has 11 years of IT industry experience, eight of which have been in the enterprise storage arena. Huckstepp's role includes involvement with various virtualisation, storage and backup strategy engagements.
