Storage capacity management is vital to ensure that an IT organisation keeps up with business growth and can provide customers with the services they expect. Unexpected spikes in SAN capacity requirements can create headaches for the IT department and potentially threaten business continuity.
In this interview, SearchStorage.co.UK Bureau Chief Antony Adshead speaks with Steve Pinder, principal consultant with GlassHouse Technologies (UK), about the key mistakes made in storage capacity management, best practices and how the IT department can implement a service-based model for storage provision.
You can read the transcript or listen to the MP3 file below.
Download for later:
Download the podcast with Steve Pinder
SearchStorage.co.UK: What are the key mistakes people make when managing SAN capacity?
Pinder: People make many mistakes when managing SAN capacity, but the common thread is a lack of planning for the future.
There are a couple of areas where I can give examples. The first one is growth rates: How fast is the SAN growing? If you don't know how fast the SAN is growing, how can you plan for what SAN you need to cover that growth rate?
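To make the growth-rate point concrete, here is a minimal sketch (not from the interview; all figures and names are hypothetical) of how an observed monthly growth rate can be turned into a runway estimate, i.e. how many months remain before the SAN is full:

```python
import math

def months_until_full(used_tb: float, total_tb: float,
                      monthly_growth_rate: float) -> float:
    """Months until used capacity reaches total, assuming compound growth.

    Solves used * (1 + r)^m = total for m:
        m = log(total / used) / log(1 + r)
    """
    if used_tb >= total_tb:
        return 0.0
    return math.log(total_tb / used_tb) / math.log(1.0 + monthly_growth_rate)

# Hypothetical example: 60 TB used of 100 TB, growing 3% per month
print(round(months_until_full(60, 100, 0.03), 1))  # about 17.3 months
```

Without the growth-rate input, there is no way to produce this kind of estimate, which is exactly the planning gap Pinder describes.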
Upcoming projects are another one. If IT is not aligned with the business and the two don't communicate, the IT department may have perfect knowledge of the growth rate for business-as-usual operations, but if a project comes along that needs 50 TB of storage, it can't be planned for unless the IT department [knows about it].
Another example is understanding storage placement. An IT department may have many terabytes of capacity free for upcoming projects and business-as-usual growth, but if it doesn't understand which tier of storage is needed, it could have plenty of Tier 1 storage ready for replication when all the business needs is some archive storage. Alternatively, there could be plenty of archive storage available when the business needs replicated Tier 1 [storage].
The common thread running through these issues is effective planning and [understanding the needs of the business and] making sure there's enough storage available for [those needs].
SearchStorage.co.UK: What are some best practices for [SAN and storage capacity management]?
Pinder: As I said before, the key to minimising issues is having accurate information on what will happen in future. If you have this information you'll be able to plan where pinch points are and … how to … mitigate them.
At GlassHouse we try to help customers deploy what's called a service provider model. What that means is that the IT function designates a certain number of service levels that IT will give the business with regard to storage, backup and other functions.
In practice, the IT department commits to providing a certain capacity of storage within a certain number of days. This could be 10% of the total capacity [or] a few terabytes. … If there's a service level in place, the business can be confident in the IT department's ability to provide storage within a certain timeframe at a certain level of service.
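One simple way to operationalise such a commitment is a headroom check: if IT promises to provision up to, say, 10% of total capacity on demand, free space falling below that headroom should trigger procurement. The threshold and names below are assumptions for illustration, not details from the interview:

```python
def headroom_ok(free_tb: float, total_tb: float,
                committed_fraction: float = 0.10) -> bool:
    """True if free capacity still covers the committed provisioning headroom.

    committed_fraction is the share of total capacity IT has promised
    to be able to provision on demand (hypothetical 10% default).
    """
    return free_tb >= committed_fraction * total_tb

# Hypothetical 100 TB array with a 10 TB committed headroom
print(headroom_ok(12, 100))  # True: 12 TB free covers the commitment
print(headroom_ok(8, 100))   # False: time to buy more capacity
```

The point of the check is that the trigger is the service commitment, not the technology: the same rule works regardless of which vendor's array sits underneath.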
From a business unit perspective, they don't really care [about] the underlying storage and services the IT department provides, as long as the storage meets these service levels.
It's an emphasis on service standards, not technology, and it allows the IT department to go out to tender to vendors and obtain storage that delivers that service level at the best price.
This separation gives the business confidence in the provision of storage and ultimately reduces the cost of storage for the business.