Managing storage for virtual environments: A complete guide
Put simply, storage capacity planning is the process of understanding how much storage is available, where it is, how to divide it between the various applications and users that require it, and how to maximise its utilisation.
With storage's growing importance and technological diversity, and the increasing demands being placed upon it, especially by virtual machines, the need for storage capacity planning has never been greater. The alternative is to wait until problems arise, but that can mean spending far more time firefighting and implementing poorly thought-out quick fixes.
Virtualisation complicates the picture. A physical server has natural physical limits; to increase its capacity you need to buy more memory, storage and CPU, or a whole new server. A virtual server has none of these limitations, and its needs can quickly scale up and down as demands change. This rapid scaling means that VMs add complexity from a capacity planning point of view, whether storage, CPU or memory, and so make the process of planning and growth prediction harder.
This is especially true in an orchestrated data centre, where VMs move from host to host as resource needs and availability change. In such cases, the VM management system will need to know how much and which types of storage are available, as well as whether the subsystem meets certain criteria, such as high availability and data protection.
As an example, virtualisation admins often need to know how many more VMs storage systems can support. The storage manager will then need to model different scenarios and scale them to match likely real-world workloads. This will involve categorising or profiling VMs, rather than just averaging capacity usage across all VMs. Profiles might be email, database, application or Web server, for example, reflecting the fact that a VM hosting a database will manifest different growth and usage patterns from one providing email services.
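Profiling of this kind can be sketched in a few lines. The following is a minimal illustration, not any vendor's implementation; the VM names, profiles and figures are invented for the example:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical inventory: (vm_name, profile, storage_used_gb, monthly_growth_gb)
vms = [
    ("mail01", "email", 220, 8),
    ("mail02", "email", 180, 6),
    ("db01", "database", 900, 40),
    ("web01", "web", 60, 1),
    ("web02", "web", 75, 2),
]

def profile_summary(vms):
    """Average usage and growth per VM profile, rather than one
    blended average across all VMs."""
    by_profile = defaultdict(list)
    for name, profile, used, growth in vms:
        by_profile[profile].append((used, growth))
    return {
        profile: {
            "vm_count": len(rows),
            "avg_used_gb": mean(used for used, _ in rows),
            "avg_growth_gb_per_month": mean(growth for _, growth in rows),
        }
        for profile, rows in by_profile.items()
    }
```

Grouping by profile preserves the fact that a database VM grows very differently from a web server, which a single fleet-wide average would hide.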
To start the planning process, gather data from VMs over a period of days or even weeks to ensure that you have captured underlying trends. You -- or your storage planning tool -- should collate data on CPU, memory and storage usage, and network and storage throughput and I/O. Quantifying and analysing how different types of VM use storage helps to answer questions about capacity and performance. For example, trending the data allows you to predict how much longer existing storage capacity can support existing and expected levels of demand and when you will need to acquire more or reallocate capacity.
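The trending step can be as simple as fitting a straight line to historical usage samples and extrapolating to the point where capacity runs out. A minimal sketch, assuming evenly spaced monthly samples:

```python
def months_until_full(samples_gb, capacity_gb):
    """Fit a least-squares line to monthly usage samples and estimate
    how many months remain before capacity_gb is reached.
    Returns None if usage is flat or shrinking (no exhaustion predicted)."""
    n = len(samples_gb)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples_gb) / n
    numerator = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples_gb))
    denominator = sum((x - x_mean) ** 2 for x in xs)
    slope = numerator / denominator  # growth in GB per month
    if slope <= 0:
        return None
    return (capacity_gb - samples_gb[-1]) / slope
```

For example, a volume that has grown from 100 GB to 130 GB over four months against a 200 GB ceiling would be predicted full in about seven months. Real planning tools fit richer models, but the principle is the same.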
Storage capacity planning can help you decide not just how much but what sort of storage you will need and where best to deploy it. Profiling capacity, throughput and I/O loads helps with understanding and determining what protocols, redundancy protection and array features you need to implement. Identification of issues could point towards the need, for example, to change drive types or RAID configuration used, to reroute some data, or simply to add disk capacity at a faster rate.
Further steps you should take include developing warning and critical thresholds, which define the upper levels of I/O and capacity utilisation at which performance starts to become constrained. These will vary, of course, by workload and application.
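Per-workload thresholds are straightforward to encode. The profile names and percentage levels below are illustrative assumptions, not recommended values:

```python
# Hypothetical thresholds (percent capacity utilisation) per workload profile.
THRESHOLDS = {
    "database": {"warning": 70, "critical": 85},
    "web": {"warning": 80, "critical": 90},
}
DEFAULT = {"warning": 75, "critical": 90}

def utilisation_status(profile, used_gb, allocated_gb):
    """Classify a volume as ok, warning or critical against the
    thresholds for its workload profile."""
    pct = 100 * used_gb / allocated_gb
    levels = THRESHOLDS.get(profile, DEFAULT)
    if pct >= levels["critical"]:
        return "critical"
    if pct >= levels["warning"]:
        return "warning"
    return "ok"
```

Note that the same 72% utilisation triggers a warning for a database volume but is still acceptable for a web server, reflecting their different performance sensitivity.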
Modern storage technologies can help reduce the cost of the future storage requirements that a capacity planning process predicts.
For example, automated storage tiering can help to reduce the demands on high-speed storage not just by managing application data but also by ensuring that only appropriate elements of a VM reside on the first tier. Page files change constantly and are very frequently accessed, so their virtual containers can reside on Tier 1 SAS drives or even Tier 0 SSDs, for example, while OS images -- whose I/O requirements do not greatly affect overall performance -- might live on Tier 2 SATA disks.
Meanwhile, data deduplication can reduce overall volumes of data, especially OS images, where there will be a high degree of commonality across maybe dozens or even hundreds of VMs.
Thin provisioning means that storage is allocated only when required rather than upfront. In addition, a good planning tool should be able to reclaim unused disk space, including space that has been provisioned but never written to. An example would be space provisioned from a standard VM template with a large default disk size that one or more VMs have not actually used. Such volumes can then be resized and the capacity reallocated.
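Finding reclamation candidates amounts to ranking VMs by the gap between provisioned and used capacity. A simple sketch, with invented figures and an arbitrary slack cut-off:

```python
def reclamation_candidates(vms, min_slack_gb=20):
    """Return VMs whose provisioned disk far exceeds actual use,
    largest slack first -- candidates for resizing so the spare
    capacity can be reallocated.
    vms: iterable of (name, provisioned_gb, used_gb) tuples."""
    candidates = []
    for name, provisioned, used in vms:
        slack = provisioned - used
        if slack >= min_slack_gb:
            candidates.append((name, slack))
    return sorted(candidates, key=lambda c: c[1], reverse=True)

# Hypothetical fleet: template default of 500 GB, mostly unused on vm3.
fleet = [("vm1", 200, 40), ("vm2", 100, 95), ("vm3", 500, 100)]
```

Here vm3 surfaces first with 400 GB of slack, while vm2, which is nearly full, is correctly excluded.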
Storage capacity planning tools should be able to aid all these processes by modelling multiple what-if scenarios, taking into account not just today's usage patterns but also future deployments of new technologies during the next budget cycle.
The key benefit of storage capacity planning tools is that they should help reduce or contain costs by avoiding overprovisioning and by providing actionable intelligence that is easier to use than hand-built spreadsheets, which, in a virtual environment containing hundreds of servers, can quickly grow large and complex.
Capacity planning tools
Aptare StorageConsole Capacity Manager offers views across storage systems including utilisation per array and LUN. It can highlight LUNs that have exceeded an allocated storage threshold and are in danger of running out of storage, and it can forecast storage array capacities based on current and historical storage allocation.
Dell EqualLogic SAN Headquarters (HQ) aims to identify performance bottlenecks and help with planning by identifying storage growth requirements. It also provides consolidated performance and event monitoring. This product only works with Dell EqualLogic SANs. All other capacity planning tools from the storage system vendors mentioned below claim to work with other makers’ arrays.
EMC Ionix ControlCenter StorageScope offers capacity utilisation reporting and trend analysis across heterogeneous storage including VMware and virtually provisioned storage. It allows you to plan for future capacity requirements using customised reports, and it will identify and reclaim stranded and underutilised storage assets.
IBM Tivoli Storage Productivity Center's features include automated system discovery, provisioning, configuration management, performance monitoring and replication. It provides storage metrics and analytics, including data on storage capacity, utilisation and performance, with trend analysis to help identify application workload contentions.
Microsoft System Center Operations Manager 2012 offers broad capabilities for Microsoft shops, including resource utilisation reporting and optimisation via management packs from Dell, HP, NetApp, Quest and vKernel, which then allows you to trend the data.
NetApp OnCommand Balance is designed to optimise infrastructure performance and capacity. It uses modelling and analytics to provide an understanding of how application workloads, utilisation levels and resources interact, and it provides reports to aid future planning, such as predicting when particular servers will run out of capacity.
SolarWinds’ Virtualization Manager (acquired from Hyper9) performs capacity planning via a dashboard, supports Hyper-V and VMware ESXi/vCenter, and allows you to predict future bottlenecks. It also helps to determine how much it would cost to run all or part of your virtual deployment in the public cloud and provides reports on usage by line of business and departments.
SolarWinds also sells Storage Manager, Powered by Profiler. Using a dashboard and a drill-down approach, it offers a range of features. Its storage capacity planning module automates data collection with the aim of making it easy to view storage growth rates, project when storage capacity will be reached and forecast costs. It can track performance over time to identify hot spots, peak hours and potential outages.
Virtual Instruments’ VirtualWisdom hardware-based system provides real-time monitoring and analysis, helps to identify and reduce overprovisioned SAN ports, and provides data on latency and throughput per port.
VMware View Planner is aimed at customers planning VDI rollouts. It consists of a virtual appliance that simulates workloads for different user types. It does this by reproducing typical actions of a Windows end user running productivity applications. As you would expect, it integrates with ESXi, vCenter and VMware View.
Manek Dubash is a business and technology journalist with more than 25 years of experience.