Capacity planning for server environments has always been a hassle. In the days of one server per application, it was fairly easy to map application requirements to server requirements. While that approach wasted a tremendous amount of server resources, it also reduced the likelihood of performance-killing resource contention. Now, with the advent of virtualisation, capacity planning has remained a pain but the questions have changed. If I were to develop a greatest-hits list of capacity planning questions for virtual environments, the following would top the charts:
- How many virtual machines (VMs) can I place on a specific host server?
- How many VMs can I support based on my current data center infrastructure?
- How can I get the most value from investments in my virtualisation infrastructure? (If you're like me, you may find it difficult to resist a clever but rude response to this one, but I'll refrain.)
- Where's the best place to put this VM?
In this tip, I'll outline some ways to make educated decisions about capacity and VM placement using Microsoft management products as examples.
Benefits of the scientific method
Before we jump into technology, I think it's important to stress the importance and value of planning. It's easy to just throw some virtual machines onto a host server and wait to see if users complain, but this approach often leads to substandard placement of VMs and irate users. Plus, users will never complain that their VMs are running too quickly or that their applications are performing above service-level agreement (SLA) requirements.
An organized capacity management initiative involves close measurement of application requirements, host server resource utilisation and performance trends over time. You can then analyze this data to make informed decisions and predictions about data center resource allocation. Assuming you're sold on the value of a scientific approach (versus the "just throw it on the heap" approach), the main challenge is collecting and analyzing the requisite data.
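As a toy illustration of the "predictions" part, here's a short sketch that fits a linear trend to daily utilisation samples for one host and estimates when the host will cross a capacity threshold. The sample data and the 80% threshold are invented for illustration; real trend analysis would use whatever counters and thresholds your SLAs dictate.

```python
# Hypothetical sketch: fit a least-squares trend line to daily CPU-utilisation
# samples for one host and estimate when it will cross a capacity threshold.
# The data points below are invented for illustration.

def fit_trend(samples):
    """Least-squares slope and intercept for (day, utilisation%) pairs."""
    n = len(samples)
    mean_x = sum(d for d, _ in samples) / n
    mean_y = sum(u for _, u in samples) / n
    slope = (sum((d - mean_x) * (u - mean_y) for d, u in samples)
             / sum((d - mean_x) ** 2 for d, _ in samples))
    return slope, mean_y - slope * mean_x

def days_until(samples, threshold=80.0):
    """Days from the last sample until the trend reaches the threshold.

    Returns None if utilisation is flat or declining.
    """
    slope, intercept = fit_trend(samples)
    if slope <= 0:
        return None
    last_day = max(d for d, _ in samples)
    return max(0.0, (threshold - intercept) / slope - last_day)

# Thirty days of samples rising roughly 0.5% per day from a 50% baseline
history = [(day, 50 + 0.5 * day) for day in range(30)]
print(round(days_until(history), 1))  # about 31 days of headroom left
```

Even this naive extrapolation beats guessing; a production tool would use far more counters and smarter models, but the principle is the same.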
Establishing a performance-monitoring system
All current versions of the Windows platform provide a variety of methods for obtaining and recording performance data. One example is the ever-popular Windows Performance Monitor (also known as System Monitor). With a little practice, you can monitor just about every operating system (OS) component, application and service with this tool. Performance Monitor has numerous capabilities, some of which may be cleverly hidden to some system administrators:
- The ability to monitor statistics from different servers or workstations using a single console or job
- Scheduling options to start and stop data collection
- The ability to write data to binary or text files and send it to a relational database
- Options to load recorded performance data for later analysis
While these features can be helpful in many scenarios (such as troubleshooting), there are thousands of available performance statistics on most server OSes. Figuring out exactly what to collect can be difficult, and when multiplied by the number of systems in the environment (hosts and guests), this manual approach can quickly become unmanageable.
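One way to tame the volume is to export counter logs to text (Windows' real `typeperf` and `relog -f csv` tools both emit CSV) and post-process them. The sketch below imitates that CSV layout with a hard-coded sample; the host and counter names are illustrative.

```python
# Sketch: summarise a CSV log exported from Performance Monitor
# (e.g. via `typeperf` or `relog -f csv`). The sample text below
# imitates that format; the counter names are illustrative only.
import csv
import io
import statistics

sample_log = '''"(PDH-CSV 4.0)","\\\\HOST1\\Processor(_Total)\\% Processor Time","\\\\HOST1\\Memory\\Available MBytes"
"04/01/2010 09:00:00","12.5","2048"
"04/01/2010 09:00:15","37.5","1920"
"04/01/2010 09:00:30","25.0","1984"
'''

def summarise(csv_text):
    """Return the mean of each counter column, keyed by counter name."""
    reader = csv.reader(io.StringIO(csv_text))
    headers = next(reader)[1:]            # drop the timestamp column
    columns = {h: [] for h in headers}
    for row in reader:
        for header, value in zip(headers, row[1:]):
            columns[header].append(float(value))
    return {h: statistics.mean(v) for h, v in columns.items()}

for counter, avg in summarise(sample_log).items():
    print(f"{counter}: {avg:.1f}")
```

The same script could just as easily compute peaks or percentiles, which matter more than averages when you're hunting for contention.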
Programmers do it with class[es]
In the world of Windows OSes and programs there are numerous ways to create, access and collect performance statistics. For example, developers can easily add instrumentation (that is, custom performance counters) to their applications with just a few lines of code. PowerShell, WMI, COM, VBScript and the .NET platform are all options for connecting to and analyzing performance data. Enterprising administrators can find ways to monitor large numbers of systems using a custom application or service. But, like the manual approach, this can take a lot of time and development expertise.
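To make the "custom application or service" idea concrete, here's a minimal sketch of a collector that polls a set of hosts and writes samples to a relational database. The `read_cpu_percent` function is a stand-in for a real remote query (for instance, against WMI's `Win32_PerfFormattedData` classes) and just returns canned numbers here; everything else is ordinary plumbing.

```python
# Hypothetical "roll your own" collector: poll several machines and
# store samples in a relational database (SQLite, for the sketch).
import sqlite3
import time

def read_cpu_percent(host):
    # Placeholder: a real collector would query the remote host,
    # e.g. via WMI's Win32_PerfFormattedData classes or PowerShell.
    return {"HOST1": 42.0, "HOST2": 73.5}[host]

def collect_once(conn, hosts):
    """Take one sample from each host and persist it in one transaction."""
    now = time.time()
    with conn:
        conn.executemany(
            "INSERT INTO samples (host, taken_at, cpu_pct) VALUES (?, ?, ?)",
            [(h, now, read_cpu_percent(h)) for h in hosts],
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (host TEXT, taken_at REAL, cpu_pct REAL)")
collect_once(conn, ["HOST1", "HOST2"])
print(conn.execute("SELECT COUNT(*) FROM samples").fetchone()[0])
```

Run `collect_once` on a schedule and the table becomes exactly the kind of performance history the rest of this tip is about -- which is also why building and maintaining such a service yourself is real work.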
Can't someone else do it?
What if you're one of those administrators who doesn't look kindly upon added work and effort? You know, the kind that has other things to do before going home at 9:00 p.m.? Fortunately, the virtualisation industry has spawned many products and services that help organisations make better virtualisation decisions. Most of these products provide varying levels of monitoring and reporting features with the overall goal of supporting better capacity planning and deployment decisions.
Monitoring and optimisation with System Center
Microsoft's answer to managing data center resources is delivered as several products that are part of its System Center suite. You can visit the Microsoft System Center website for details, and you can download virtual hard disk (VHD) images for evaluation purposes. Of particular interest in the area of capacity planning are System Center Operations Manager (SCOM) and System Center Virtual Machine Manager (SCVMM).
As its name implies, SCOM is designed to monitor the entire data center environment. It has the ability to detect a wide variety of problems and to automatically take corrective actions or alert administrators when necessary. SCOM also creates a performance database that can track resource usage statistics for all of the systems in the environment. This data is collected, stored and analyzed automatically and can help admins determine which servers have additional capacity and which might soon suffer a stress-induced heart attack.
Once you have the necessary performance data, it's time to use it to make better capacity planning decisions. SCVMM can tap into the data collected by SCOM to provide recommendations about VM placement. Its Performance and Resource Optimisation (PRO) algorithm takes into account the technical requirements for the VM along with resource estimates. It then compares this information to the current and historical performance data collected by SCOM for all systems in the data center. Based on some mathematical wizardry, it boils the data down to a five-star rating system that even the least tech-savvy boss could understand.
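SCVMM's actual PRO scoring is part of the "mathematical wizardry" and isn't public, but the general idea of collapsing placement data into stars can be sketched. In the toy version below, the weights, thresholds and function name are all invented for illustration: rate each resource by the headroom left after placing the VM, then let the most constrained resource set the rating.

```python
# Hypothetical star-rating sketch; SCVMM's real PRO algorithm is not
# public, so the weights and thresholds here are invented.
def placement_stars(host_free_cpu_pct, host_free_mem_mb, vm_cpu_pct, vm_mem_mb):
    """Rate a candidate host 0-5 stars for a VM's estimated resource needs."""
    cpu_headroom = host_free_cpu_pct - vm_cpu_pct
    mem_headroom = host_free_mem_mb - vm_mem_mb
    if cpu_headroom < 0 or mem_headroom < 0:
        return 0  # the host cannot fit the VM at all
    cpu_score = min(cpu_headroom / 10, 5)    # 10% free CPU per star
    mem_score = min(mem_headroom / 1024, 5)  # 1 GB free RAM per star
    # The most constrained resource determines the rating
    return int(min(cpu_score, mem_score))

print(placement_stars(60, 8192, 20, 2048))  # roomy host
print(placement_stars(25, 3072, 20, 2048))  # tight fit
```

A real implementation also weighs disk and network I/O, historical trends and placement policies, but the appeal is the same: one number a non-specialist can act on.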
PRO is also able to monitor VMware hypervisors and provides extensibility to allow third-party developers to create their own management packs. Of course, SCVMM's work is far from finished after a VM is deployed -- it also continuously monitors the entire environment and can make change recommendations based on predefined policies and rules.
There are numerous ways to collect performance data and use it to make more educated decisions related to capacity planning in the data center. All of the methods involve some level of time investment, expertise or both. But considering the alternative -- blind guesses or trial and error (with an emphasis on the latter) -- you'll likely find that it's worth the effort to invest in capacity planning tools and methods.