Common traps to avoid when undergoing capacity planning

Our expert explains issues that users may encounter when undertaking capacity planning exercises, including misleading performance percentages.


Capacity planning with VMware vSphere
  Introduction: Capacity planning: A how-to guide
  Part 1: VMware vSphere capacity planning recommendations
  Part 2: The challenges of working with VMware Capacity Planner
  Part 3: Common traps to avoid when undergoing capacity planning

Contrary to popular belief, it is by no means mandatory to part with your business's hard-earned cash and buy capacity planning software. The reality is that companies frequently find they have to, because their estate is so vast and so poorly documented that the prospect of carrying out an inventory manually seems impossible. I believe they are right.


Using Microsoft Windows or Red Hat Linux tools alone to benchmark a physical host is going to take too long. It can be done, though, and it might be of interest to SMBs. The real point I want to get across is that sometimes a capacity planning exercise goes wrong by merely telling you, as an organisation, what you already know. To be worthwhile, the exercise must cast a new light on your estate -- you should be shocked by what it digs out.
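
As a rough illustration of what that manual approach involves, here is a minimal sketch in Python that samples a host's CPU and memory at a fixed interval and writes the figures to a CSV for later analysis. It assumes the third-party psutil package is available, and the interval, sample count and file name are arbitrary example choices rather than recommendations.

# Minimal sketch: sample CPU and memory on a physical host and log to CSV.
# Assumes psutil is installed (pip install psutil); the interval, sample
# count and output file name are illustrative values only.
import csv
import time

import psutil

SAMPLE_INTERVAL_SECS = 60      # one sample per minute
SAMPLE_COUNT = 60 * 24         # roughly one day of samples

with open("host_samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "mem_used_mb"])
    for _ in range(SAMPLE_COUNT):
        cpu = psutil.cpu_percent(interval=1)               # % busy over 1 second
        mem_mb = psutil.virtual_memory().used / (1024 ** 2)
        writer.writerow([int(time.time()), round(cpu, 1), round(mem_mb)])
        time.sleep(SAMPLE_INTERVAL_SECS)

Running something like this on one server is trivial; running it across hundreds of servers, then collating and interpreting the output, is where the manual approach stops scaling.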

Laying that to one side, I want to outline here the kind of issues you should watch out for. At the end of the day, no matter what software you use to collect the information, the real art lies in its interpretation.

So I want to use this last part to flag up some of the common gotchas and traps people fall into time and time again when doing capacity planning exercises, along with some pointers on how guest operating system tools can be helpful.

Memory, memory, memory
By far the most constraining resource in most people's virtualisation environments is the physical memory available to the hypervisor. This could turn into an argument for "buy as much as possible," which I want to avoid.

My general point is this: as the most constraining resource, it should be the one you look out for the most. If you have a memory hog in the physical world, it will be a memory hog in the virtual world, too. The good news about this is that unlike CPU cycles, memory is a relatively linear resource that doesn't fluctuate greatly from one second to the next. This means you can effectively count up your average and peak memory usage on a physical machine and calculate what it would need for day-to-day operations, along with what it might demand at peak times.
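
To make that arithmetic concrete, here is a minimal sketch, assuming you already have a set of memory samples (in MB) collected from the physical machine. The example figures, the choice of the 95th percentile and the 20% headroom are illustrative assumptions, not rules.

# Minimal sketch: turn collected memory samples into a suggested VM allocation.
# The sample data, percentile and headroom factor are illustrative only.
from statistics import mean, quantiles

samples_mb = [3100, 3250, 3400, 3900, 4200, 3300, 3150, 4100]   # example data

average_mb = mean(samples_mb)              # day-to-day working set
peak_mb = max(samples_mb)                  # absolute observed peak
p95_mb = quantiles(samples_mb, n=100)[94]  # 95th percentile of the samples

# Size for the 95th percentile plus some headroom rather than the raw peak,
# so a single transient spike does not dictate the allocation.
suggested_mb = round(p95_mb * 1.2)

print(f"average {average_mb:.0f} MB, peak {peak_mb} MB, "
      f"95th percentile {p95_mb:.0f} MB -> allocate ~{suggested_mb} MB")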

You'll be surprised how much you know…
Frequently, I'm amazed by how much customers know about their own environments before they begin. That might sound like a contradiction to the main body of my argument. Do you remember me saying how clueless some managers and operational staff are about the number of physical servers they have? Well, it cuts the other way too.

The stuff they maintain and support on a daily basis, they know very well. So they are not surprised to find that it's the big enterprise application servers, such as Microsoft Exchange, Oracle or SAP, that need the most grunt.

Even more unsurprisingly, these are precisely the same servers that need extra care and attention during the capacity planning and server consolidation process. They are precisely the type of applications that you want running on their own logical unit number (LUN) in a storage array, surrounded by relatively smaller virtual machines on the same hypervisor.

It's not really 100%...
Whenever you see a report that something is running at 100%, whether it is physical or virtual -- beware, because it might not be.

Firstly, if a physical system is running at 100%, that means all of a given resource -- say, the CPU -- is being consumed. If this is an old server, that 100% can be quite misleading. By definition, the operating system and the application are circumscribed by the limits of the physical system. The figure gives no indication of whether the system would like 101% or, like a contestant on "The Apprentice," 110%.

My general point is that most performance analysers and capacity planning tools fall at the first hurdle of the "What if..." questions we want to ask: "What if this system was unconstrained?" "What would be the maximum it would want?"

A similar problem shows itself when two virtual machines want to execute on the same core at the same time. By definition, each can receive at most 50% of that core, yet the guest operating systems may well report they are getting 100%. In short, there are lies, damned lies and then performance percentages.
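
A back-of-the-envelope sketch of that effect, assuming a single 3GHz core shared by two busy single-vCPU virtual machines (the clock speed and VM count are just example figures):

# Minimal sketch: why a guest-reported 100% is not the whole story.
# Figures are illustrative; a real hypervisor scheduler is far more subtle.
CORE_MHZ = 3000          # one physical core at 3GHz
contending_vcpus = 2     # two busy VMs scheduled on the same core

# Each guest sees its vCPU busy whenever it runs, so it reports 100%...
guest_reported_percent = 100

# ...but the physical cycles actually delivered to each VM are shared.
actual_mhz_per_vm = CORE_MHZ / contending_vcpus
actual_share_percent = 100 / contending_vcpus

print(f"Guest reports: {guest_reported_percent}% busy")
print(f"Actual entitlement: {actual_mhz_per_vm:.0f} MHz "
      f"({actual_share_percent:.0f}% of the core)")

In vSphere terms, the time a virtual machine spends waiting for a physical CPU shows up as ready time in the hypervisor's own counters, which is one reason the host-level view is worth trusting over the guest-level one.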

Conclusion: Yes, it really will be quicker…
Despite all the anxiety about performance and capacity planning, I sometimes think it reveals more about our own human agendas and frailty. It's counterintuitive to say that you can run 10 or 20 instances of Windows on the same physical server and get better performance. But, in the main, that is precisely what hundreds of happy virtualisation customers find.

It's one of the reasons the technology is so popular and increasingly all-encompassing. It stands to reason that if you replace jurassic hardware with something state of the art, you stand a very good chance of improving service quality rather than degrading it. This is something I've seen time and time again.

I often have lengthy discussions about performance analysis after a virtualisation project is over. Anxiety levels go sky high, only for people to find that performance is actually better. Looking ahead, I can see these capacity planning tools morphing into systems that analyse and report on both your physical and virtual estate; it's a direction that companies like PlateSpin have already taken.

Mike Laverick

ABOUT THE AUTHOR: Mike Laverick is a professional instructor with 15 years' experience in technologies such as Novell, Windows and Citrix, and he has been involved with the VMware community since 2003. Laverick is a VMware forum moderator and member of the London VMware User Group Steering Committee. In addition to teaching, Laverick is the owner and author of the virtualisation website and blog RTFM Education, where he publishes free guides and utilities aimed at VMware ESX/VirtualCenter users. In 2009, Laverick received the VMware vExpert award and helped found the Irish and Scottish user groups. Laverick has had books published on VMware Virtual Infrastructure 3, VMware vSphere 4 and VMware Site Recovery Manager.
