Capacity planning in the enterprise
Capacity planning is becoming an increasingly important way to balance future hardware costs with computing needs.
Any enterprise, from a mom-and-pop e-tailer to a multinational corporation, can benefit from an explicit plan for future technical requirements. Capacity planning's benefits range from avoiding the inconvenience of running out of disk space to ensuring enough CPU power to handle a global corporation's computing workload. Yet capacity planning is more than simply preparing to add CPUs or disk space. Factoring in other requirements, such as network bandwidth and personnel, makes capacity planning as much an art as a science.
At the management level, capacity planning can serve as a reality check, prompting an organization to come to terms with where its business is and what its goals are for the near future. Generally, an enterprise begins by gathering and analyzing current data, then projects future needs from that data and the resources it estimates will be required to accomplish its business goals.
The procedure sounds straightforward enough, but, in practice, approaches to capacity planning vary widely. Several IT veterans offer insight into their own strategies:
Gerhard Adam, president of SYSPRO of Grants Pass, Ore., cautions that systems should be well tuned before current workloads are analyzed. "A poorly tuned system cannot be used to determine future capacity requirements, since the existing capacity is being misused," he said. Moreover, some companies focus on their production workloads but ignore development or maintenance work, said Adam, who works with IBM S/390s. "It is unrealistic to devote all the resources to production work, only to discover that the rest of the organization comes to a standstill because the lower priority work doesn't run at all," he said.
In the early stages of planning, Adam gathers data from the user community to get an idea of future growth, especially in terms of application use. He examines CPU and memory usage and develops a workload profile to anticipate how resources will be affected. A modeling tool can then illustrate the growth of one application and its impact on lower-priority work, he said.
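To make that kind of model concrete, the rough Python sketch below projects one application's CPU growth and shows the headroom left for lower-priority work shrinking. The figures are hypothetical placeholders, not measurements from Adam's shop.

# A minimal workload-growth model: project one application's CPU use and
# watch the headroom available for lower-priority work shrink.
# All figures are hypothetical placeholders.

TOTAL_CPU = 100.0          # total capacity, as a percentage of the system
PRODUCTION_NOW = 55.0      # current production CPU use (percent)
LOWER_PRIORITY_NOW = 25.0  # current development/maintenance CPU use (percent)
MONTHLY_GROWTH = 0.04      # assumed 4% month-over-month production growth

def project(months: int) -> None:
    """Print projected production use and remaining headroom for each month."""
    for month in range(1, months + 1):
        production = PRODUCTION_NOW * (1 + MONTHLY_GROWTH) ** month
        headroom = TOTAL_CPU - production
        warning = "  <- lower-priority work starts to suffer" if headroom < LOWER_PRIORITY_NOW else ""
        print(f"month {month:2d}: production {production:5.1f}%, headroom {headroom:5.1f}%{warning}")

project(12)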
"You can't plan for the future if you don't know where you stand now," said Jamie Wilson, president and CEO of JTW Internet Services, Inc. (http://www.jtwis.com) of Tampa, Fla. Wilson uses Sun Enterprise Servers running Solaris. When planning capacity, he starts out using system tools such as "sar, netstat and vmstat" to determine what capacity the system can handle.
He then projects future growth based on past and current workloads. To be safe, Wilson doubles that projection and plans for the resulting capacity.
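That rule of thumb reduces to a few lines of arithmetic. The sketch below, with made-up numbers rather than Wilson's, extrapolates average month-over-month growth from a short history of peak load and doubles the result before sizing.

# Project growth from past workload measurements, then double the projection
# as a safety margin before sizing. The history figures are hypothetical.

def plan_capacity(history, months_ahead=12, safety_factor=2.0):
    """Extrapolate average month-over-month growth and apply the safety factor."""
    growth_rates = [later / earlier for earlier, later in zip(history, history[1:])]
    avg_growth = sum(growth_rates) / len(growth_rates)
    projected = history[-1] * avg_growth ** months_ahead
    return projected * safety_factor

monthly_peak_load = [120, 130, 138, 151, 160, 174]   # e.g. peak requests per second
target = plan_capacity(monthly_peak_load)
print(f"plan for roughly {target:.0f} requests per second a year from now")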
"Stay ahead of the curve," Wilson said. "Don't get so far behind that instead of planning for capacity you'd be avoiding outages."
Performance specialist Peter Gulotta of Melillo Consulting, Inc. (http://www.mjm.com) of Somerset, N.J., said capacity planning is as much an issue of communication as an issue of technology. "Capacity planning has to be a collaboration among systems, database and network personnel. Those areas are so intertwined today."
For example, a company may be concerned about running out of disk space. The systems administrator checks the system and finds disk space is almost at capacity. Yet the database administrator (who is usually in a different department) knows that the disk isn't actually full; it holds database tables that only make it appear full. A simple conversation would clear up the confusion, said Gulotta, who works with Unix-based Hewlett-Packard servers.
Ron Herardian, CEO of Global System Services (http://www.gssnet.com/) of Mountain View, Calif., looks beyond server usage when planning capacity for Domino. While many companies focus on RAM and CPU usage, network bandwidth requirements and personnel must also figure into the equation. "If your bandwidth is expensive, then it might actually be cheaper to buy another server. But that might mean having to hire a third administrator, which would cost a lot more," he said.
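The trade-off Herardian describes is ultimately a total-cost comparison. A back-of-the-envelope version, with purely illustrative dollar figures, might look like this:

# Compare annual costs of adding bandwidth vs. adding a server, with and
# without the extra administrator a new server might require.
# All dollar figures are illustrative placeholders.

options = {
    "add bandwidth":            {"bandwidth": 36_000, "hardware": 0,      "staff": 0},
    "add a server":             {"bandwidth": 0,      "hardware": 15_000, "staff": 0},
    "add a server + 3rd admin": {"bandwidth": 0,      "hardware": 15_000, "staff": 70_000},
}

for name, costs in options.items():
    print(f"{name:26s} annual cost: ${sum(costs.values()):>7,}")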
Herardian approaches capacity planning scientifically. He first uses models to project 12 months of expected growth based on measurements of current use, then uses TPC-C, an online transaction processing benchmark, to see which systems can handle the projected growth. He also recommends using a network analyzer to measure Web traffic.
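A stripped-down version of that projection, with a hypothetical growth rate and made-up server ratings rather than published TPC-C results, might look like the following. In practice benchmark figures would be heavily derated, since a TPC-C workload rarely matches a site's actual transaction mix.

# Project 12 months of transaction growth from current measurements, then
# check which candidate systems could carry the load. Ratings and growth
# rate are hypothetical; only half of each rating is treated as usable.

CURRENT_TPM = 4_000        # measured transactions per minute today
MONTHLY_GROWTH = 0.06      # assumed 6% month-over-month growth
DERATING = 0.5             # assume half the benchmark rating is usable in practice

projected_tpm = CURRENT_TPM * (1 + MONTHLY_GROWTH) ** 12

candidates = {"server A": 10_000, "server B": 18_000, "server C": 30_000}  # tpmC-style ratings

print(f"projected load in 12 months: {projected_tpm:,.0f} transactions per minute")
for name, rating in candidates.items():
    usable = rating * DERATING
    verdict = "sufficient" if usable >= projected_tpm else "too small"
    print(f"{name}: ~{usable:,.0f} usable tpm -> {verdict}")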
In the end, Herardian tries to err on the side of over-engineering. In the vast majority of cases, companies ultimately use the excess capacity. "You can always scale back. The opposite isn't always that easy," he said.