Sizing server hardware for virtual machines

How many virtual machines can you put on a server? That depends on whether your hardware and virtualisation environment are well matched. This tip offers four criteria for selecting the right servers for VMs.

How many virtual machines can you fit on a host server? That's a frequently asked question when IT pros consider which hardware to purchase for their virtual hosts. In this tip, I share what I've learned about choosing servers for different types of virtual machines and for meeting the current and future needs of a virtual environment.

You may be able to fit as many as 100 VMs on a single host, or as few as two. The types of applications running on your virtual machines will largely dictate how many you can put on a host server: servers with very light resource requirements, such as web, file and print servers, can be packed far more densely than servers with medium to heavy resource requirements, such as SQL and Exchange servers. Before you buy, analyse the performance of your current environment to get a better understanding of what the virtual environment will require.

Four criteria for sizing up host servers

There are four major criteria to consider when sizing up server hardware: memory, CPU, network and disk resources. Let's start with memory, which is typically the first resource to be exhausted on host servers.

Memory

When it comes to figuring out how much RAM to put in a host server, I would recommend installing the maximum amount if possible.

Take the opposite approach, though, when allocating memory to virtual servers: give a VM only the amount of memory it actually needs. With physical servers, more memory than necessary is usually installed and much of it ends up being wasted. With a VM, it is simple to increase the RAM at any time, so start out with the minimum amount of memory you think it will need and increase it later if necessary. It is possible to over-commit memory to virtual machines and assign more RAM to them than the physical host actually has. By doing this you run the risk of your VMs swapping to disk when host memory is exhausted, which degrades performance.
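
To illustrate that risk, here is a minimal Python sketch; the function name, the 2 GB hypervisor overhead and the example figures are my own assumptions, not values from any particular hypervisor. It totals the RAM assigned to VMs and compares it with what the host physically has:

    # Rough memory over-commitment check; the overhead figure is illustrative.
    def memory_overcommit_ratio(vm_ram_gb, host_ram_gb, hypervisor_overhead_gb=2):
        """Return the ratio of RAM assigned to VMs to usable host RAM."""
        usable = host_ram_gb - hypervisor_overhead_gb   # RAM left over for VMs
        assigned = sum(vm_ram_gb)                       # total RAM given to VMs
        return assigned / usable

    # Example: ten VMs with 4 GB each on a 32 GB host
    ratio = memory_overcommit_ratio([4] * 10, host_ram_gb=32)
    print(f"Over-commitment ratio: {ratio:.2f}")        # 1.33
    if ratio > 1.0:
        print("Host RAM is over-committed; VMs may swap to disk under load.")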

CPU

With the advent of multi-core CPUs, it has become easier and cheaper to increase the number of CPUs in a host server. Nowadays, almost all servers come with two or four cores per physical CPU. A good rule of thumb is that four single-vCPU VMs can be supported per CPU core. This can range from as few as one or two per core to as many as eight to ten per core, depending on the average CPU utilisation of the applications running on the VMs.
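
To turn that rule of thumb into a quick estimate, here is a small sketch; the utilisation bands are my own assumptions drawn from the one-to-two and eight-to-ten figures above, not fixed thresholds:

    # Approximate single-vCPU VMs per core, based on the rule of thumb above.
    def vms_per_core(avg_cpu_utilisation_pct):
        """Map average VM CPU utilisation to a rough VMs-per-core figure."""
        if avg_cpu_utilisation_pct >= 50:
            return 1    # heavy workloads: roughly 1-2 VMs per core
        if avg_cpu_utilisation_pct >= 20:
            return 4    # typical workloads: about 4 VMs per core
        return 8        # very light workloads: 8-10 VMs per core

    # Example: 24 lightly loaded VMs (about 10% average CPU) on one host
    print(24 / vms_per_core(10))    # 3.0 cores, so a single quad-core CPU would cope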

A common misconception with virtual servers is that a VM can use as much CPU megahertz as it needs from the combined total available. For example, a host with four quad-core 2.6 GHz CPUs has a combined total of 41,600 MHz (16 x 2.6 GHz). A single-vCPU VM, however, can never use more megahertz than the maximum of one CPU/core. If a VM has two vCPUs, it can never use more megahertz than the combined maximum of two cores. How many cores you need will also depend on whether or not you use VMs with multiple vCPUs.
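
A quick worked example of that ceiling, using the same illustrative host as above:

    # Per-vCPU megahertz ceiling versus combined host capacity.
    cores = 4 * 4                          # four quad-core CPUs = 16 cores
    core_mhz = 2600                        # 2.6 GHz per core
    host_total_mhz = cores * core_mhz      # 41,600 MHz combined
    one_vcpu_cap = core_mhz                # a 1-vCPU VM tops out at one core
    two_vcpu_cap = 2 * core_mhz            # a 2-vCPU VM tops out at two cores
    print(host_total_mhz, one_vcpu_cap, two_vcpu_cap)   # 41600 2600 5200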

You should always have at least one more core than the maximum number of vCPUs that will be assigned to a single VM. For example, don't buy a two-processor, dual-core server with a total of four cores and try to run a four-vCPU VM on it. The reason is that the hypervisor's CPU scheduler needs to find four free cores simultaneously each time the VM makes a CPU request; if only four cores are available in total, performance will be very slow. My recommendation is to use quad-core CPUs, because more cores give the CPU scheduler more flexibility to process requests.
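
A minimal check for that rule, written as a hypothetical helper rather than anything a hypervisor actually exposes:

    # A host should have at least one more core than the largest VM's vCPU count.
    def enough_cores_for_vm(host_cores, largest_vm_vcpus):
        return host_cores >= largest_vm_vcpus + 1

    print(enough_cores_for_vm(host_cores=4, largest_vm_vcpus=4))   # False: avoid this
    print(enough_cores_for_vm(host_cores=8, largest_vm_vcpus=4))   # True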

Network

The number of network interface cards (NICs) needed in a host server will vary based on how much redundancy is desired, whether or not network storage will be used and which features will be selected. Using 802.1Q VLAN tagging provides the flexibility of running multiple VLANs on a single NIC, eliminating the need for a separate NIC for each VLAN on a host server. For smaller servers you can get away with two NICs, but it is best to have a minimum of four NICs on your host server. If you are using network storage, such as iSCSI, it is wise to have more than four NICs, especially if you are going to use features like VMware's vMotion. When creating vSwitches, assign multiple NICs to them for redundancy and to increase the capacity available to VMs.
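
One rough way to tally the NIC count is sketched below; the per-role counts are my own assumptions based on the guidance above, not VMware requirements:

    # Rough NIC tally for a host; the per-role counts are illustrative.
    def minimum_nics(redundant=True, iscsi_storage=False, vmotion=False):
        nics = 2                  # baseline: VM traffic plus management
        if redundant:
            nics += 2             # second uplink per vSwitch for redundancy
        if iscsi_storage:
            nics += 2             # dedicated, redundant iSCSI uplinks
        if vmotion:
            nics += 1             # separate network for vMotion traffic
        return nics

    print(minimum_nics())                                   # 4
    print(minimum_nics(iscsi_storage=True, vmotion=True))   # 7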

Disk

Finally, disk resources need to be evaluated. There are many choices available, and which one you choose will largely be dictated by your budget and by whether you have a storage area network (SAN) available in your environment. Local disk is the cheapest option, but it does not allow for advanced features that require shared storage between host servers, such as vMotion. Fibre Channel SAN disk is typically the best-performing option, but usually also one of the most expensive. Network storage, such as iSCSI, is a good alternative and has come close to matching SAN performance. Using 15K rpm hard drives will give a performance gain over 10K rpm drives, but it is also important to have larger RAID groups available to spread disk I/O across as many drive spindles as possible.

When determining how much disk to buy, make sure you have enough for all your virtual machines to use, plus an extra 10-20% for other VM files and snapshots. If you plan to make heavy use of snapshots, allow for even more disk space. In many cases a combination of disk resources is used with your hosts: for example, storing development and test VMs on local disk while keeping production VMs on shared storage.
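
Expressed as a simple calculation (the 15% default and the extra snapshot allowance are assumptions within the 10-20% range above):

    # Disk sizing: sum of VM virtual disks plus overhead for VM files and snapshots.
    def host_storage_needed_gb(vm_disks_gb, overhead_pct=15, heavy_snapshots=False):
        if heavy_snapshots:
            overhead_pct += 10    # allow extra room for heavy snapshot use
        return sum(vm_disks_gb) * (1 + overhead_pct / 100)

    # Example: eight VMs with 60 GB virtual disks each
    print(f"{host_storage_needed_gb([60] * 8):.0f} GB")                         # 552 GB
    print(f"{host_storage_needed_gb([60] * 8, heavy_snapshots=True):.0f} GB")   # 600 GB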

Typically, you want your virtual machines to use at least 80% of your host server's capacity to maximise your investment. However, leave enough spare capacity for future growth, and ensure that enough resources will be available to support additional virtual machines in case of a host failure. It is better to have too much capacity than not enough, so that you avoid constraining your resources and the need to purchase additional host servers.
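
To make that trade-off concrete, here is a rough cluster-level check; the assumption that the cluster should tolerate the loss of one host is mine, a common but not universal design choice:

    # Report overall utilisation and whether the remaining hosts could carry
    # the full VM load if one host failed (illustrative design assumption).
    def capacity_check(vm_load, host_capacity, hosts):
        utilisation = vm_load / (host_capacity * hosts)
        survives_host_failure = vm_load <= host_capacity * (hosts - 1)
        return utilisation, survives_host_failure

    # Example: three equally sized hosts, load expressed in arbitrary units
    for load in (180, 250):
        util, ok = capacity_check(vm_load=load, host_capacity=100, hosts=3)
        print(f"Load {load}: {util:.0%} utilised, survives one host failure: {ok}")
    # Load 180: 60% utilised, survives one host failure: True
    # Load 250: 83% utilised, survives one host failure: False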

About the author: Eric Siebert is a 25-year IT veteran with experience in programming, networking, telecom and systems administration. He is a guru-status moderator on the VMware community VMTN forum and maintains VMware-land.com, a VI3 information site.
