Editor’s note: This is the third in a series of four articles on server hardware equipment refreshes. Part one detailed strategies for getting the timing right; part two showed you how to garner support for the project; and this third instalment coaches you on hardware selection in a rapidly changing technology environment.
In today’s economic climate, business stakeholders expect proposed IT infrastructure and services to be cost-effective and economical, and to reduce future operational costs. Unfortunately, missteps during a server refresh can waste money, hamper ROI and increase TCO.
Three or four years ago, data centre managers had fewer server platform choices, so the server landscape was less confusing and less prone to waste. Fast-forward to today, and there are myriad server hardware platforms and form factors. Emerging technologies have produced an immense array of models, each designed for particular workloads and requirements, which can make the selection process daunting. The vast array of available CPU models and sizes confuses matters further.
There are, however, strategies that can help IT managers through the process of buying the best server platform for their needs. Below, we’ll cover platform selection, aspects of virtualisation, choosing heterogeneous hardware over one vendor, processor selection and the importance of server building blocks.
Server platform selection

Server refresh projects are an opportune time to review whether an alternative form factor would be more beneficial, and whichever product you select should become the standard. For example, if you opt for a blade server platform, strongly consider implementing a standards policy that stipulates blade servers as the standard.
Most organisations now deploy server virtualisation, so be sure you have a platform that meets the functional requirements of a virtualised estate.
Virtualisation and the right hardware
A key driver for refreshing server hardware is the ability to replace ageing hardware with newer servers whose chipset-level functionality widens the scope for virtualisation. When refreshing older virtualisation hosts, your platform design plan should ensure that servers are appropriately specified to virtualise more intensive workloads.
From a technical perspective, consolidation of more resource-intensive workloads requires additional hardware resource components to support virtual machines. So be sure to choose a platform that supports greater consolidation ratios and additional growth.
Select a platform with evenly balanced CPU and RAM sizing. Without this balance, you risk having insufficient I/O capacity to serve resources to the virtual machines on a single host. On paper, a single CPU with eight logical cores may seem to offer performance similar to your original eight-physical-CPU server, but on closer review you’ll find the sizing parameters are unevenly balanced. Design the hardware with additional components to restore that balance.
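To illustrate the balance point, here is a minimal Python sketch (all host and VM figures are hypothetical, not vendor recommendations) that computes the VM capacity implied separately by CPU and by RAM; a large gap between the two numbers signals an unbalanced specification.

```python
# Illustrative sizing sketch: the VM count a host can support is bounded
# by whichever resource (CPU or RAM) runs out first. All figures are
# hypothetical examples, not vendor recommendations.

def max_vms(host_cores, vcpu_ratio, host_ram_gb, ram_headroom,
            vm_vcpus, vm_ram_gb):
    """Return the VM capacity implied by CPU and by RAM separately."""
    cpu_bound = (host_cores * vcpu_ratio) // vm_vcpus
    ram_bound = int(host_ram_gb * ram_headroom) // vm_ram_gb
    return cpu_bound, ram_bound

# A 16-core host with 64 GB RAM, 4:1 vCPU oversubscription,
# 90% of RAM usable, hosting 2-vCPU / 8 GB VMs:
cpu_bound, ram_bound = max_vms(16, 4, 64, 0.9, 2, 8)
print(cpu_bound, ram_bound)  # CPU allows 32 VMs, RAM only 7: unbalanced
```

In this example RAM, not CPU, is the binding constraint, so adding memory (or choosing a smaller CPU) would rebalance the host.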
Storage is also crucial and deserves its own design process to ensure that the storage platform can satisfy workload requirements. You may need enterprise flash disks and different RAID configurations to satisfy high IOPS requirements.
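As a rough illustration, the commonly cited RAID write penalties (2 for RAID 10, 4 for RAID 5, 6 for RAID 6) can translate a frontend workload into the backend IOPS the disks must deliver; the workload figures below are hypothetical.

```python
# Back-of-envelope backend IOPS estimate using the commonly cited RAID
# write penalties. Frontend workload figures are hypothetical.

RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def backend_iops(frontend_iops, write_fraction, raid_level):
    """Translate a frontend workload into the IOPS the disks must serve."""
    writes = frontend_iops * write_fraction
    reads = frontend_iops - writes
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

# 5,000 frontend IOPS with 30% writes:
for level in RAID_WRITE_PENALTY:
    print(level, backend_iops(5000, 0.3, level))
```

The same frontend load demands markedly more backend IOPS under RAID 5 or RAID 6 than under RAID 10, which is why the RAID choice belongs in the storage design process rather than as an afterthought.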
As with storage design and connectivity, network connectivity in highly dense virtualised environments requires investment in much higher-bandwidth interface cards and back-end network throughput to support the increased demand. Consolidation may also require network cards that support direct hardware access from virtual machines.
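A back-of-envelope sizing sketch, with an assumed planning margin and illustrative traffic figures, shows why dense consolidation quickly outgrows 1 GbE:

```python
def host_bandwidth_gbps(vms_per_host, avg_vm_mbps, overhead=1.25):
    """Aggregate per-VM traffic with headroom for bursts and migration
    traffic (the overhead factor is an assumed planning margin)."""
    return vms_per_host * avg_vm_mbps * overhead / 1000

# 40 consolidated VMs averaging 300 Mbps each (illustrative figures):
print(host_bandwidth_gbps(40, 300))  # 15.0 Gbps: beyond 1 GbE, plan 10 GbE
```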
Heterogeneous hardware versus one vendor
Server vendor selection can be treacherous, and weighing the pros and cons makes the choice even more difficult. Strategic decisions can also be influenced by how an organisation’s IT and business are structured: some companies lack a centralised architectural function, and IT may well be siloed. When selecting hardware, do not lose sight of these organisational considerations.
External influences may also have a bearing on strategic selection, such as a single-vendor procurement framework agreement that more or less dictates a single-vendor strategy. At the other extreme, in larger organisations or groups of companies, IT may be organised in a decentralised fashion, where each IT division is allocated its own budget to spend with whichever hardware vendor it feels is suitable. Some IT departments may be agile enough to juggle different hardware and management tools, or may have adopted open source management tools that are hardware-agnostic and rely on open-standard management interfaces.
Whether data centre managers are considering a server refresh requirement for a physical or a virtualised environment, they should always draft a governance policy that outlines why they have selected such a strategy.
Managers should educate stakeholders, demonstrate the benefits of the selected strategy and gain their approval; after all, stakeholders will ultimately decide on the approach because they control the budget. When educating stakeholders, describe the benefits in terms they’ll understand.
Establishing a single processor-vendor policy is important for performance and functionality. It also ensures technical compatibility and reduces the number of OS builds to define and maintain. In virtual environments, processor selection is key for compatibility: single-vendor CPUs are still required for effective live migration between hosts. In short, to truly gain the benefits of a dynamic server virtualisation environment, a standardised CPU model policy must be strictly enforced.
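A simple inventory check along these lines can flag mismatches before live migration is enabled; the host names and CPU strings below are purely illustrative.

```python
# Hypothetical inventory check: before enabling live migration across a
# cluster, verify every host reports the same CPU vendor and model.
# Host names and CPU strings are illustrative, not from any real estate.

def migration_compatible(inventory):
    """True only if all hosts share one CPU vendor/model string."""
    return len(set(inventory.values())) == 1

cluster = {
    "host-01": "VendorA Model-X 2.6GHz",
    "host-02": "VendorA Model-X 2.6GHz",
    "host-03": "VendorA Model-Y 3.0GHz",  # mismatch blocks live migration
}
print(migration_compatible(cluster))  # False
```

In practice, hypervisor vendors offer compatibility-masking features to smooth over minor generational differences, but these cannot bridge different CPU vendors, which is why the standardisation policy matters.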
The technology selection phase may be the time to evaluate whether migration to an alternative chipset makes sense. Consider the degree of effort and disruption in migrating from vendor A to vendor B. In virtualised environments, consider architectural design factors such as overall cluster configuration and design.
Server building blocks

Server building blocks enable you to add or modify a unit’s functionality, such as unit size or height, total CPU capacity and amount of RAM.
Building-block libraries simplify server platform selection for infrastructure design phases, procurement activity and OS build details. Building blocks also clarify server options and help benchmark cost. It can be extremely difficult to get the best value from suppliers if you don’t have a standardised baseline for comparing prices.
A building-block offering typically comprises three or four server models. Once defined and agreed upon, building blocks can be published as the standard for all teams. A building-block approach benefits architectural design teams by enabling a single common OS image based on the standardised server hardware. Lastly, it greatly benefits data centre facility teams by establishing known key measurements, such as power, rack space and cabling requirements.
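A building-block library can be sketched as a small catalogue that carries the facility-planning figures alongside the server specs; all model names and numbers here are illustrative assumptions, not real vendor data.

```python
# Sketch of a published building-block library: three standard models,
# each carrying the facility figures (rack units, power draw) the data
# centre team needs. All specifications are illustrative examples.

BUILDING_BLOCKS = {
    "small":  {"cores": 8,  "ram_gb": 64,  "rack_u": 1, "watts": 350},
    "medium": {"cores": 16, "ram_gb": 128, "rack_u": 2, "watts": 600},
    "large":  {"cores": 32, "ram_gb": 512, "rack_u": 2, "watts": 900},
}

def rack_plan(order):
    """Total rack units and power draw for an order like {'small': 4}."""
    units = sum(BUILDING_BLOCKS[m]["rack_u"] * n for m, n in order.items())
    watts = sum(BUILDING_BLOCKS[m]["watts"] * n for m, n in order.items())
    return units, watts

print(rack_plan({"small": 4, "medium": 2}))  # (8, 2600)
```

Because every team orders from the same short catalogue, procurement can benchmark prices against a fixed baseline and facilities can plan power and rack space before the kit arrives.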
Define the building blocks starting from a smaller, low-density sizing and work towards medium-density sizing. This isn’t to say you shouldn’t document building blocks for larger servers, but treat them as the exception.
Daniel Eason is a UK-based infrastructure architect at a multinational company. He also has a personal technology blog: http://www.vmlover.com.