The demand for data centre capacity worldwide has led to a sharp rise in IT costs and a steady increase in carbon emissions. A new efficiency metric provides companies with a clear yardstick for measuring progress, write William Forrest, James M Kaplan and Noah Kindler.
The modern corporation runs on data. Data centres house the thousands of servers that power applications, provide information, and automate a range of processes. There has been no letup in the demand for data centre capacity, and the power consumed as thousands of servers churn away is responsible for rising operating costs and steady growth in worldwide greenhouse gases.
Our work suggests that companies can double the energy efficiency of their data centres through more disciplined management, reducing both costs and greenhouse gas emissions. In particular, companies need to manage technology assets more aggressively so that existing servers can work at much higher usage levels. They also need to improve forecasting of how business demand drives application, server, and data centre-facility capacity, so they can curb unnecessary capital and operating spending.
Data centre efficiency is a strategic issue. Building and operating these centres consumes ever-larger portions of corporate IT budgets, leaving less available for high-priority technology projects. Data centre build programmes are board-level decisions. At the same time, regulators and external stakeholders are taking a keen interest in how companies manage their carbon footprints. Adopting best practices will not only help companies reduce pollution but could also enhance their image as good corporate citizens.
A costly problem
Companies are performing more complex analyses, customers are demanding real-time access to accounts, and employees are finding new, technology-intensive ways to collaborate. As a result, demand for computing, storage, and networking capacity continues to increase even as the economy slows. To cope, IT departments are adding more computing resources, with the number of servers in data centres in the United States growing by about 10 per cent a year.
At the same time, the number of data centres is rising even more swiftly in emerging markets such as China and India, where organisations are becoming more complex and automating more operations and where, increasingly, outsourced data operations are located. This inexorable demand for computing resources has led to the steady rise of data centre capacity worldwide. The growth shows no sign of ending soon, and typically it only moderates during economic down cycles.
This growth has led to a sharp rise in IT costs (Exhibit 1). Data centres typically account for 25 per cent of total corporate IT budgets when the costs of facilities, storage devices, servers, and staffing are included. That share will only increase as the number of servers grows and the price of electricity continues to climb. The cost of running these facilities is rising by as much as 20 per cent a year, far outpacing overall IT spending, which is increasing at a rate of 6 per cent.
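The squeeze implied by those growth rates can be sketched with simple compounding. The projection below is hypothetical: it uses the article's figures of a 25 per cent starting share and 20 per cent annual growth in data centre costs, and assumes, for simplicity, that the rest of the IT budget grows at roughly the overall 6 per cent rate (the article does not break out the component rates).

```python
# Hypothetical projection of the data centre share of the IT budget.
# Assumptions (from the article, plus one simplification):
#   - data centres start at 25% of the budget
#   - data centre costs grow ~20% a year
#   - the rest of the budget grows at ~6% a year (simplification)

def project_share(years, dc_share=0.25, dc_growth=0.20, other_growth=0.06):
    dc, other = dc_share, 1.0 - dc_share  # normalised budget units
    for _ in range(years):
        dc *= 1 + dc_growth        # data centre costs compound faster
        other *= 1 + other_growth  # remaining IT spending compounds slower
    return dc / (dc + other)       # data centre share of the total

for y in (0, 5, 10):
    print(y, round(project_share(y), 2))  # 0 0.25 / 5 0.38 / 10 0.54
```

On these assumptions the data centre share passes half the budget within about a decade, which is why a widening gap between 20 per cent and 6 per cent growth rates matters even from a modest starting share.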
Spending increases on data centres are reshaping the economics of many businesses, particularly those that are intensive users of information, such as finance, information services, media, and telecoms. The investment required to launch a large-enterprise data centre has risen to $500m, from $150m, over the past five years. The price tag for the biggest facilities at IT-intensive businesses is approaching $1bn. This spending is diverting capital from new product development, making some data-intensive products uneconomic, and squeezing margins. The environmental consequences are also stark, as rising power consumption creates a large and expanding carbon footprint. For most service sectors, data centres are a business's number-one source of greenhouse gas emissions. Between 2000 and 2006, the amount of energy used to store and handle data doubled, with the average data facility using as much energy as 25,000 households.
Already, the world's 44 million servers consume 0.5 per cent of all electricity produced, with data centre emissions now approaching those of entire countries such as Argentina or the Netherlands. In the United States alone, growth in electricity used by data centres between now and 2010 will be the equivalent of 10 new power plants. Without efforts to curb demand, current projections show worldwide carbon emissions from data centres quadrupling by 2020.
Regulators have taken note of these developments and are pressing companies for solutions. In the United States, the Environmental Protection Agency (EPA) has proposed that large data centres use energy meters as a first step toward creating operating-efficiency standards. The European Union, meanwhile, has issued a voluntary code of conduct laying out best practices for running data centres at higher levels of energy efficiency. Government pressure to reduce emissions will likely increase as data centre emissions continue to rise.
In information-intensive organisations, decisions affecting the efficiency of data centre operations are made at many levels. Financial traders choose to run complex Monte Carlo analyses, while pharmaceutical researchers decide how much imaging data from clinical trials they want to store. Managers who develop applications decide how much programming it will take to meet these demands. Those managing server infrastructure decide on equipment purchases. Facilities directors decide on data centre locations, power supplies, and the time frame for installing equipment ahead of predicted demand.
William Forrest is an associate principal in McKinsey's Chicago office. James Kaplan, a principal in the New York office, leads the technology infrastructure practice for the global IT group. Noah Kindler is a consultant in the New York office.
• The authors would like to recognise the important contributions of Kenneth Brill and The Uptime Institute to the development of this article and its recommendations. The Institute provided insight based on many years of experience, as well as proprietary data and analysis.