Book review: How to Measure Anything, by Douglas W Hubbard

Executives in business and government often make decisions that cost shareholders and taxpayers billions of pounds, but few know the true value of the outcome. They believe, wrongly, that benefits such as quality, security, reputation, brand, innovation or flexibility cannot be measured.

How to Measure Anything, by Douglas Hubbard, is the book for anyone who wants to know how to measure the value of information or any other intangible asset.

Hubbard, formerly a business analyst with accounting firm Coopers & Lybrand, is the inventor of Applied Information Economics (AIE), a method of evaluating choices where the risk and outcome of the decision are uncertain and potentially expensive.

This book, plus the spreadsheets from the companion website, set out the theory and practice of AIE in terms that can be grasped and applied quickly.

The value of new information

Hubbard's thesis is simple: in an uncertain situation, relevant new information reduces uncertainty, and its value is determined by how much it reduces the chance of being wrong times the cost of being wrong.

Thus, if market research could raise from 50% to 75% the chance that a new product feature costing £100,000 to roll out would lift sales, it would be worth spending up to £25,000 on that research.
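
The arithmetic behind that example can be sketched in a few lines (the figures come from the example above, not from Hubbard's own spreadsheets):

```python
# Value of information = reduction in the chance of being wrong
# multiplied by the cost of being wrong.
cost_of_being_wrong = 100_000   # cost of rolling out the feature (£)
p_wrong_before = 1 - 0.50       # 50% chance sales rise, so 50% chance of a wasted roll-out
p_wrong_after = 1 - 0.75        # market research lifts the chance of success to 75%

value_of_information = (p_wrong_before - p_wrong_after) * cost_of_being_wrong
print(f"Worth up to £{value_of_information:,.0f} for the research")  # £25,000
```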

Hubbard's point is that the tools actuaries and bookmakers use to determine the monetary value of a human life, a pension fund or the winner of the 3.25pm race at Epsom are equally applicable elsewhere.

In fact, many of us use this approach implicitly when we buy a house close to a "good" school to improve our children's chance of getting a good education. Hubbard aims to allow us to make explicit financial estimates of the value of such outcomes.

Hubbard breaks the book down into three main parts: theory, tools and examples. There are frequent examples from real life, in particular investments in IT systems, that show the theory applied in practice.

In one example, he shows how it was possible to give a monetary value to an improvement in public health derived from software used to monitor water quality. In another example, Hubbard shows how US Marines fighting in Iraq were able to improve their fuel distribution system, reducing risk and saving millions of dollars.

Although the book shows how and when to use various statistical tools, Hubbard reassures readers that it often takes very little analysis to reduce uncertainty.

"Readers just need some aptitude for clearly defining problems," he says. For example, he says, "There is a 93% chance that the median of a population is between the smallest and largest values in any random sample of five from that population."
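
This "Rule of Five" is easy to verify: the median is only outside the sample's range if all five draws land on the same side of it, which happens with probability 2 × 0.5⁵ = 6.25%. A quick Monte Carlo check (my own sketch, not the book's code) confirms it:

```python
import random

# Simulate the "Rule of Five": how often does the population median
# fall between the smallest and largest of a random sample of five?
# Analytically the answer is 1 - 2 * 0.5**5 = 93.75%.
random.seed(0)
population = list(range(100_001))   # any distribution works; the median here is 50,000
median = 50_000

trials = 100_000
hits = sum(
    min(s) <= median <= max(s)
    for s in (random.sample(population, 5) for _ in range(trials))
)
print(f"Median inside the sample range in {hits / trials:.1%} of trials")
```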

Hubbard says if we care about a result, we must be able to detect the result. By this, he means that there must be an observable difference between the before and after situations following a decision. If that is true, then it is detectable as an amount or a range of possible amounts. If that is true, then it can be measured.

Using this system of metrics, the value of IT security quickly evolves into a more precise definition of the actual risk of a threat event happening, plus the costs of the disruption. This can be measured against the cost to detect, prevent and train to avert the threat event.

Hubbard says that people often underestimate how much they already know or can guess about an "intangible". However, with a little practice using the supplied exercises, most will quickly sharpen their ability to lay accurate odds.

Measuring variables

Hubbard shows that many variables may affect a situation, but only very few truly count towards the outcome. Those that do are often surprising.

One way to narrow the range of variables is to ask, "If X is true, then what outcome should I see?". Hubbard shows that the economic value of measuring a variable is usually inversely proportional to how much measurement attention it actually gets.

He also shows that the law of diminishing returns quickly affects new information about the key variables. So although he advocates iterative measurements to reduce uncertainty, it seldom takes more than two measurements before the value of new information drops to uneconomic levels.

The simplicity of Hubbard's approach is deceptive. Not only are his tools robust, but what he advocates is by no means impossible to do. The main obstacle is likely to be ingrained beliefs about the measurability of intangibles.

Given what is at stake in strategic decisions, AIE seems worthwhile. Perhaps the first test case for AIE would be to calculate its own value as a decision-making tool.

How to get to grips with intangibles

● If the intangible is really that important, then it will be something you can define. If it is something you think exists at all, then it is something you have already observed somehow.

● If the intangible is something important and uncertain, then you have a cost of being wrong and a chance of being wrong.

● You can quantify your uncertainty with calibrated estimates.

● You can compute the value of additional information by knowing the "threshold" of the measurement, meaning the point at which you might change your decision.

● Once you know what it is worth to measure something, you can put the measurement effort in context and decide how much effort to spend.

● Knowing just a few methods for random sampling or controlled experiments, or even just improving your ability to make informed judgement calls, can significantly reduce uncertainty.
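
The last point, that small samples buy most of the uncertainty reduction, can be illustrated with a minimal sketch (my own, not from the book): the width of a confidence interval around a sample mean shrinks roughly with the square root of the sample size, so the early observations do most of the work.

```python
import random
import statistics

# Illustrative sketch: a 90% confidence-style interval around a sample
# mean narrows with the square root of the sample size, so each
# quadrupling of effort only halves the remaining uncertainty.
random.seed(1)
population = [random.gauss(500, 100) for _ in range(100_000)]

for n in (5, 20, 80, 320):
    sample = random.sample(population, n)
    mean = statistics.mean(sample)
    stdev = statistics.stdev(sample)
    half_width = 1.645 * stdev / n ** 0.5   # normal approximation
    print(f"n={n:>3}: mean ~ {mean:6.1f} +/- {half_width:5.1f}")
```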

Source: How to Measure Anything, Douglas Hubbard
