(If you haven’t already read them, you might like to take a look at Part 1: The webstats legacy and Metrics, Part 2: Are we measuring the right things?)
It’s never been more true: just because we can measure something doesn’t mean we should. The temptation to amass as many stats as possible about our social media projects, in the hope that somewhere in the numbers lies enlightenment, is almost irresistible. Instead, we need to do the opposite: measure only the things that can tell us something useful. And some of those measurements may not actually come from social media at all.
To know what to measure, we first need to understand the strategic goals of the project. This is the 60,000 ft view, the “We want increased profitability” or “We want to be more productive” view. These aren’t easily measured directly. Profitability, for example, may be improved by a whole host of actions taken by the company as well as by market forces, so teasing out which bit is down to a specific social media project could be very difficult.
Instead, strategic goals provide us with a context for tactical goals. Increased productivity, for example, may mean decreasing email use, decreasing hours spent in meetings, improving collaboration, improving communication, decreasing duplicated projects, and improving employee engagement.
Of these tactical goals, some are easier to measure than others. Leisa Reichelt has written a great post on the importance of measurement and criteria for success in which she says:
> Some success criteria are immediately apparent and easy to measure, for example return visitors, increased membership, activity or sales. Ideally you want to put some numbers around what you’d consider would define this project as ‘successful’, but even just identifying the metrics that you will use to judge the success of the project is a good start.
>
> Some success criteria are less easy to ‘measure’ but don’t let that discourage you. Often for these kinds of criteria I’ll use a round of research to determine whether or not we’ve been successful – those things that are difficult to quantify are often quite easy to examine using qualitative research. I find myself more and more using a last round of research to ‘check off’ the less quantifiable success criteria for projects.
I think of these two types of success criteria as objective and subjective:
- Objective criteria map fairly cleanly to something you can measure. For example, you can measure how many emails are sent and received and so can see if your social media project is reducing email flow.
- Subjective criteria do not map cleanly to any metric. For example, it’s hard to define, let alone measure, collaboration.
Sometimes one can get creative around subjective criteria and create a new metric that can shed light on matters, but often there isn’t much more than gut feeling to go on. In that case, it is worth asking our gut how it feels on a regular basis so that we can at least look back dispassionately rather than trying to remember how things felt six months ago. (More on this in a later post.)
For all measures, it’s important to understand what the numbers are really telling you and to discard any measurements that could mislead (cf. Part 2).
A good workflow for this whole process might be:
- Set out strategic and tactical goals
- List objective and subjective criteria for success
- Map criteria to measurable metrics
- Discard misleading metrics
- Discard unimportant metrics
- Identify desired trends
- Start measuring
One word of warning: beware numerical targets. It’s often not possible to know how big a change you need to create in order to meet your goals. And in many cases, social tools scale best when they scale slowly. Rapid change can even destroy the very thing you’re trying to create (especially when you’re looking at community building). Numerical targets are often nothing better than fairytales that may or may not one day resemble reality.
The final thing to remember is to start taking measurements before the project launches. It might seem like a no-brainer, but in my experience it’s common for companies to forget that without a baseline describing the starting conditions, there’ll be nothing to compare the results to.