Recently in Measurement Category

Life is not a marathon, it's a series of sprints


If you can get past the slightly rambling intro, this conversation between Jonathan Fields and Tony Schwartz is a fascinating look at what's wrong with the way we currently tend to work. It really starts to get interesting about 8 minutes in.

Although the conversation focuses on American business and culture, pretty much everything they say applies to British and European work culture too.

One important idea they discuss, and something I've found essential myself, is the idea of pulsing or sprinting when working: to focus for a while and then relax for a bit. This idea is common in athletics, where it's called the work-rest ratio: "It's as important to renew energy as it is to spend energy if [you] want to be a consistently great [athletics] performer."

We forget too easily that the brain is an organ that requires periods of replenishment as much as muscles do. If you work your muscles too hard, they ache, so we learn very early on not to overdo it. Yet we expect our brain to perform at maximum capacity, consistently, throughout our workday. It's just not possible, yet we don't allow for this fact in the way that we work.

Schwartz also says, "It's not the number of hours people work that matters, it's the value they produce during the hours they work, so stop worrying about how many hours that person spends at their desk, and start figuring out, What can I do to help this person design his life so that when he's working or she's working, she's really working?"

To me this is the essence of what social media in business is all about. We, as humans, work better when we are socially connected. It fulfils a fundamental human need to be part of a group whose whole is bigger than the sum of its parts. Social media also provides ways to communicate and collaborate more effectively and more easily, to benefit from the wisdom in the crowd. As we become more enmeshed in our community, so our ability to solve problems by drawing upon the resources of that community increases.

Social media is, at the moment, only doing a fraction of what it could for business. It's an area full of potential and as we start to marry technology, psychology, business and human nature together, we are beginning to find ways to unlock our potential, not just as individuals but as members of a huge social gestalt.

Most businesses using social media at the moment are dabbling, going for the easy, obvious wins like marketing or some internal Wikipedia clone. We need more business executives to be brave, to think about their business as a multi-human organism that has its own needs and that isn't being properly fed by current business practices and cultures.

When I look at what could be done, how we could use social media to really change our work environments into something more effective, more enjoyable, I really do think we have a long, long road ahead of us. Change is often slow and incremental. We need some businesses to take a deep breath and leap, to remake their internal culture, to be more human, using social media as the agent of change.

But ultimately, I think what we'll see is the old cultures dying off as new, nimble, socially aware businesses rise up in their stead. This new era of socially capable business is only just now dawning.

Conflict of interest: Success vs the user


I'm very wary of what sort of metrics and definitions of success are used to decide whether a project is working or not. Too often, the wrong metrics and definitions are used, resulting in bad managerial decisions based on flawed assumptions.

A couple of good posts about how metrics and definitions of success (and, therefore, business models) can work against the user: OKCupid talks about why you should never pay for online dating, and Joshua Porter points out a paragraph in one of Mike Davidson's posts which explains why companies' iPhone/iPad apps are often better than their websites. In short, on a mobile app they don't have the opportunity to finagle the user experience to artificially bump up their metrics.

In both cases, you have a situation where the metrics and definitions of success upon which the business model relies distort the user experience by forcing users to take actions which are not necessarily in their best interests. Indeed, in these cases, a swift and satisfying experience for the user is damaging to the business providing it.

When you're putting together a social media project, think first about what the most beneficial outcome for your users would be. Then figure out how it can form the basis of a business model (hint: your income/ROI may be orthogonal to your desired user outcome), and then how that can be measured.

Do not start with a metric, build a business model on top of it, and then force the user to have a shoddy experience for the sake of your bottom line. And yes, this applies just as much to enterprise social media as any other sort. Don't start thinking that 'number of edits' on a wiki is a definition of success, because that just means you'll push people into more pointless editing and will take your focus off signs of real success, e.g. people being able to achieve their goals more quickly and more efficiently.

Fun makes for passionate users


How much enterprise software is truly fun to use? Aarron Walter discusses the importance of fun in his article Emotional Interface Design: The Gateway to Passionate Users. It's a very interesting read with some enlightening examples.

But to take the ball and run with it a bit, I think 'fun' is one reason that people who use social media can get so passionate about it. We engage much more with tasks that are fun and enjoyable, and we work better on projects where we are working with people who are fun. Just think about the tasks on your to-do list, and think about the ones that you find fun. I bet they're the ones you actually want to do!

For me, blogging is fun. Working on a wiki is fun. Setting up a Kickstarter project is fun. Heaven forfend, but I even like playing with numbers in spreadsheets on Google Docs. (Don't tell anyone, but I love setting up spreadsheets with formulas that suck data from one cell, transform it in some way and then spit out a number in another.)

Putting my numbergeekiness aside, the one thing those tools have in common is the presence of other people. The fun to be had in writing a blog increases the more other people engage with it. Wikis are both productive and fun when you're working with other people on achieving a shared goal. Kickstarter is fun not just because it offers the opportunity to do cool projects, but because you're doing that cool project with the support of other people. Google Docs allows me to collaborate with other people and even discuss the document in real time whilst we're working on it.

Other people make things fun. Fun things are things we want to do, and keep on doing. The more we enjoy a task, the better we get at doing it, and the more efficient and productive we become.

Which raises the question: Can we make work more fun? Of course we can. And we should.

The Tyranny of the Explicit


Johnnie Moore has a great podcast episode talking with Viv McWaters and Roland Harwood about how an undue focus on metrics can get in the way of real thought and understanding. I see this frequently myself when people want to focus on 'return on investment' or 'success metrics' for social media at the cost of understanding the intangible results, which are actually more important than the measurable ones. There are some great nuggets, so it's well worth listening to. I particularly liked Johnnie's discussion of how learning has become codified in unrealistic ways, and how that relates to best practice documents that don't get practised.

The lure of the partial post


Friend and colleague Stephanie Booth writes about the blogazine, which I've covered here already, and the frustration she feels when faced with blogs that only post excerpts to their front page (and, I'd add, RSS feeds). I want to pick up on the point about partial posts and want to say in no uncertain terms:

Partial posts or excerpts are bad practice.

They are bad practice for media outlets, but they are especially bad practice for business blogs. As Steph says, partial posts put a barrier between your content and your readers and although it's a low barrier, just a click high, it's still a barrier. Trying to artificially inflate page views by forcing people to click through from the front page, or from RSS, is nothing more than an attempt to fake greater popularity. It doesn't mean that you actually have more readers, just that they have to click twice. Like Steph, I seriously doubt that it makes any difference to SEO, and if you're willing to sacrifice user experience for a potentially tiny bump in your search engine ranking, what does that say about how you treat your customers?

Metrics, Part 4: Subjective measurements


(If you haven't already read them, you might like to take a look at Metrics, Part 1: The webstats legacy, Metrics, Part 2: Are we measuring the right things? and Metrics, Part 3: What are your success criteria?)

In the last instalment of this series I mentioned that sometimes there just aren't objective metrics that we can use to help us understand the repercussions of our actions. Yet much of what we try to achieve with social media projects is exactly this sort of unmeasurable thing.

No amount of understanding of page views, for example, is going to tell us how the people who have viewed that page feel about it. Did they come because they were interested? Or because they were outraged? Is your comment community a healthy one or a pit of raging hatred? Are your staff better able to collaborate now you have a wiki or are they finding it difficult to keep another datastore up to date?

There are two ways round this:

  • Surveys
  • Subjective measurement scales

Surveys are sometimes the only way you can get a sense for how well a social media project is going. All the metrics in the world won't tell you if your staff are finding their internal blogs useful or burdensome. Random anecdotes are liable to mislead as you'll end up relying on either the vocal evangelists who will give you an overly rosy picture, or the vocal naysayers who will give you an overly pessimistic picture. The truth is likely to be in the middle somewhere, and the only way that you can find out where is to ask people.

Survey questions need to be very carefully constructed, however, to ensure that they are not leading people to answer a certain way. At the very least, make sure that questions are worded in a neutral way and that you cover all bases for the answer options you give. Test and retest surveys as it's so easy to get something crucial wrong!

The second way to try and measure subjective metrics is to create a scale and regularly assess activity against that scale. If you were assessing the comments in your customer-facing community, for example, you might consider a scale like this:

  • ★★★★★: Lively discussion, readers are replying to each other, tone is polite, constructive information is shared
  • ★★★★: Moderate amount of discussion, readers replying to each other, tone is polite, some useful information shared
  • ★★★: Little discussion, readers reply only to the author, tone is mainly polite, not much information shared
  • ★★: Discussion is moribund OR tone is negative and impolite, no information shared
  • ★: Abusive discussion OR discussion is just a torrent of "me too" comments
  • No stars: No discussion

The idea here isn't to create an enormous cognitive load but to try and have a consistent understanding of what we mean when we rate something 3 out of 5. This means keeping scales clear and simple, and avoiding any ambiguity such as language which could be misunderstood or which has an inherent value judgement that could sway an assessment.

I would also suggest having a varied group of people rate activity on a regular basis and then averaging their scores. That should smooth out variation in how individuals interpret the scale, as well as differences of personal opinion.
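To make that concrete, here's a minimal sketch of averaging several people's ratings on a 0 to 5 scale and tracking the trend over time. All the weeks and scores here are hypothetical examples, not real data:

```python
from statistics import mean

# Hypothetical ratings: each list holds the scores that three
# raters gave the community's discussion for one week, using the
# 0-5 scale described above.
weekly_ratings = {
    "2010-W14": [4, 3, 4],
    "2010-W15": [3, 3, 2],
    "2010-W16": [5, 4, 4],
}

# Averaging across raters smooths out individual interpretation
# of the scale before you look at the trend over time.
trend = {
    week: round(mean(scores), 1)
    for week, scores in sorted(weekly_ratings.items())
}

for week, score in trend.items():
    print(week, score)
```

Even a simple spreadsheet doing this is enough; the important part is that the same group rates consistently, week after week, so the trend means something.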

Again, I'm going to stress that both these methods need to be put in place and measurement started before a project begins. Thinking ahead is just so worth the effort.

In all honesty, I've never had a client do either surveys or subjective scales, mainly because none of them has ever given enough thought to metrics before starting a project. It's a shame because, with services like Survey Monkey, it's really not hard to do.

Metrics, Part 3: What are your success criteria?


(If you haven't already read them, you might like to take a look at Part 1: The webstats legacy and Metrics, Part 2: Are we measuring the right things?)

It's never been more true to say that just because we can measure something it doesn't mean we should. The temptation to amass as many stats as possible about our social media projects, in the hope that somewhere in the numbers lies enlightenment, is almost irresistible. Instead, we need to do the opposite: Measure only the things that can tell us something useful. And some of those measurements may not actually come from social media at all.

To know what to measure, we first need to understand the strategic goals of the project. This is the 60,000 ft view, the "We want increased profitability" or "We want to be more productive" view. These aren't easily measured directly. Profitability, for example, may be improved by a whole host of actions taken by the company as well as by market forces, so teasing out which bit is down to a specific social media project could be very difficult.

Instead, strategic goals provide us with a context for tactical goals. Increased productivity, for example, may mean decreasing email use, decreasing hours spent in meetings, improving collaboration, improving communication, decreasing duplicated projects, and improving employee engagement.

Of these tactical goals, some are easier to measure than others. Leisa Reichelt has written a great post on the importance of measurement and criteria for success in which she says:

Some success criteria are immediately apparent and easy to measure, for example return visitors, increased membership, activity or sales. Ideally you want to put some numbers around what you'd consider would define this project as 'successful', but even just identifying the metrics that you will use to judge the success of the project is a good start.

Some success criteria are less easy to 'measure' but don't let that discourage you. Often for these kinds of criteria I'll use a round of research to determine whether or not we've been successful - those things that are difficult to quantify are often quite easy to examine using qualitative research. I find myself more and more using a last round of research to 'check off' the less quantifiable success criteria for projects.

I think of these two types of success criteria as objective and subjective:

  • Objective criteria map fairly cleanly to something you can measure. For example, you can measure how many emails are sent and received and so can see if your social media project is reducing email flow.
  • Subjective criteria do not map cleanly to any metric. For example, it's hard to define, let alone measure, collaboration.

Sometimes one can get creative around subjective criteria and create a new metric that can shed light on matters, but often there isn't much more than gut feeling to go on. In that case, it is worth asking our gut how it feels on a regular basis so that we can at least look back dispassionately rather than trying to remember how things felt six months ago. (More on this in a later post.)

For all measures, it's important to understand what the numbers are really telling you and to discard any measurements that could be in any way misleading (cf Part 2).

A good workflow for this whole process might be:

  • Set out strategic and tactical goals
  • List objective and subjective criteria for success
  • Map criteria to measurable metrics
  • Discard misleading metrics
  • Discard unimportant metrics
  • Identify desired trends
  • Start measuring
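The middle steps of that workflow can be sketched as data: map each success criterion to candidate metrics, flag the misleading and unimportant ones, and keep only what survives. All the criteria and metric names below are hypothetical examples, not recommendations for any particular project:

```python
# Hypothetical mapping of success criteria to candidate metrics.
# Each metric carries flags set during the 'discard' steps.
criteria = {
    "reduce email volume": [
        {"metric": "emails sent per employee per week",
         "misleading": False, "important": True},
        {"metric": "total intranet hits",
         "misleading": True, "important": False},
    ],
    "improve collaboration": [
        {"metric": "wiki edits per user",
         "misleading": True, "important": False},
        {"metric": "survey: ease of finding a collaborator",
         "misleading": False, "important": True},
    ],
}

# Keep only metrics that are important and not misleading.
to_measure = {
    criterion: [m["metric"] for m in metrics
                if m["important"] and not m["misleading"]]
    for criterion, metrics in criteria.items()
}

print(to_measure)
```

The point of writing it down like this, even informally, is that the discard decisions become explicit and reviewable rather than implicit in whichever dashboard someone happened to set up.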

One word of warning: Beware numerical targets. It's often not possible to know how big a change you need to create in order to meet your goals. And in many cases, social tools scale best when they scale slowly. Rapid change can even destroy the very thing you're trying to create (especially when you're looking at community building). Numerical targets are often nothing better than fairytales that may or may not one day resemble reality.

The final thing to remember is to start taking measurements before the project launches. It might seem like a no-brainer, but in my experience it's common for companies to forget that without information on starting conditions, there'll be nothing to compare to.

Metrics, Part 2: Are we measuring the right things?


(If you haven't already read it, you might like to take a look at Part 1: The Webstats Legacy.)

Anand Giridharadas asks in the New York Times, Are metrics blinding our perception? Giridharadas begins by talking about the Trixie Telemetry company, which takes data about a baby's naps, nappy changes and feed times and turns it into charts, graphs and analyses to "help parents make data-based decisions". He then goes on to say:

Self-quantification of the Trixie Telemetry kind is everywhere now. Bedposted.com quantifies your sexual encounters. Kibotzer.com quantifies your progress toward goals like losing weight. Withings, a French firm, makes a Wi-Fi-enabled weighing scale that sends readings to your computer to be graphed. There are tools to measure and analyze the steps you take in a day; the abundance and ideological orientation of your friends; the influence of your Twitter utterances; what you eat; the words you most use; your happiness; your success in spurning cigarettes.

Welcome to the Age of Metrics -- or to the End of Instinct. Metrics are everywhere. It is increasingly with them that we decide what to read, what stocks to buy, which poor people to feed, which athletes to recruit, which films and restaurants to try. World Metrics Day was declared for the first time this year.

But measure the wrong thing and you end up doing the wrong thing:

Will metrics encourage charities to work toward the metric (acres reforested), not the underlying goal (sustainability)? [...] Trees are killed because the sales from paper are countable, while a forest's worth is not.

The same is true in social media. Count the wrong thing and you'll do the wrong thing. As Stephanie Booth says, in the second video in this post:

As soon as you start converting behaviours into numbers then people adapt their behaviour to have good numbers.

She goes on to say that some of her clients believe that the number of comments they have on a blog post is a measure of success, but because of this they become obsessed with getting people to comment:

So you're going to write posts which make people react or you're going to encourage people to have chatty conversations in your comments. That's really great, you get lots of comments, but does it mean that what you're providing is really more valuable? [...] I don't believe that more is always better, that more conversation is always better. It's "Is it relevant?" And that's something that we do not know how to measure in numbers.

If the key metric for assessing success is a simplistic one like 'page views' or 'unique users' or 'comments', the emphasis in your web 2.0 strategy will be on creating something populist instead of something that meets a business need.

Let's say you're in eCommerce and you sell pet supplies. Your business goal is not 'get more people onto our website', it is 'get more people buying pet supplies from our website'. The two are very different indeed. A company that believes it just needs to get lots and lots of people through the virtual door will focus on anything that might get it more attention and traffic. A company that understands it needs to attract the right people will focus on communicating with passionate pet lovers who arrive at the site primed to buy.

This is why niche blogs can command higher advertising rates than general news sites. Advertisers can see that more of the people who click their ads will actually buy their products and are willing to pay more for these higher quality visitors.

Equally, let's say you want to 'improve collaboration' internally and to that end you start a wiki. You start measuring activity on the wiki and focus on 'edits per user' as a key metric. You encourage people to edit more, but the quality and amount of collaboration doesn't increase as you expected. Why? Because people learnt that changing a single typo boosts their 'edits per user' count and took a lot less effort than creating a new page, engaging with a co-worker or making a substantive change. Focusing on the wrong numbers changes the wrong behaviour.
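One hedged illustration of a less gameable alternative: weight each edit by how substantive it is, so a typo fix counts for little and creating a page counts for a lot. The weights and thresholds below are entirely made up for illustration; the point is only that a weighted score resists gaming better than a raw count:

```python
def edit_weight(chars_changed: int, created_page: bool = False) -> float:
    """Hypothetical weighting: a one-character typo fix counts far
    less than a substantive edit, and a new page counts most."""
    if created_page:
        return 5.0
    if chars_changed < 10:    # typo-sized change
        return 0.1
    if chars_changed < 200:   # small but real edit
        return 1.0
    return 2.0                # substantive rewrite

# Three edits: a typo fix, a real edit, and a new page.
# A raw edit count treats all three identically; the
# weighted score does not.
edits = [(3, False), (150, False), (800, True)]
raw = len(edits)
weighted = sum(edit_weight(chars, created) for chars, created in edits)
print(raw, weighted)  # raw count vs 0.1 + 1.0 + 5.0
```

Even this can be gamed, of course; any single number can. The real fix is pairing whatever metric you choose with the subjective assessments discussed in Part 4.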

In order to think about metrics, you need to know exactly what you're using social media for. Figure that out and you're halfway there.

Metrics, Part 1: The webstats legacy


Probably the hardest part of any social media project, whether it's internal or external, is figuring out whether or not the project has been a success. In the early days of social media, I worked with a lot of clients who were more interested in experimenting than in quantifying the results of their projects. That's incredibly freeing in one sense, but we are (or should be) moving beyond the 'flinging mud at the walls to see what sticks' stage into the 'knowing how much sticks' stage.

Social media metrics, though, are a bit of a disaster zone. Anyone can come up with a set of statistics, create impressive-sounding jargon for them and pull a meaningless analysis out of their arse to 'explain' the numbers. Particularly in marketing, there's a lot of hogwash spoken about 'social media metrics'.

This is the legacy of the dot.com era in a couple of ways. Firstly, the boom days of the dot.com era attracted a lot of snakeoil salesmen. After the crash, businesses, now sceptical about the internet, demanded proof that a site really was doing well. They wanted cold, hard numbers.

Sysadmins were able to pull together statistics direct from the webserver and the age of 'hits' was born. For a time, back there in the bubble, people talked about getting millions of hits on their website as if it was something impressive. Those of us who paid attention to how these stats were gathered knew that 'hits' meant 'files downloaded by the browser', and that stuffing your website full of transparent gifs would artificially bump up your hits. Any fool could get a million hits - you just needed a web page with a million transparent gifs on it and one page load.

This led to the second legacy: an obsession with really big numbers. You see it everywhere, from news sites talking about how many 'unique users' they get in comparison to their competitors to internal projects measuring success by how many people visit their wiki or blogs. It's understandable, this cultural obsession with telephone-number-length stats, but it's often pointless. You may have tens of thousands of people coming to your product blog, but if they all think it's crap you haven't actually made any progress. You may have 60% of your staff visiting your internal wiki, but if they're not participating they aren't going to benefit from it.

Web stats have become more sophisticated since the 90s, but not by much. Google Analytics now provides bounce rates and absolute unique visitors and all sorts of stats for the numerically obsessed. Deep down, we all know these are the same sorts of stats that we were looking at ten years ago but with prettier graphs.

And just like then, different statistics packages give you different numbers. Server logs, for example, have always provided numbers that were orders of magnitude higher than a service like StatCounter, which relies on you pasting some Javascript code into your web pages or blog. Even amongst external analytics services there can be wild variation. A comparison of StatCounter and Google Analytics shows that numbers for the same site can be radically different.

Who, exactly, is right? Is Google undercounting? StatCounter overcounting? Your web server overcounting by a factor of 10? Do you even know what they are counting? Most people do not know how their statistics are gathered. Javascript counters, for example, can undercount because they rely on the visitor enabling Javascript in their browser. Many mobile browsers, for example, will not show up because they are not able to run Javascript. (I note that the iPhone, iTouch and Android do show up, but I doubt that they represent the majority of mobile browsers.)

Equally, server logs tend to overcount, not just because they'll count every damn thing, whether it's a bot, a spider or a hit from a browser, but also because they'll count everything on the server, not just the pages with Javascript code on them. To some extent, different sorts of traffic will be distinguished by the analytics software that processes the logs, but there's no way round the fact that you're getting stats for every page, not just the ones you're interested in. Comparing my server stats to my StatCounter stats shows the former is 7 times the latter. (In the past, I've had sites where it's been more than a factor of ten.)
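As a rough illustration of why the numbers diverge, here's a sketch of the kind of filtering a log analyser has to do before raw 'hits' start to resemble page views. The log lines and the bot pattern are hypothetical, and real analytics packages are far more thorough:

```python
import re

# Hypothetical access-log lines in Common Log Format; the
# user-agent string is the last quoted field on each line.
log_lines = [
    '1.2.3.4 - - [10/May/2010:10:00:00 +0000] "GET /post.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0"',
    '5.6.7.8 - - [10/May/2010:10:00:01 +0000] "GET /post.html HTTP/1.1" 200 5120 "-" "Googlebot/2.1"',
    '1.2.3.4 - - [10/May/2010:10:00:02 +0000] "GET /style.css HTTP/1.1" 200 900 "-" "Mozilla/5.0"',
]

# A crude stand-in for real bot detection.
BOT_PATTERN = re.compile(r"bot|spider|crawler", re.IGNORECASE)

def is_page_view(line: str) -> bool:
    # Count only HTML requests from non-bot user agents;
    # stylesheets, images and crawler traffic are all 'hits'
    # but not page views.
    agent = line.rsplit('"', 2)[-2]
    path = line.split('"')[1].split()[1]
    return path.endswith(".html") and not BOT_PATTERN.search(agent)

raw_hits = len(log_lines)
page_views = sum(is_page_view(line) for line in log_lines)
print(raw_hits, page_views)  # raw 'hits' overcount real page views
```

Three 'hits', one real page view, and that's with only one bot and one stylesheet in the mix; on a real server the gap is far wider.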

So, you have lots of big numbers and pretty graphs but no idea what is being counted and no real clue what the numbers mean. How on earth, then, can you judge a project a success if all you have to go on are numbers? Just because you could dial a phone with your total visitor count for the month and reach an obscure island in the Pacific doesn't mean that you have hit the jackpot. It could equally mean that lots of people swung past to point and laugh at your awful site.

And that's just web stats. Social media stats are even worse, riddled with the very snakeoil that web stats were supposed to guard against. But more on that another day.

About this Archive

This page is an archive of recent entries in the Measurement category.

Marketing is the previous category.

Microblogging is the next category.

Find recent content on the main index or look in the archives to find all content.
