
Intranet Measurement Isn't Simple Cause and Effect: Find the Metrics That Matter

Measurement is key to identifying where and how to improve our intranets. But how do you measure when there's often no direct cause-effect link?

Measurement goes wrong when it lacks a purpose. When measuring the effectiveness of your intranet or digital workplace, it's important to link your activities with the beneficial outcome you are seeking. By analyzing how strongly your activities affect those outcomes, you'll reach a better understanding of how to track them.

Intranet and digital workplace practitioners work in a complex world. Internal communications, collaboration, knowledge management and findability rarely operate in a simple domain where inputs directly produce expected outputs: we deal with fallible humans, after all.

For example, providing internal communications with engaging, people-focused stories can help create an open and positive mood within an organization, but it isn't the only thing that affects that mood, and it can't be viewed as solely responsible for creating it. Therefore you should measure the effectiveness of the communication itself and not worry about the outcome: you know you're making a contribution.

But say instead you are providing and promoting a new meeting room booking tool. In this case you would expect to be able to detect your change in the graph of meeting room utilization. As well as looking at your adoption stats, you can track what you are achieving in the real world — be it success or failure.

Editor's Note: This is the third and final part of a three-part series on improvement cycles. Read the first part here and the second here.

Effective Intranet Improvement Cycles

We’ve spent some time developing a methodology to help create improvement cycles. Improvement cycles are a management tool you can employ to get to a known goal more effectively. This post will explain how that is done, but first let's define a few terms and discuss why measurement is overrated.


Defining Measures, Proxy Measures and Indicators

We call the data-driven part of the improvement cycle “track,” not “measure.” That’s because improvement cycles can draw on many inputs that aren’t, strictly speaking, measures you can use to assess progress.

Measures

A “measure” measures the outcome directly. Examples in your world could include:

  • Cost savings.
  • Meeting room or desk utilization.
  • Travel costs.
  • Headcount.
  • Call avoidance.

These are real things that happen in the real world to time and money (and very occasionally competitive advantage). An intranet or digital workplace initiative can rarely claim to be the major cause of these outcomes, but it's not unknown. A good HR site provided hand-in-hand with a call center service desk can track call avoidance: which calls should have been dealt with online?

Proxy Measures

Mostly you'll deal with so-called proxy measures. These are things you measure because you can’t measure what you really want. For example, when you publish a news article on the intranet you don’t really want stats on page views, time on page or bounce rates. You want to understand whether hearts have been gladdened, minds enlightened or ideas understood. However, in the absence of wiring all employees up with mind-reading helmets, this isn't possible. So instead you use page stats as a proxy for the opportunity that reading the article offers. Examples of proxy measures include:

  • Usage — How is a resource being used?
  • Adoption — How are people choosing whether or not to use a tool or practice?
  • Reach — How widely can something be theoretically used or adopted?

Virtually everything you deal with here can be categorized as usage and adoption, as the sketch after this list illustrates:

  • Intranet page views, bounce rates, time on page and site and scroll depth.
  • Posts, replies and file uploads.
  • Likes and comments.
  • People profile edits, completions and updates.
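
To make this concrete, here is a minimal sketch of how raw page stats might be rolled up into usage figures. It's illustrative Python: the file name and column names are hypothetical stand-ins for whatever your analytics package actually exports.

    # Minimal sketch: roll raw analytics events up into proxy measures.
    # Assumes a hypothetical CSV export ("page_events.csv") with one row
    # per visit: page, seconds_on_page, pages_in_session. These names are
    # illustrative, not taken from any specific analytics product.
    import csv
    from collections import defaultdict

    views = defaultdict(int)
    total_seconds = defaultdict(float)
    bounces = defaultdict(int)

    with open("page_events.csv", newline="") as f:
        for row in csv.DictReader(f):
            page = row["page"]
            views[page] += 1
            total_seconds[page] += float(row["seconds_on_page"])
            if int(row["pages_in_session"]) == 1:  # single-page visit counts as a bounce
                bounces[page] += 1

    for page in sorted(views, key=views.get, reverse=True):
        print(f"{page}: {views[page]} views, "
              f"{total_seconds[page] / views[page]:.0f}s avg time on page, "
              f"{bounces[page] / views[page]:.0%} bounce rate")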

Indicators

The objective of improvement cycles is to make interventions that make things better. A lot of human experience can’t be described using numbers, or has numbers assigned to it when they shouldn’t be. The word “opinion” has gotten a bad rap: it is dismissed as “subjective” and therefore not scientific or business-like. Yes, and so what? We are humans living in human societies, and opinion arrived long before numbers and measurement.

Likes, comments, survey results, feedback, post-mortems and success stories are all valid forms of input for assessing whether things are getting better or worse, and they give you a clue about what you can do in response. Occasionally these can be put into numbers, but they remain encapsulations of human opinion.


I would go even further. Using an improvement cycle, you can formalize your own opinions or those of your stakeholders into “qualitative evaluation.” You sit down in your improvement cycle review meeting and bring together what you know implicitly. This is what people do instinctively; we evolved to do it. If your projects are too complex — too human! — to measure accurately, use your collective social skills to make the call.

Related Article: Social Enterprise ROI: Measuring the Immeasurable

Segmentation Adds Context

One of the reasons people struggle with measurement is they don’t know what the numbers mean. Is a news story with 450 page views good or bad? Numbers are meaningless without context. You only know that 100 miles per hour is fast because you compare it with everyday driving; it’s dangerously slow if you are an airline pilot!

You need to be able to compare measures or indicators to get them to make sense. We call this segmentation, where you break down a set of people, places or things into different groups:

  • People: individuals, teams, communities or business units.
  • Places: countries, regions, sites or sections.
  • Things: articles, posts, pages or documents.

Comparison reveals meaning. It gives you an idea of scale. It gives numbers a dimension: imagine a flat list of the top news stories. Some have high page views, some have low page views. If you break them out into four topics, “people stories,” “product stories,” “C-suite stories” and “call-to-action stories,” I bet you'd see patterns in how popular each of those types is in aggregate. The objective here is to reveal the reasons behind the differences.
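
Here is a minimal sketch of that kind of segmentation, with invented topic labels and view counts; the point is the aggregate comparison, not the numbers themselves.

    # Minimal sketch of segmentation: compare page views by story type
    # rather than scanning one undifferentiated top-stories list.
    # All titles, topics and view counts below are invented examples.
    from statistics import mean

    stories = [
        {"title": "Meet the Warsaw support team", "topic": "people", "views": 1450},
        {"title": "New CEO's first 100 days", "topic": "C-suite", "views": 2100},
        {"title": "Q3 product roadmap", "topic": "product", "views": 600},
        {"title": "Complete your benefits enrollment", "topic": "call-to-action", "views": 320},
    ]

    by_topic = {}
    for story in stories:
        by_topic.setdefault(story["topic"], []).append(story["views"])

    # Rank topics by average views to reveal patterns across story types.
    for topic, view_counts in sorted(by_topic.items(), key=lambda kv: -mean(kv[1])):
        print(f"{topic}: {mean(view_counts):.0f} avg views across {len(view_counts)} stories")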

Operating Levers of Change

You don’t capture this data out of curiosity or because you are a butterfly collector. You are here to make changes and interventions. So what can you change inside the plan in response to the data? Many variables are still in your hands once your plan is in play:

  • Changes around people: Providing communications, training, coaching, guidance and support.
  • Changes around management: Sponsorship, policy and incentives.
  • Changes around practice: Work methods, editorial policy, design choices and available tooling.

Related Article: Avoid Intranet Pitfalls by Delivering Fast and Thinking Slow

Creating an Improvement Plan: An Example

Say you are planning a collaboration tool roll-out and you are worried it won’t be adopted. Adoption is a proxy for what you really want, which is better collaboration, but that is too high-falutin’ to measure directly. You decide to monitor a bundle of proxy measures showing how project teams are adopting the new spaces:

  • Number of members.
  • Number of posts.
  • Number of replies.
  • Number of file uploads.

You plan to review this once a month, and the administration back-end of your collaboration tool will spit out a report.

You decide to compare how each of the project teams is adopting the tool using the proxy measures above (segmentation) and you decide on the following levers of change:

  • Providing extra guidance.
  • Providing extra training.
  • Sponsorship from the project management office.
  • Withdrawal of competing older tools.

You start operating your plan. Numbers are on the up but tend to stall eight weeks after teams launch, yet three or four project teams are doing far better. On investigation you find the super-successful ones have super-helpful project managers who are championing the change and patiently running quick training sessions. You ready your intervention and schedule an online training refresher for each project at eight and 12 weeks after launch. Subsequently, project spaces catch on more effectively.
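
A review step like that can be semi-automated. Here is a minimal sketch that flags project spaces whose growth has stalled, so you know where to aim the refresher; the team names, weekly totals and 5 percent threshold are all invented for illustration.

    # Minimal sketch: flag project spaces whose activity has stalled.
    # Data shape is hypothetical: cumulative post counts per team, one
    # entry per week since that team's space launched.
    cumulative_posts = {
        "Team Apollo": [4, 13, 27, 45, 67, 92, 119, 121, 122, 123],
        "Team Borealis": [6, 12, 20, 31, 45, 60, 78, 99, 124, 150],
    }

    STALL_THRESHOLD = 0.05  # under 5% growth over two weeks counts as stalled

    for team, counts in cumulative_posts.items():
        latest, two_weeks_ago = counts[-1], counts[-3]
        growth = (latest - two_weeks_ago) / two_weeks_ago if two_weeks_ago else 0.0
        status = "stalled: schedule a refresher" if growth < STALL_THRESHOLD else "healthy"
        print(f"{team}: {growth:.0%} growth over two weeks ({status})")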

Implementing Your Plan

So there we have it: how to come up with an improvement plan which will:

  • Make measurement purposeful again.
  • Help you implement change effectively after launching something.
  • Stop business as usual from becoming a boring, meaningless grind.

Naturally your plan should be within your control and aimed at improvement, but other than that, just start. Like any new habit, you’ll need to keep at it. It’s like dieting or going to the gym: it will make you better and fitter, but it's hard sometimes when you get stressed or busy. If you miss a cycle, don’t stop. Just pick it up again and carry on.

Each time something improves because of your efforts, note it down as a little victory. Organizations (and bosses) think well of those who can prove their successes. Very occasionally that proof comes immediately, but more often it comes slowly and methodically to those with a plan.


About the Author

Chris Tubb

Chris Tubb is an independent digital workplace and intranet consultant based in the UK. Alongside his own consulting practice, Chris recently launched a training and development company called Spark Trajectory, which seeks to equip intranet and digital workplace teams with the skills they need to solve 80 percent of their problems: strategy, governance and measurement.
