
Posted by Jiri Jerabek

What we learned about validating concepts

During the discovery phase of an innovation project, when the problem space is explored and the needs and behaviours of the target audience are mapped, the project team will aim to distil the best ideas into concepts. At this point they inevitably face a challenge: how to validate whether the concepts will be understood by the target audience, whether people will find them useful and, crucially, whether these concepts can find a place in users’ daily lives after the initial excitement of a new thing fades away. This blog post gives an insight into how our Internet Research and Future Services team in BBC R&D approached this challenge on a recent project and what we learned along the way.

From problem-space exploration to a concept

Many R&D projects start with a very open brief, so project teams will spend much of their time exploring the problem space. In the content discovery project (still under wraps) I am using as an example here, the research we conducted allowed us to understand the needs of the target audience, draw insights from people’s behaviour and ultimately synthesise these findings with our domain expertise and knowledge.

This inspired initial ideas that the team evaluated in the form of sketches, storyboards and low-fidelity interactive prototypes. Through the process of evaluation and iteration the project team developed two concepts in the form of semi-interactive prototypes. Although the concepts’ hypotheses were based on our domain expertise and findings from our research, we were aware that they rested on a number of untested assumptions.

In the spirit of Lean Startup, the team decided to validate whether people would use our propositions before we invested too much resource into building a fully functional alpha prototype. There are several ways to find out if a service or a product will find its place in people’s lives and whether people will keep using it after the initial interest fades away. During the iterative phase of development we chose to evaluate our concepts frequently with informal guerrilla trials and formal lab tests. Both methods provided interesting findings, but neither was able to answer our most important questions. Here’s why:

Guerrilla trials

This methodology has been well described by the GDS team and on UX Booth, and its most obvious benefits are well known: guerrilla trials are easy to organise, quick and cheap to execute, and teams can run them frequently at any stage of the design and development process. We found guerrilla trials helpful and here’s what worked really well for us:

  • Getting everyone to test: We got everyone in the team (designers, developers, editors and project managers) to go out onto the street and test prototypes with the public. This allowed us to build stronger empathy with potential users and gave us the opportunity to see people’s first reactions to our concepts. Since everyone took part, it wasn’t necessary to craft a report that would likely end up gathering dust anyway; everyone was already familiar with the findings.
  • Bouncing the nettles out of the way: Although the cost of involving every member of the project team can be significant, for us it was definitely worth it. For example, taking part in research activities helped our project manager Katie better understand the limitations, obstacles and benefits of user trials. As Katie said, being part of these sessions helped her to “bounce the nettles out of the way”: to make sure that the team didn’t have to deal with unnecessary challenges, to better understand how to plan ahead, and to make sure that everyone on the team was kept busy.
  • Team bonding: Guerrilla trials acted as a great team bonding activity: we planned them together, we prepared the prototypes together and everyone in the team went out to test the prototypes with the public. It was a fun thing to do and helped the team bond outside of the office environment.

The guerrilla trials gave us insight into people’s initial reactions to our prototypes and helped us evaluate assumptions.

There were obstacles too. We found that guerrilla tests are well suited for evaluating very specific research questions, especially if they relate to people’s first reactions and initial impressions (e.g. “What feels more like browsing a digital magazine: scrolling vertically or horizontally?”). However, guerrilla trials can’t provide an answer to multifaceted and more open questions (e.g. “Would this service be used more in the morning, or in the afternoon?”), as people respond to these questions with their opinions, which give only an indication of the behaviour that would occur were they to start using the service in their daily lives.

Guerrilla trials might be fine for testing a feature or investigating people’s first reactions to your concept, but the method cannot validate whether people would actually use your proposition.

Lab tests

As the guerrilla trials didn’t help us to validate whether our concepts were heading in the right direction, we decided to complement them with formal task-based think-aloud lab tests.

Most of the team members were able to take part, observe the sessions, take notes and organise the observations into findings. Getting everyone to observe and actively take notes really helped team members understand and internalise the findings of the trials, something that is very difficult to achieve if the team doesn’t take part in the activity and is instead only given a research report or presentation at the end of the study.

The trials uncovered a large number of interesting findings. Some of them confirmed our existing assumptions and some were rather surprising. People tended to say that they liked the concept and talked about how they would use it and what they thought about it. Most of the feedback related to usability problems in aspects of the design we didn’t get right. It seemed that the concept was spot on and that all we needed to do was build a higher-fidelity version of the prototype and fix the usability problems.

This would have been a big mistake though. As with the guerrilla trials, the feedback we collected didn’t reveal much about the behaviour that would occur when people used the prototype in their daily lives, or about the habits that would or wouldn’t form. If we hadn’t realised this subtle difference between assumed behaviour and actual behaviour, we would have focused on fixing usability problems, not knowing that we were trying to ‘fix’ a concept that might not be strong enough.

How to evaluate concepts?

The guerrilla trials were cheap and easy to conduct but gave us only an indication of users’ first impressions and immediate feelings. The lab trials provided a more thorough set of findings, but most of them were usability issues specific to the implementation of the prototype. We concluded that neither of these methods, even combined, could answer the most important questions that every team developing innovative new products and services should be asking: “Will people use it? Will it fit into their lives? Will they build any habits and routines around our proposition? Should we pivot?”

To resolve this, we had to look for another method: one that would help us to understand how people use our proposition on a daily basis and in different contexts, and that would let us analyse the different journeys taken within it. Equally important was to use a method that would help the team internalise all research findings through participation. As I described earlier, getting everyone in the team on board with the research findings is often problematic. Research reports and presentations are, in the best-case scenario, only skimmed through, and a lot of granular and important findings are forgotten. The best approach to this problem we have found so far is to allow all team members to participate in research activities.

Letting people use ‘it’ in their daily lives

After considering several different approaches, the project team conducted a diary study with 20 participants. The participants were provided with a working, high-fidelity mobile prototype and were instructed to use it for two consecutive weeks. We also asked participants to post all of their experiences with the prototype into a private group on Google+. Keeping the ‘diaries’ online allowed the project team to follow participants’ activities as they were reported. This proved to be a really great technique, as more traditional diaries wouldn’t allow us to have such frequent interaction with the participants.

The participants’ comments were split into two categories. The first related to the context of use, such as location (e.g. on the bus), time (e.g. over the lunch break), current activities (e.g. while waiting for someone) and social context (e.g. with a partner). The second category covered the participants’ experiences of the prototype itself (e.g. what journey through the prototype they took, what points of the experience were really important or annoying, how they used specific features, etc.).

Besides gathering qualitative data describing participants’ motivations, felt experience and reflective thoughts, the team implemented metrics that allowed us to quantify participants’ activity with the prototype, letting us triangulate the qualitative and quantitative findings.
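To give a flavour of what such instrumentation can look like, here is a minimal sketch in Python. It is illustrative only: the event names, fields and aggregation are assumptions, not the actual metrics our prototype recorded.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class UsageEvent:
        """One logged interaction with the prototype. Field and action
        names are hypothetical examples, not the project's actual
        instrumentation."""
        participant_id: str
        action: str          # e.g. "open_app", "view_item", "share"
        timestamp: datetime

    def daily_activity(events):
        """Count events per participant per day, ready to be set against
        what participants said they did in their diary posts."""
        counts = {}
        for e in events:
            key = (e.participant_id, e.timestamp.date().isoformat())
            counts[key] = counts.get(key, 0) + 1
        return counts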

To validate participants’ interest in the proposition outside of the artificial setting of the diary study, and to evaluate whether people could build habitual use of the prototype after the initial novelty fades away, the project team intentionally kept the prototype live for an additional two weeks after the ‘end’ of the diary study was announced. Thus, the participants did not feel obliged to use the prototype, and they were not incentivised for any further activity other than by the experience provided by the prototype itself. Of course we saw a significant drop in participants’ activity, but crucially, we were able to validate how many participants would continue to use the proposition and how it fitted into their daily routines.
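A first-pass retention measure can be computed directly from usage logs of this kind. The sketch below, building on the hypothetical events above, counts how many participants came back in each week after the announced end; the shape of the calculation is ours, not a description of the team’s actual analysis.

    def returning_participants(events, announced_end, weeks=2):
        """Count participants still active in each week after the announced
        'end' of the diary study, when use was unprompted and unincentivised.

        `events` is an iterable of (participant_id, date) pairs;
        `announced_end` is a datetime.date."""
        retained = {week: set() for week in range(1, weeks + 1)}
        for participant_id, day in events:
            offset = (day - announced_end).days
            if 0 < offset <= weeks * 7:
                retained[(offset - 1) // 7 + 1].add(participant_id)
        # How many distinct participants returned in each post-study week.
        return {week: len(ids) for week, ids in retained.items()}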

Did it work?

As I mentioned earlier, no method is a silver bullet for all research and prototyping needs, and a diary study is no different. However, this method helped the team achieve what we couldn’t with guerrilla and lab trials. Participants lived with our proposition for longer, which allowed us to observe patterns of behaviour and context of use. We were able to learn not only from what people said about our proposition but, most importantly, from what they actually did with it and how they later reflected upon that experience. By leaving the prototype with the participants for two additional weeks after the study ‘ended’, the project team were able to gather relatively reliable data about retention and habit formation. Finally, combining qualitative and quantitative approaches allowed us to validate the data we gathered. To conclude, we can only recommend this approach to teams that understand the importance of validating a concept before further development.

If I stopped here, you would hear only half of the story. Alongside the diary study, the team experimented with a new framework, tentatively named Lean Experience Mapping.

Lean Experience Mapping*

As I mentioned above, the diary study participants’ comments posted on Google+ could be roughly divided into two categories: A) context in which they used the prototype and B) how they experienced using the prototype. Based on this we created two maps on large walls in our office: a Contextual Map and an Experience Map. As participants posted their comments, their posts were captured and attached to the respective map. Let me describe this process in detail:

Contextual Map

The Contextual Map recorded the context in which participants used the prototype. The basis for the Contextual Map was a grid of nine columns. The first eight columns split the day into roughly 2-hour segments (early morning, morning commute, late morning, lunchtime, afternoon, evening commute, evening and night / bedtime). The ninth column covered the weekend.

The grid also had three rows – the first to record what participants reported they were “doing”, the second for what they were “thinking” and the third for what they said they were “feeling”.

When a participant posted a comment to the Google+ private group, we captured it on a Post-it note and placed it in the appropriate place on the Contextual Map.

As more Post-it notes accumulated over the first days of the study, the Contextual Map started to resemble a bar chart, or a ‘heat map’ of activity. Without reading each individual Post-it, it became very apparent at what times most of the participants’ activity happened. A daily rhythm of usage started to appear and we were able to trace the first signs of habits forming. The Contextual Map allowed us to display the study data in a very glanceable way that was easy for anyone to understand and interact with.
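If you wanted to reproduce the wall digitally, the mapping is simple enough to sketch in a few lines of Python. The nine columns come from the map itself, but the hour boundaries below are assumptions; the original only names the segments, not their exact times.

    from collections import Counter

    # The Contextual Map's columns: eight weekday segments plus the weekend.
    SEGMENTS = ["early morning", "morning commute", "late morning",
                "lunchtime", "afternoon", "evening commute", "evening",
                "night / bedtime"]

    def column_for(posted_at):
        """Map a post's datetime to a Contextual Map column.
        The hour boundaries are assumed for illustration."""
        if posted_at.weekday() >= 5:                  # Saturday / Sunday
            return "weekend"
        for segment, end_hour in zip(SEGMENTS[:-1],
                                     (8, 10, 12, 14, 17, 19, 22)):
            if posted_at.hour < end_hour:
                return segment
        return "night / bedtime"

    def heat_map(posts):
        """Count Post-its per (column, row) cell, where row is "doing",
        "thinking" or "feeling": the bar-chart effect seen on the wall.
        `posts` is an iterable of (datetime, row) pairs."""
        return Counter((column_for(ts), row) for ts, row in posts)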

Experience Map

Opposite the Contextual Map we created the Experience Map. This map was made of several nodes, each of which represented a specific point in a user journey through the prototype. Each node had five levels describing the experience users felt at that point of the journey. The levels ranged from the poorest to the best experience: not usable, usable, meets expectations, exceeds expectations and delights.

The nodes were ordered to follow the ideal user journey through the prototype as defined by the UX team.

When a study participant posted a comment on Google+, a member of our team captured the comment on a Post-it note and attached it to the Experience Map at the relevant point of the user journey. The level of experience described in each comment was subjectively evaluated and the Post-it was positioned on the node’s vertical axis from best to worst experience.

After only a few days of the study, the Post-its started to accumulate around the nodes of the Experience Map. The clusters visually highlighted how people felt about the experience at each step of the user journey. The nodes with positive responses had Post-its accumulated around the top and those with negative responses had them accumulated around the bottom.
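The same structure translates naturally into data. In the sketch below, the five levels come straight from the map, while the journey node names in the example are hypothetical placeholders (the actual prototype is still under wraps); the per-node average is one simple way to summarise where the Post-its cluster.

    from collections import defaultdict

    # The five experience levels from the Experience Map, poorest to best.
    LEVELS = ["not usable", "usable", "meets expectations",
              "exceeds expectations", "delights"]

    def summarise_nodes(ratings):
        """Cluster subjectively rated comments by journey node.

        `ratings` is an iterable of (node, level) pairs, e.g.
        ("browse feed", "delights"); node names are hypothetical.
        Returns each node's mean level, 0 (not usable) to 4 (delights)."""
        clusters = defaultdict(list)
        for node, level in ratings:
            clusters[node].append(LEVELS.index(level))
        return {node: sum(ls) / len(ls) for node, ls in clusters.items()}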

After the first two weeks of the study, both maps were saturated with comments. They not only held a lot of detailed data about each participant’s granular experiences but also gave an immediately glanceable perspective of which points of the experience needed iterating, and in which moments of the day people used our prototype most.

Lean Experience Mapping helped us to:

  • Avoid Death by Presentation: As we identified before, allowing all team members to participate in the research activity helps them to build empathy with the audience and internalise outcomes of the research, something that is quite difficult to achieve through a traditional report or presentation.
  • Save time on data analysis: Mapping the study data on the walls in our office and analysing as we went saved us considerable time over the more traditional analysis at the end of the study. Having the maps also allowed us to probe participants about patterns we saw emerging throughout the study.
  • Make the process more transparent: Two large and highly visible maps in the office triggered a lot of interest and conversations not only in the project team but across the whole office. The maps were a conversation starter and point of interest and allowed the whole UX and research process to become more transparent to everyone in the broader IRFS team.
  • Create a tool, not a deliverable: When filled with data captured in the diary study, the Lean Experience Map provides a glanceable view of how people feel about each specific point of their experience with the proposition. Based on this, the project team can identify parts of the prototype or user journey that need to be iterated. Later, after implementing the changes, the prototype can be tested again to reveal whether the amendments improved the experience at that specific point. This process can be run as part of an iterative development cycle to provide an easy-to-understand, glanceable view of the sentiment of the user experience.


* Traditional Experience Maps, as described by Adaptive Path, are documents visualising the journeys users take when experiencing a product or a service. This visualisation often suffers the same fate as other traditional UX deliverables: the Experience Maps are visually appealing but act as a milestone in a project, only to be forgotten and left to collect dust. Traditional Experience Maps are deliverables. Our Lean Experience Mapping framework is inspired by Lean UX methodology and is focused more on activities and delivering experiences than on crafting a report or deliverable, hence the “Lean” in the name of the framework.