What is design accountable for in a product company?

When I started my first in-house design leadership role, I proudly went to my boss and declared, “The design team is accountable for as-shipped design quality!” She smiled and nodded and moved on to the next topic.

“Huh,” I thought, “I didn’t think that would be so easy.”

I had similarly unsettling non-responses when I shared this statement with my peers who lead technology, business, or science teams. “That’s nice,” they seemed to be thinking, “but not very relevant to me.”

Over time, I came to realise that the organisation simply had no clear concept of what design quality meant — and therefore claiming accountability for design quality was like claiming sovereignty over a made-up country.

In this article I’ll explain why that was a problem, how we have addressed it, and offer a general framework for adoption or adaptation by other design teams.

Contents

  • Why does this matter?
  • What is design for?
  • Defining design quality
  • Connecting design quality to business goals
  • Untangling design and product management
  • What about society and environment?
  • Measurement strategies
  • Building metrics into the business
  • Summary
  • Ask the hard questions

Why does this matter?

In the absence of a clearly understood purpose and mandate for design, designers will be understood in terms of the tasks they perform or the output they produce — becoming a team that produces workshops, or user research, or UI designs, in response to business requests. “Design quality” will be interpreted as the quality of these deliverables. This, in turn, makes design ineffective, with designers focusing on the quality of their design deliverables, while their organisations continue to deliver bad experiences to end-users.

To address this challenge, I propose a framework for design accountability and design quality, with a causal model connecting these to product outcomes, user benefits, business value, and systemic impact. I use the design accountability and quality definitions of my team at Evinova as an example — with the caveat that they are work-in-progress and context-specific, and may not be directly applicable in other organisations.

My belief is that clarifying these concepts and their relationships can help design leaders make our contribution comprehensible in a business setting. I also hope it provides a platform for more nuanced conversations with our peers in product, technology, and business leadership roles, about our individual and shared areas of responsibility. Ultimately, this should help design teams to have a greater positive impact on the products and services our organisations ship.

What is design for?

This question, and any answer to it, may only make sense with a contextual qualifier:

What is design for [at organisation X]?

The answer needs to be coherent with the overall organisational purpose:

What is [organisation X] for?

At Evinova, our purpose is to accelerate better health outcomes by providing the life sciences sector with digital products and services that address unsolved challenges in clinical development and healthcare. With this organisational purpose as context, we have defined the purpose of design at Evinova as follows:

Evinova uses human-centred design to create products and services that are valuable for the people who use them.

The key merits of this definition are:

  • If people use human-centred design or “HCD” as a search term they will find robust content our team is happy to be associated with (which is unfortunately not true for “design thinking” or “UX” or “product design”).
  • It clarifies that the object of our work is the company’s products and services — not design deliverables.
  • “Valuable for the people who use them” establishes in plain language the big idea behind concepts like “user needs”, “pain and gain points”, or “jobs to be done”, without tying ourselves to any specific tool or approach.

Obviously, human-centred design is just one philosophy of design — there are others that are equally valid — but HCD is the best fit for our team’s work, and this definition is simple enough to understand and remember without requiring a lot of additional explanation.

Defining design quality

The purpose statement above helps us clarify design’s aims at Evinova. But how do we know if we have reached them? What is good design? What are the dimensions of design quality and how should we measure them? One entry point into defining design quality is to rephrase the purpose statement as an equivalency:

Design quality = User value (of our products and services)

But this just creates a new question — what is user value? And more specifically, what kind of user value does design create?

To answer this question, we reflected deeply on what we care about as a team. What do we spend our time fighting for? What kinds of user feedback make us happy and proud? What keeps us awake at night when we get it wrong? For a statement about user value to resonate, we realised it had to also encapsulate our values as a team.

Through this reflective process, we identified three foundational principles that make sense for us, for the kinds of products we make and the people we serve. We feel that user value depends on solutions that:

  1. are barrier-free;
  2. people like to use;
  3. improve messy reality.

These three principles can be further articulated into six dimensions of design quality: usability, accessibility, usefulness, enjoyment, context-fit, and net-simplicity. These dimensions are what we take accountability for in the Evinova design team.

A diagram showing three pairs of overlapping concepts: usability and accessibility, usefulness and enjoyment, net-simplicity and context-fit.
Evinova Design Quality Framework

Some of these dimensions have broadly understood meanings, while others are guiding concepts that we had been implicitly pursuing without explicitly defining them. We have now established the following set of working definitions:

Usability

Our target users can use it — quickly, without errors, with minimum training and support, in real-world conditions.

Accessibility

All our target users can use it — regardless of their physical, sensory or cognitive abilities.

Usefulness

It helps people progress towards their goals.

Enjoyment

People feel good using it.

Context-fit

It works well within the existing technology, workflow, and normative landscape.

Net-Simplicity

It removes more complexity than it adds to people’s lives and jobs.

When we share these dimensions within the design team and with our non-designer peers, they resonate and help people understand what we are trying to achieve. They also spark some good questions, with the most frequently raised being “how do these design quality dimensions connect to our business goals?”

Connecting design quality to business goals

The business objectives that lead to a product or service being funded and prioritised in a for-profit organisation are typically things with direct impact on current or future profitability — like revenue growth, cost avoidance, or productivity. How are these business benefits linked to the human benefits provided by design?

One line of reasoning, championed by firms such as McKinsey and frog under the banner of “business value of design”, argues that design should adopt these business goals directly. In simple terms, good design makes money, bad design doesn’t. This argument has the unfortunate flaw that it doesn’t match the reality of the digital world around us. Too many successful, market-leading software products are difficult and frustrating to use, and make people miserable. And every designer carries the scars of well-designed products that failed in the market. Clearly additional factors are at play that are outside the designer’s remit, and while design quality certainly contributes to business success, it cannot be directly equated to it.

A second, more nuanced position argues that design should share the same goals as product management. This feels closer to the truth, especially when product management objectives include measures of engagement and satisfaction, in which design clearly has a significant role. However, this leaves designers with no way of understanding or measuring the value of our specific contribution.

I propose a third approach where design quality and product outcomes are connected by a chain of cause and effect. In its simplest form, this chain states:

If the design is good (design quality), then people will use the product (engagement), and the product will work (effectiveness), generating the desired product outcomes.

Causal chain showing design leading to people using the product, and the product working, which reinforce each other and lead to product outcomes.
Causal chain from design to product outcomes

This causal chain helps clarify the different levels of impact that design can have. However, in this form it is overly design-centric — we need to expand the contributors to the causal chain to recognise the role of other disciplines, and extend the chain one step beyond product outcomes to recognise the different forms of impact that products have on the world.

If the design is good, and the sales and marketing is good, and the customer service is good, then people will use the product.

If the design is good, and the science is good, and the technology is good, then the product will work.

If people use the product, and the product works, it will generate the desired product outcomes, delivering business value, user benefit, and systemic impact.

Expanded causal chain in which additional professions — sales and marketing, customer service, science and technology — contribute to the product outcomes together with design.
Expanded causal chain contextualising design as one of many professions contributing to product success

Untangling design and product management

This causal chain positions design with one foot in the world of customer experience (CX), and the other in the world of “making things work”. This sometimes creates confusion between product and design responsibilities, since both designers and product managers need to collaborate with the whole spectrum of disciplines involved in creating products that work and that people use. However, the designer contributes to these outcomes by pursuing design quality (AKA user value), while the product manager needs to orchestrate and prioritise effort across all disciplines in pursuit of overall product outcomes.

The consequence of this view is that every contributing discipline — including design — should be held accountable for discipline-specific quality metrics, and share responsibility for product success metrics. Product managers, meanwhile, should be held ultimately accountable for product success, taking a holistic view across the full spectrum of activities that contribute to it — from sales, marketing, and customer support to design, science, and technology.

What about society and environment?

As a product company, we include societal and environmental impacts in our definitions of product success. This is referenced in the causal model through the concept of “systemic impact” beside user benefit and business value. For example, the products we are building at Evinova aim to deliver systemic impacts including reducing the carbon footprint of drug development, making clinical research more inclusive and equitable, and accelerating the development of new medicines. These goals are baked into our organisational purpose, the products we have chosen to build, and the way we define success of those products.

A different approach might be needed for design teams working in challenge arenas 3 and 4, where the type of intervention required is undefined and may end up not being a product or service at all, but perhaps a policy, a piece of legislation, or an organisation. This kind of work is outside the scope of this article.

Measurement strategies

I don’t fully agree with the maxim “you can’t improve what you don’t measure”. Artisans, cooks, artists, writers, musicians and tradespeople spend their lives honing and improving the quality of their work, without requiring objective validation. However, it is often true that “businesses won’t spend money on improving things they can’t measure”, which for many designers means they can only improve design quality by working nights and weekends! So, for everyone’s sanity and quality of life, we need ways of measuring our design quality dimensions. We currently have qualitative tools that give us insights across all dimensions, but can only quantify a subset. This is very much a work in progress, so I welcome feedback and pointers to measurement tools and approaches that could help.

Level 1: Directly measuring design quality

Usability

Our target users can use it — quickly, without errors, with minimum training and support, in real-world conditions.

Measurement methods: heuristic assessment, usability testing, SUS/UMUX usability questionnaires (deployed both in user testing and real-world use)

Target: 80% agree/strongly agree (top-2-box method) with Q2 of the UMUX-lite questionnaire ([This system] is easy to use).
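
To make the arithmetic concrete, here is a minimal sketch of a top-2-box calculation, assuming responses on a 5-point Likert scale (UMUX-lite items are often administered on a 7-point scale, in which case the threshold shifts accordingly). The function and data are illustrative, not part of any standard tooling.

```python
# Minimal sketch of a top-2-box calculation, assuming a 5-point Likert
# scale where 4 = "agree" and 5 = "strongly agree". Illustrative only.

def top_2_box(responses: list[int], scale_max: int = 5) -> float:
    """Percentage of responses in the top two points of the scale."""
    if not responses:
        return 0.0
    in_top_2 = sum(1 for r in responses if r >= scale_max - 1)
    return 100 * in_top_2 / len(responses)

# Hypothetical responses to UMUX-lite Q2: "[This system] is easy to use"
q2_responses = [5, 4, 3, 5, 4, 4, 2, 5, 4, 5]
print(f"Top-2-box: {top_2_box(q2_responses):.0f}% (target: 80%)")  # 80%
```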

Accessibility

All our target users can use it — regardless of their physical, sensory or cognitive abilities.

Measurement methods: accessibility checklists, accessibility audits, user testing with diverse panels

Target: zero critical defects against WCAG 2.1 AA, using the Deque scoring method. Continuous improvement through recurring audits and remediation planning.
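
As an illustration of how this release gate might be tallied from audit findings, here is a minimal sketch. The record format and severity labels are assumptions for the example, not Deque’s actual scoring methodology.

```python
# Illustrative tally of audit findings against a "zero critical defects"
# gate. Record format and severity labels are assumptions, not Deque's
# actual scoring methodology.
from collections import Counter

findings = [
    {"wcag": "1.4.3", "severity": "serious"},   # insufficient contrast
    {"wcag": "2.1.1", "severity": "critical"},  # not keyboard-operable
    {"wcag": "4.1.2", "severity": "moderate"},  # missing accessible name
]

by_severity = Counter(f["severity"] for f in findings)
print(dict(by_severity))
print("Release gate passed:", by_severity["critical"] == 0)  # False
```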

Usefulness

It helps people progress towards their goals.

Measurement methods: user interviews, SUS/UMUX usability questionnaires (deployed in user testing and real-world use)

Target: 80% agree/strongly agree (top-2-box method) with Q1 of the UMUX-lite questionnaire, alternate version ([This system] does what I need it to do).

Enjoyment

People feel good using it.

Measurement methods: user interviews, contextual enquiry

Target: TBD

Additional notes: Measuring enjoyment in product use is an active area of research and a number of scales have been proposed, but we haven’t found any short, widely used validated questionnaire. Potential routes to explore include designing a simple 1-question measure to accompany the UMUX-lite questions, or using automated sentiment analysis on interview transcripts.
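
As a sketch of the second route, the snippet below scores invented transcript excerpts with an off-the-shelf sentiment model via Hugging Face’s transformers library. This is exploratory only — a general-purpose sentiment model is not a validated enjoyment measure.

```python
# Exploratory sketch: scoring interview excerpts with a general-purpose
# sentiment model. Not a validated enjoyment measure; excerpts invented.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

excerpts = [
    "Honestly, logging visits takes half the time it used to.",
    "I dread the monthly reconciliation screen.",
]

for text, result in zip(excerpts, classifier(excerpts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```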

Context-fit

It works well within the existing technology, workflow, and normative landscape.

Measurement methods: contextual enquiry, shadowing, user interviews

Target: TBD

Additional notes: Contextual enquiry often reveals context-fit issues that are otherwise hard to detect. User workarounds and patches, misalignments between intended and actual use, and blockers that prevent or limit use can often be observed in the field. One possible approach to quantifying this dimension would be to grade these issues from minor to critical, similar to the approach used for technical bugs or accessibility defects.
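
A sketch of what that grading could look like in practice follows; the severity levels and the observed issues are invented for illustration.

```python
# Sketch of grading context-fit issues from minor to critical, mirroring
# defect triage for bugs or accessibility. All examples are invented.
from enum import IntEnum

class Severity(IntEnum):
    MINOR = 1     # cosmetic friction, trivial workaround
    MODERATE = 2  # workaround exists but costs time
    MAJOR = 3     # intended and actual use clearly misaligned
    CRITICAL = 4  # blocker that prevents or limits use

observed_issues = [
    ("Nurses re-enter vitals into the EMR by hand", Severity.MAJOR),
    ("Shared tablet logins break the audit trail", Severity.CRITICAL),
    ("Terminology differs from site SOPs", Severity.MODERATE),
]

critical = [issue for issue, sev in observed_issues
            if sev is Severity.CRITICAL]
print(f"{len(critical)} critical context-fit issue(s): {critical}")
```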

Information exchange and interoperability are specific types of context-fit which can be technically specified when domain-level standards (such as FHIR) or dominant platform players (such as the EPIC EMR) exist. However, technical interoperability does not guarantee context-fit, nor is it always a necessary condition for it.

Net-Simplicity

It removes more complexity than it adds to people’s lives and jobs.

Measurement methods: contextual enquiry, shadowing, user interviews

Target: TBD

Additional notes: One possible approach to measuring this dimension could be to define a set of “net simplicity heuristics”. These could include aspects such as transferability of existing user knowledge; hand-offs between systems in cross-product workflows; avoidance of double data entry; avoidance of multiple sources of truth for the same information; and reduction in total number of systems used.
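
As an illustration, such heuristics could be scored as a simple per-release checklist; the pass/fail scheme below is an assumption for the sketch, not an established method.

```python
# Sketch of a "net simplicity heuristics" checklist, scored per release.
# Heuristics paraphrased from the list above; pass/fail scoring assumed.
heuristics = {
    "transfers existing user knowledge": True,
    "clean hand-offs in cross-product workflows": True,
    "no double data entry": False,
    "single source of truth for each piece of information": False,
    "reduces total number of systems used": True,
}

passed = sum(heuristics.values())
print(f"Net-simplicity: {passed}/{len(heuristics)} heuristics satisfied")
for name, ok in heuristics.items():
    print(f"  [{'x' if ok else ' '}] {name}")
```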

Level 2: Are people using it and are they happy with it?

These metrics can tell us something about design quality, if contextualised with other sources of insight.

Customer Satisfaction (CSAT) — this is closely linked to design quality, especially if it is used for distinct aspects of the customer experience, e.g. product, customer support, and training. However, technical quality can have a huge impact on this, so if your product has performance or reliability issues this may drown out any signal about design.

Monthly or Daily Active Users (MAU, DAU) — generally speaking, rapid growth in user base is a sign of good product-market fit, of which design is an important component — but other aspects such as pricing and distribution may have greater positive or negative impact.

User Retention — user retention or churn metrics, with appropriately tailored definitions of what constitutes an “active” or “inactive” user, are closely related to design quality. If users find value in your product, they will keep using it. However, this has similar caveats to CSAT around technical quality issues.

Task completion/abandonment — one of my favourite product metrics — do users achieve the tasks for which we designed the product? It is also one of the hardest measurements to define, as it requires both a clear definition of what the product is for and a method of measuring whether that has occurred (see the sketch after this list).

Duration of sessions (measured in time or number of tasks) — this metric is a double-edged sword — sometimes the best design enables users to quickly find what they need and move on, so a “bounce” may actually be a happy user. However, for some products where the benefits are obtained through sustained use, longer sessions are better.

Customer support logs — an absolute goldmine for identifying and quantifying pain-points that the product is creating, some of which will be design related.
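
To make the task-completion idea concrete, here is a sketch that computes per-task completion rates from session records. The event schema and task names are hypothetical — the hard, product-specific work is defining what counts as “completed” for each task.

```python
# Sketch: per-task completion rates from session records. The schema and
# task names are hypothetical; defining "completed" is the hard part.
from collections import defaultdict

sessions = [
    {"task": "submit_patient_diary", "completed": True},
    {"task": "submit_patient_diary", "completed": False},  # abandoned
    {"task": "submit_patient_diary", "completed": True},
    {"task": "review_adverse_event", "completed": True},
]

stats = defaultdict(lambda: {"started": 0, "completed": 0})
for s in sessions:
    stats[s["task"]]["started"] += 1
    stats[s["task"]]["completed"] += int(s["completed"])

for task, c in stats.items():
    rate = 100 * c["completed"] / c["started"]
    print(f"{task}: {rate:.0f}% completion ({c['started']} sessions)")
```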

The ideal approach is to view these metrics in combination with direct measurements of design quality, and with contextual information about what else may be contributing (e.g. technical issues, customer relationship issues). However, I argue that design should never take sole accountability for these product-level metrics and they should never be used on their own as measures of design quality.

Building metrics into the business

We’re using these measurement strategies in two ways within Evinova. First, to contribute to the overall scorecard and OKRs for the business. Second, within the product teams, to roll out some of the well-defined measures (like UMUX-lite, CSAT, and accessibility audits) across all our products, creating a consistent baseline of measurement. We also intend to explore and experiment with metrics for some of the harder-to-measure concepts like enjoyment, context-fit, and net-simplicity. I’d love to hear from the community if you know of measurement approaches that we might be able to apply.

Summary

For design to be understood in a business context, its purpose needs to be defined in a way that is distinct from, and compatible with, the purpose of other professions. At Evinova we have defined our purpose with the statement:

Evinova uses human-centred design to create products and services that are valuable for the people who use them.

Design also needs quality criteria. Based on our purpose statement, we frame design quality in terms of user value:

Design quality = User value (of our products and services)

This means we take accountability for designing solutions that:

1. are barrier-free (usability and accessibility);

2. people like to use (usefulness and enjoyment);

3. improve messy reality (context-fit and net-simplicity).

We have qualitative methods to assess all of these dimensions, and are working on establishing more systematic quantitative measurement as part of our overall OKR framework.

Finally, we recognise that we are contributors within a larger, multi-disciplinary system, and share responsibility with our colleagues in other disciplines for making products that work, that people use, and that deliver outcomes. Ultimately our collective goals are user benefit, business value, and positive systemic impact.

Ask the hard questions

I hope this essay provokes in-house designers to move on from “ROI of design” rhetoric and ask themselves some hard questions:

  • What is design for [in my organisation]?
  • What is good design?
  • How can we measure it?
  • What does that mean for the way we organise and incentivise design in our business?

I’m sure your answers will be different from mine; and I’m equally sure that examining these questions will help you better understand and position the value of design in your business.
