Designing Voice BMI

Marcellus Pelcher
3 min read · Aug 21, 2017

“Good design, when it’s done well, becomes invisible. It’s only when it is done poorly we notice it.” — Jared Spool. In this post I will highlight the hidden design elements I implemented in a small program called Voice BMI: given your height and weight, it gives you your BMI. Below, I go through each of these design elements in turn.
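For reference, BMI is weight in kilograms divided by the square of height in meters. A minimal sketch of that calculation from the imperial units Voice BMI accepts (the function name is mine, for illustration, not the app’s actual code):

```python
def bmi_from_imperial(feet: int, inches: int, pounds: float) -> float:
    """Compute BMI from a height in feet and inches and a weight in pounds."""
    height_m = (feet * 12 + inches) * 0.0254  # inches -> meters
    weight_kg = pounds * 0.453592             # pounds -> kilograms
    return weight_kg / height_m ** 2

# "5 8, 200 pounds" works out to a BMI of roughly 30.4.
print(round(bmi_from_imperial(5, 8, 200), 1))
```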

Introduction

Telling the user who you are is a good way to transition into a voice app. It is a mechanism to handle the handoff from a god bot, like the Assistant or Alexa, to a voice app. It is also good manners for your voice app to introduce itself.

Explicit Question

“What is your height and weight?” In testing, I found that stating what the app can do, rather than explicitly asking a question, left users confused about what to do next. An explicit question helps move the conversation forward.
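A welcome turn that combines the introduction above with an explicit question could be as simple as the sketch below; the handler name and plain-string response are placeholders, not the Actions on Google API:

```python
def welcome() -> str:
    """Opening turn: introduce the app, then end with an explicit question."""
    return ("Hi, this is Voice BMI. I can calculate your body mass index. "
            "What is your height and weight?")
```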

Natural Speech

People tend to say things like “I am 5 8, 200 pounds,” so the system accepts implicit feet and inches. If the user only says her weight, the system will ask for her height, and the same works if she only says her height. This is a type of error that is recoverable without making the user say everything again. Notice that the system does not teach the user how to give the input or what units to use. Teaching users how to speak is not natural; we as humans do not say to each other, “If you want me to talk about sports, say sports!” If we want a natural experience for our users, we should not teach them how to use the system until they ask or there is a no-match error.
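A rough sketch of how that parsing and recovery might work, assuming plain regular expressions rather than the natural-language understanding the app actually relies on:

```python
import re
from typing import Optional, Tuple

def parse_utterance(text: str) -> Tuple[Optional[Tuple[int, int]], Optional[float]]:
    """Pull (feet, inches) and pounds out of free-form speech, if present."""
    height = weight = None
    # Weight: a number followed by "pounds", "lb", or "lbs".
    weight_match = re.search(r"(\d+(?:\.\d+)?)\s*(?:pounds|lbs?)\b", text, re.I)
    if weight_match:
        weight = float(weight_match.group(1))
    # Height: "5 foot 2", "5 8", or just "6 feet" (implicit units allowed).
    height_match = re.search(r"\b([3-7])\s*(?:foot|feet|ft)?\s*(\d{1,2})?\b", text, re.I)
    if height_match:
        height = (int(height_match.group(1)), int(height_match.group(2) or 0))
    return height, weight

def next_prompt(height, weight) -> Optional[str]:
    """Re-prompt only for the missing piece instead of starting over."""
    if height is None and weight is None:
        return "What is your height and weight?"
    if height is None:
        return "And how tall are you?"
    if weight is None:
        return "And how much do you weigh?"
    return None  # nothing missing; go ahead and compute the BMI
```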

Implicit Confirmation

There are two types of confirmations, explicit and implicit. Explicit confirmation requires the user to say “yes” or “no” before continuing. Implicit confirmation tells the user what the system is about to do, but continues without prompting for an answer. Explicit confirmation is good for scenarios where the cost of getting it wrong is high, e.g. booking an airline ticket. Implicit confirmation is good for scenarios where the cost of getting it wrong is low, e.g. playing a game.
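Voice BMI uses the implicit style, since the cost of a misheard number is low. A sketch of the two styles side by side (the function and prompts are illustrative, not taken from the app):

```python
def confirm(summary: str, high_stakes: bool) -> str:
    """Choose a confirmation style based on the cost of getting it wrong."""
    if high_stakes:
        # Explicit: stop and wait for a yes or no, e.g. booking a flight.
        return f"{summary}. Is that right?"
    # Implicit: state what the system understood and keep the turn moving.
    return f"Okay, {summary}."
```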

Speech Mirroring

While testing, I found it jarring to give input in one format and hear it back in another, so I implemented mirroring. If the user says “I am 5 foot 2, 135 pounds,” the system will say “At 5 foot 2 and 135 pounds...”
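A minimal sketch of that mirroring, assuming the parser keeps the raw phrase the user spoke (the helper below is hypothetical):

```python
def mirror_height(original_phrase: str, feet: int, inches: int) -> str:
    """Echo the height back in the same wording the user chose."""
    phrase = original_phrase.lower()
    if "foot" in phrase or "feet" in phrase:
        unit = "foot" if "foot" in phrase else "feet"
        return f"{feet} {unit} {inches}"
    # The user spoke implicit units ("5 8"), so answer the same way.
    return f"{feet} {inches}"

# "I am 5 foot 2, 135 pounds" -> "At 5 foot 2 and 135 pounds, ..."
print(f"At {mirror_height('I am 5 foot 2, 135 pounds', 5, 2)} and 135 pounds, ...")
```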

Implicit Understanding

Sometimes users do not say exactly what they mean. “Thank you” implicitly means that the user is done with the conversation. See conversational implicature.
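One way to honor that implicature is to treat a small set of polite closers as an end-of-conversation signal; a sketch (the phrase list is my own, not the app’s):

```python
CLOSING_PHRASES = {"thank you", "thanks", "that is all", "that's all", "goodbye", "bye"}

def wants_to_end(utterance: str) -> bool:
    """Treat polite closers like "thank you" as a request to end the conversation."""
    return utterance.lower().strip(" .!") in CLOSING_PHRASES
```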

Even in a small program such as Voice BMI, there are plenty of design elements. I foresee that in the near future, companies will spend tens of millions of dollars to create large conversational interfaces. Psychologists, designers, writers, and engineers will come together to create systems for companies that adapt to their customers, treat their customers with respect, and always represent their brand in a positive way. These systems will power things like fast-food drive-throughs, employee benefits help, and even teaching.

Voice BMI has been approved for Actions on Google but has not yet been released to the public. There are still design improvements that could be made to the app, but before doing too much gold plating, it is best to get something out as soon as possible so users can help shape the design.

Related Links:

Voice BMI Botsociety Design Mockup Path 1
Voice BMI Botsociety Design Mockup Path 2
Design is [Helpful]
Conversation Design: Speaking the Same Language
Can we talk? Voice, accessibility, bots, and building the future of conversational content
In Conversation, There Are No Errors

Note: These are solely my opinions and are not necessarily the opinions of my employer.
