UX Design for Voice Interaction

Daimler AG announced its new MBUX system at CES 2018. Among the many ways of interacting with an in-car infotainment system, advances in speech technology have made the new MBUX much easier to use. I spearheaded the conceptualization and design of the new voice interface for MBUX.

The following summarizes a two-year project. There is much more to the story than what appears here, but covering everything in detail would keep you here forever!

The Customer

I started by asking for any existing information about the customer, but unfortunately no one had research on who would be using the system. No one even knew which car lines the new system would go into, so there was very little formative information to start on the right foot. Luckily, I was able to dig up research from an older project about S-Class customers (like the man seen above) that I could use to form some basic assumptions. It's not the ideal way to start, but it's better than nothing.

This is a risky starting point because people who purchase and drive the S-Class are usually at the upper end of the "premium luxury" scale and not typically representative of the more common customer. It is therefore not a fair assumption that the particular preferences and tendencies of this market segment apply across all car lines.

On the positive side, much of fundamental human behavior does not change, so we could use certain learnings from that research to build a testable thesis for how to proceed.

Problem Discovery

There was only one thing my manager asked me to do:

Create the next-generation voice interface for Mercedes-Benz.

While perfectly fine as a request, a project defined this vaguely means the design work begins with problem framing and problem discovery before digging into solutions.

To understand the problem better, I studied different voice interfaces, including previous infotainment systems in older-model Mercedes cars. Older-generation systems used a text-based interface like the one in the above image. The core problem is that it required the user to read on-screen text while driving. Reading while driving is a cognitively demanding task, which makes it a potential safety issue!

The Solution

The core concept was to reduce cognitive load by making a UI that relies more on recognition, rather than reading and rote memorization.

Minimize the user's memory load by making elements, actions, and options visible. The user should not have to remember information from one part of the interface to another. Information required to use the design (e.g. field labels or menu items) should be visible or easily retrievable when needed.

The remaining design tasks focused on creating archetypal interfaces for specific speech domains and use cases, establishing a systemic UI concept for voice interaction. Starting with wireframes and iterating with prototypes, I conceived specific UI views with useful variations depending on what each voice request needed to support it, with the goal of creating a whole that is greater than the sum of its parts.

Listening States Visualized

Older systems relied only on audible tones to convey when the system was ready for interaction. In testing, we found that people could miss the sounds depending on the situation, so making these state changes visible and obvious in the new system was a big benefit.
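The idea of pairing each listening state with both a tone and a visible indicator can be sketched as a simple state model. This is purely an illustrative sketch; the state names, feedback mapping, and transition rules below are my own assumptions for demonstration, not the production MBUX implementation:

```typescript
// Illustrative sketch: modeling voice-session states so that each state
// change can drive both an audible tone and a visible on-screen indicator.
type VoiceState = "idle" | "listening" | "processing" | "responding";

// Hypothetical feedback per state: pairing a sound with a visual cue,
// so a missed tone is still covered by the on-screen indicator.
const feedback: Record<VoiceState, { tone: string | null; visual: string }> = {
  idle:       { tone: null,       visual: "mic-icon-dimmed" },
  listening:  { tone: "chime-up", visual: "waveform-animated" },
  processing: { tone: null,       visual: "spinner" },
  responding: { tone: null,       visual: "transcript-highlight" },
};

// Allowed transitions keep the session flow predictable.
const transitions: Record<VoiceState, VoiceState[]> = {
  idle:       ["listening"],
  listening:  ["processing", "idle"],
  processing: ["responding", "idle"],
  responding: ["idle", "listening"],
};

function transition(current: VoiceState, next: VoiceState): VoiceState {
  if (!transitions[current].includes(next)) {
    throw new Error(`Invalid transition: ${current} -> ${next}`);
  }
  // In a real UI, this is where the tone (if any) would play and the
  // visual indicator would update, keeping the two cues in sync.
  return next;
}
```

The point of the sketch is that the visual cue is driven by the same state change as the tone, so the two can never disagree; a driver who misses the chime still sees the waveform animation.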

Interface Archetypes

Instead of creating every permutation of every UI instance needed for production, I designed a series of archetypal screens that demonstrated various pieces of functionality. These archetypes were used to build the core functionality of every speech domain, and then I worked with developers to iterate on specific details.

The Outcome

There was a lot more to this project than what I put here. I created many UI flows and documented the entire design system for the new voice interface. I worked closely with our engineers who built the widgets and components to ensure they had the proper functionality. Then I collaborated with a variety of Tier 1 suppliers who built the screens and application flows for the different voice domains.

As part of its compliance with GDPR privacy laws, Daimler AG collects very little analytics about how people use the in-car infotainment system. This made it very difficult to learn how people use the system, but we did conduct many usability tests during the design phase of the project that indicated strong improvements and significant benefits to the user.

I also gave a talk about this project where you can learn a little more about how it all went!
