Toyota YUI Project

How we create a beloved car for the future
Project Overview
Toyota’s Yui project is a futuristic riding experience that shows the world how Toyota imagines the future of advanced mobility and works to make “Mobility for All” a reality.

The project consists of two major parts, Yui and LQ. Yui is an onboard artificial-intelligence agent paired with LQ, a concept vehicle that combines L4 autonomous driving capability with many other technologies. Both Yui and LQ are designed under the principle of “learn, grow, love” to build an emotional bond between car and user.
My Contribution
—    Helped the UX team define the UX requirements and multimodal development for the voice agent
—    Worked closely with content writers to translate scenarios into user flows
—    Partnered with developers to implement code and develop voice prototypes
—    Led the development, QA testing, and usability testing of one of the demos
Goals
Develop well-rounded voice prototypes and deliver fully functional demos before April 2020.

*The project was originally planned for the 2020 Tokyo Olympics, the Japan Automobile Manufacturers Association (JAMA), and, if possible, the 2022 Beijing Winter Olympics. It has been delayed due to safety concerns. My UX deliverables are not open to the public yet.

Timeline
Dec 2019 - May 2020
My Roles
Interaction Designer, Voice Prototyper, Localization QA Tester
Tools
Miro, IntelliJ, Git, Dialogflow, G Suite
Team
Members: Project Managers, UX Researcher, Interaction Designers, GUI Designer, AR Designer, Sound Designer, UX Writers, Developers, QA, Software Engineers, and many other contributors (who left before I joined the team)
Yui GUI Display
Background Info
Our project is derived from an idea in which drivers can expand their experiences and range of activities with new technologies, such that they call these vehicles the new generation's “beloved cars.”

— Daisuke Ido, LQ Development Leader
Why Yui

LQ is the next generation of TMC’s Concept-i vehicle, which was first unveiled at CES 2017. It’s equipped with a Toyota-developed SAE Level 4 equivalent automated-driving function, an Automated Valet Parking (AVP) system, an AR HUD system, and Advanced Seating with awakening and relaxing functions. More importantly, to create an intuitive and personalized interaction between the car and the user, a voice assistant (VUI) was developed to support the human driver.

High-level Design Principle

Yui means “結” in kanji, to tie or link two things together. As a powerful voice-based in-car AI agent, Yui is designed to learn from the rider through ongoing human-computer interaction and deliver a personalized mobility experience. The design idea is that Yui learns the unique traits of each person and uses that personal data to provide information reflecting individual needs. The interaction is primarily voice-forward, given the unique in-car scenario where minimal distraction is essential for safety. At the same time, other modalities such as interior LED lighting, floral fragrance, an in-seat massage function, and a visual display were also implemented over the course of development.
‍
More importantly, we decided that Yui's image should be that of a friend rather than a robot.

Other Considerations

Toyota officially announced the demonstration in October 2019 under the name “Toyota Yui Project Tours 2020.” The time, place, and logistical arrangements had been decided before the announcement and shaped the UX development that followed.

Route
The tour starts from Mega Web in Odaiba and follows a pre-set route coordinated with the transportation department to meet safety requirements. As a high-tech entertainment hub, the Odaiba area is filled with sightseeing spots and shopping centers, which gave the UX team plenty of stories to draw on when designing the GPS-triggered conversation.
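To make the idea of a GPS-triggered conversation concrete, here is a minimal geofence check in Python. It is an illustrative sketch only: the landmark coordinates, radii, and prompt IDs are hypothetical placeholders, not the project's actual trigger points.

```
import math

# Hypothetical landmarks along the Odaiba route; coordinates, radii, and
# prompt IDs are placeholders for illustration, not the project's real data.
LANDMARKS = [
    {"name": "Rainbow Bridge", "lat": 35.6365, "lon": 139.7632,
     "radius_m": 400, "prompt": "rainbow_bridge_intro"},
    {"name": "Odaiba Seaside Park", "lat": 35.6300, "lon": 139.7758,
     "radius_m": 300, "prompt": "seaside_park_intro"},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in meters (haversine formula)."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_triggers(lat, lon, already_played):
    """Return prompts whose geofence the car is inside and that haven't played yet."""
    due = []
    for lm in LANDMARKS:
        if lm["prompt"] in already_played:
            continue
        if distance_m(lat, lon, lm["lat"], lm["lon"]) <= lm["radius_m"]:
            due.append(lm["prompt"])
    return due

# Example: a GPS fix right next to the Rainbow Bridge triggers its intro prompt.
print(check_triggers(35.6368, 139.7635, set()))  # ['rainbow_bridge_intro']
```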

Date & Deadline
The public test-drive event was originally scheduled to run from June to September 2020 on open roads in Odaiba, Tokyo. At that point, the UX development deadline was April 2020.

Schedule & Logistics
‍
The public would have a chance to interact with both Yui and LQ and to experience autonomous driving in a designated area. The tour starts when LQ connects with a dedicated app called “My Yui,” which holds each participant’s personal information. Several Toyota representatives coordinate the event, and a safety driver accompanies guests in the car along the route.

Route Map
Multimodal Interactions

Multimodal interaction is not a new idea; most people are already used to combining a visual modality with a voice modality. The in-car infotainment system is one of the most common applications of multimodal interaction design, making use of several input and/or output modalities, including visual, audio, and haptic interaction.

We synergize and capitalize on the strengths of different modalities to create an intuitive and holistic car experience: an immersive experience filled with things people can see, hear, and feel. Making it all come together in a short period of time requires an enormous amount of UX effort. The UX portion aims to enrich the car experience and ensure driving safety without being overwhelming. The multimodal interactions are deployed across a diverse range of media.

In-car Modality

Sight
• Yui display: Visually shows Yui's different emotions and states, such as being imaginative, eager, talking, or awaiting input.

• In-vehicle lights: Embedded lighting strips in the roof and floor areas use dynamic illumination for human-machine interaction, including a light-color change between manual and autonomous driving modes. The footwells also light up to indicate which passenger Yui is talking to.

Hearing
• Yui speech: Yui leads the conversation with passengers and provides personally relevant information based on their input.

• Music playlist: In collaboration with the digital music service provider, pre-set persona-based music playlists are triggered across scenarios.

• Sound effects: Environment-based sound effects serve as ambient sound to fill the gaps between speech.

Smell
• Fragrance emitting: The fragrance-emitting system releases relaxing and refreshing scents when passengers feel tired or worried.

Touch
• In-seat massage: The seating function provides a natural, gentle back massage to keep passengers awake and alert.

Conversation Design
Scenario & Persona

Designing a 30-minute car ride is much like designing a show: it has a beginning, a middle, and a finale. Starting from onboarding, Yui guides passengers through the functionality of the car, introduces landmarks along the road, and leads the conversation until the end. The UX team revised the order and content of the scenarios over several rounds, but the structure remained the same. In addition, to create a more personalized voice-based experience, we also came up with three personas based on different user input. Each time users need to make a choice or give an answer, their response leads them down a path that offers persona-based content. The complexity of the scenarios and personas required a lot of effort to clear things up with both writers and developers, and this is where UX Designers come into play.
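As a rough illustration of how an answer can route a rider onto a persona-based path, here is a toy Python sketch. The persona labels, answer phrases, and lines of dialogue below are invented for illustration; the project's actual personas and scripts are not public.

```
from collections import Counter

# Hypothetical mapping from answer choices to persona labels, for illustration only.
ANSWER_TO_PERSONA = {
    "show me the sights": "explorer",
    "i just want to unwind": "relaxer",
    "any good restaurants nearby?": "foodie",
}

# Each persona gets its own variant of the same conversational beat.
LANDMARK_CONTENT = {
    "explorer": "Up ahead is the Rainbow Bridge. Want to hear how it lights up at night?",
    "relaxer": "I'll dim the cabin lights a little. Just sit back and enjoy the bay view.",
    "foodie": "There's a ramen street inside that mall on the right. Shall I save it for later?",
}

def pick_persona(answers, default="explorer"):
    """Tally which persona the rider's answers point to and return the most frequent one."""
    votes = Counter(ANSWER_TO_PERSONA[a] for a in answers if a in ANSWER_TO_PERSONA)
    return votes.most_common(1)[0][0] if votes else default

persona = pick_persona(["any good restaurants nearby?"])
print(LANDMARK_CONTENT[persona])  # prints the foodie variant
```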

Flowchart Development

Partnering with creative writers and other UX Designers, I translated part of the stories into flowcharts to make sure the logic behind the conversation was complete for further development. This kind of visual documentation also helped me a lot when communicating with the whole team: whenever I found an incomplete path or a wrong condition, a flowchart was always a good starting point and reference. Below is a basic flowchart with a legend that I created as an example. We also used the flowcharts for code implementation and QA testing later on.

Scenario Flowchart (Blurred for NDA)
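Since the flowcharts also fed code implementation and QA testing, the same logic can be expressed as plain data and checked automatically for exactly the problems mentioned above, such as incomplete paths and dangling conditions. Below is a minimal Python sketch with hypothetical node and condition names, not the actual flow, which is under NDA.

```
# A toy conversation flow as data: node -> {condition: next node}.
# Node and condition names are hypothetical, for illustration only.
FLOW = {
    "onboarding":      {"app_connected": "greeting"},
    "greeting":        {"yes": "landmark_intro", "no": "smalltalk"},
    "smalltalk":       {"done": "landmark_intro"},
    "landmark_intro":  {"interested": "landmark_detail", "not_interested": "music_offer"},
    "landmark_detail": {"done": "music_offer"},
    "music_offer":     {"yes": "farewell", "no": "farewell"},
    "farewell":        {},  # terminal node
}

def validate(flow, start="onboarding", terminals=("farewell",)):
    """Flag dangling targets and dead-end nodes so the flowchart and code stay in sync."""
    problems = []
    if start not in flow:
        problems.append(f"start node '{start}' is missing")
    for node, edges in flow.items():
        for condition, target in edges.items():
            if target not in flow:
                problems.append(f"{node} --{condition}--> {target}: target node does not exist")
        if not edges and node not in terminals:
            problems.append(f"{node}: dead end (no outgoing conditions)")
    return problems

print(validate(FLOW) or "flow is complete")
```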
Chinese Demo Localization

When designing voice-forward systems, making Yui talk in a natural, human way is our priority. As research shows that female voices can increase acceptance and trust, the text-to-speech (TTS) uses a young woman's voice across all three languages. Some high-level rules are set to create a friendly and approachable image for the voice agent. The English content is created by the UX writers first; the Japanese and Chinese scripts are then developed from the English. Vendors take care of the basic translation, and I created a language-specific guideline to help the translators understand the overall context, Chinese linguistic details, and the technical limitations of TTS.

When it comes to localization and speech tuning, rephrasing and polishing the scripts takes most of the time. There are a few reasons for this. One is that, without the relevant background, the quality of the translation only meets the basic requirements. Another is the ephemeral nature of oral speech: listeners can't consume long sentences that carry too many messages, and content that exceeds the capacity of short-term memory causes information overload and frustrates people. Additionally, hearing the TTS read a script aloud is very different from reading and hearing it yourself. So the typical workflow is to mark up the content with Speech Synthesis Markup Language (SSML), feed it into the tuning tool piece by piece, and then play each sentence several times, either adjusting SSML tags (for example, adding <break>) or restructuring the whole sentence (for example, breaking big chunks into short ones).
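Here is a minimal sketch of that tuning loop, assuming a Google Cloud Text-to-Speech-style client for the synthesis step. The project's actual TTS engine and tuning tool aren't named here, and the Chinese lines, voice settings, and file names are placeholders for illustration.

```
# Sketch of the piece-by-piece SSML tuning loop described above.
# Assumes the google-cloud-texttospeech package and credentials are set up;
# the real tuning tool and TTS engine on the project may differ.
from google.cloud import texttospeech

# One long landmark description restructured into two short chunks with explicit pauses.
SSML_LINES = [
    '<speak>前方就是台场海滨公园，<break time="300ms"/>从这里可以看到彩虹大桥。</speak>',
    # "Odaiba Seaside Park is just ahead; you can see the Rainbow Bridge from here."
    '<speak>如果您有兴趣，<break time="300ms"/>待会儿我可以再多介绍一些它的历史。</speak>',
    # "If you're interested, I can tell you more about its history in a bit."
]

def synthesize(ssml: str, out_path: str) -> None:
    """Send one marked-up sentence to the TTS engine and save the audio for review."""
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(ssml=ssml),
        voice=texttospeech.VoiceSelectionParams(
            language_code="cmn-CN",  # Mandarin Chinese
            ssml_gender=texttospeech.SsmlVoiceGender.FEMALE,  # young female voice, per the design rule
        ),
        audio_config=texttospeech.AudioConfig(audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)

if __name__ == "__main__":
    # Tune piece by piece: listen to each clip, then adjust the <break> tags or
    # restructure the sentence and regenerate until it sounds natural.
    for i, line in enumerate(SSML_LINES):
        synthesize(line, f"line_{i}.mp3")
```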

QA Testing & Evaluation

Because of limited resources on the project, I also played a QA role and did the Chinese scenario testing. Running scenarios one by one, or all together, is a different story from just listening to each sentence. The whole Yui tour is led by a voice agent feeding information along the way, and given the linear and ephemeral nature of conversation, salient pieces of information should be given upfront and repeated later if necessary. The balance between the delivery of messages and the flow of conversation is another key point in building a natural VUI.

What I've Learned

A few reflections on my working experience with the project:
‍
Ask as many questions as I can
Don't be afraid to ask questions about things you don't know. It's not that you're junior or inexperienced; it's almost impossible for any one person to know everything about the project. The best practice for me is identifying the right person and getting help from them. With a "get things done" mindset, most people are willing to help you out.

Learn how your coworkers work
Knowing how others work and interacting with them accordingly makes collaboration quicker and more efficient. Some people get to the point with just a few words, while others prefer a detailed doc. I learned by watching people at work during the first few months, before the office was locked down. Then the situation changed rapidly and everything moved online.

Take initiative and ownership
Sometimes responsibilities are not clearly defined, and someone needs to step across the blurred line and get things done. For example, there was no designated QA role on either the TRI or TEMC side for the Chinese demo, so I took responsibility for the test runs.
