Smart Assistant / Conversational User Interface / Interaction Design / Motion
Currently, people mostly use voice assistants when they have specific commands. Even though voice itself is a very humane and organic channel of communication, interactions with voice assistants are mechanical, unnatural, and task-oriented. Our team believed that users would be more motivated to use smart assistants if those assistants were more responsive and personalized to their needs and interests. Such personalized assistants could also enrich the voice assistant experience far beyond simple commands.
Project Melo is a personal assistant that learns about you through your conversations with it. The more you share with your assistant, the more it can do for you.
First impressions matter. Project Melo interacts with you in an organic and natural way, similar to how people get to know someone for the first time. First, there is a short introduction about the assistant. Then, the assistant asks for your name, just as people do when they meet someone for the first time.
Based on how you interact with your assistant, it looks, talks, and behaves differently. The assistant develops its personality through its interactions with you.
You can set your assistant's name and appearance through your words. Then the assistant, rendered as a visual character, responds to you both visually and vocally, just as people communicate through non-verbal cues as well as words to convey what they mean. Over time, the assistant learns about you from how you talk in conversation and adapts itself to reflect what it knows about you. This personalization encourages attachment to your assistant.
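The initialization dialogue described above can be sketched as a short scripted flow. This is a minimal illustration, not the actual implementation; every step name and prompt here is hypothetical.

```python
# Hypothetical sketch of the onboarding dialogue: intro, ask the user's
# name, then let them name and style the assistant. Answers feed a profile.
def run_initialization(ask):
    """Drive the onboarding dialogue; `ask` poses a prompt and returns a reply."""
    profile = {}
    ask("Hi! I'm your new assistant. I learn about you as we talk.")
    profile["user_name"] = ask("What should I call you?")
    profile["assistant_name"] = ask(
        f"Nice to meet you, {profile['user_name']}! What would you like to name me?")
    profile["appearance"] = ask("How should I look? Describe me in a few words.")
    return profile

# Scripted replies stand in for a real voice session.
replies = iter([None, "Dana", "Melo", "a round blue blob"])
profile = run_initialization(lambda prompt: next(replies))
print(profile["assistant_name"])  # Melo
```

In a real session, `ask` would wrap speech synthesis and recognition; here a lambda over a fixed reply list keeps the flow testable.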
The more you share, the more your assistant can do for you. When people know each other better, they can give better suggestions. Based on what your assistant knows about you, it gives contextualized recommendations suited to your situation. For instance, instead of just reporting general weather information like a temperature, your personal assistant can analyze your photo album or social media and suggest what to wear on a rainy day.
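As a rough illustration of this kind of contextual recommendation, the sketch below maps a weather condition to wardrobe items the assistant has previously learned about. All data shapes and names here are assumptions for illustration; the real system would draw on richer personal context.

```python
# Hypothetical sketch: turn raw weather data into a contextual outfit
# suggestion instead of a bare temperature readout.
def suggest_outfit(weather, wardrobe):
    """Pick wardrobe items that match the current weather condition."""
    # Condition-to-tag mapping the assistant might infer from the
    # user's photos or posts (illustrative only).
    condition_tags = {
        "rain": {"raincoat", "boots", "umbrella"},
        "cold": {"coat", "scarf"},
        "sunny": {"sunglasses", "hat"},
    }
    wanted = condition_tags.get(weather["condition"], set())
    picks = [item for item in wardrobe if item["tag"] in wanted]
    if not picks:
        # Fall back to the generic report when nothing personal applies.
        return f"It's {weather['temp']}°C and {weather['condition']} today."
    names = ", ".join(item["name"] for item in picks)
    return (f"It's {weather['temp']}°C and {weather['condition']} - "
            f"maybe take your {names}.")

wardrobe = [
    {"name": "yellow raincoat", "tag": "raincoat"},
    {"name": "denim jacket", "tag": "jacket"},
]
print(suggest_outfit({"condition": "rain", "temp": 14}, wardrobe))
```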
Your personal assistant can pick up on your condition and emotional state through the way you talk: nuance, tone, connotation. It analyzes past conversations and your voice. You can also manually edit the assistant-generated text, or use the "I don't want to talk about it" button at the bottom right when you feel uncomfortable talking about certain issues.
In order to understand how current voice assistants are used, we individually used voice assistants (Google Assistant and Siri) for a week and logged our experiences in a diary.
We identified when we were motivated to use a voice assistant and discovered that our interactions with it felt very mechanical, with limited opportunities for personalization.
We conducted three rounds of conversations with strangers to investigate how people talk to each other when they meet for the first time. Then we held post-conversation interviews to learn when people felt comfortable or awkward, paying particular attention to the colloquial techniques we often overlook and that current voice assistants lack.
Using our drafted script, we conducted a role play over the phone, with one person acting as the smart assistant and the other as a user trying it for the first time. We wanted to learn whether our initial scenario effectively conveyed the personalization process, and to analyze user responses to the proposed assistant experiences.
Testing high-fidelity prototypes
Using the Wizard of Oz technique with our high-fidelity prototype, we conducted three rounds of testing to evaluate the success of our design.
The goal of user testing the initialization process was to verify whether the solution increases interest and motivation for first-time users. To measure the outcome, we instructed each participant to rate features from 1 to 5 after going through the entire initialization process using the Think Aloud method.
“I loved how it has a face. There is something I can talk to now”
“Being able to name the assistant is definitely a personal touch”
“I started to get into the conversation because there was enough hint to understand that it was responsive”
“Calling the name several times… felt like bringing this thing into existence”
→ The personalization process effectively triggered users’ attachment to the assistant
→ Interacting vocally and visually felt more conversational, and users started to engage more actively
What could be better
“By the end I kind of forgot what the point of this process was. It was clear that I could personalize it but I wasn’t sure what it can do”
“At the end of the conversation I would start interacting more to explore the options it suggested”
→ Learning what the assistant can do didn’t come across as clearly. If we had another round, we would add more steps letting users try out the assistant’s different features after the personalization phase