Week 5

Week 5 was an exciting and eventful week for Team SweetTalk: two prototypes, Quarters presentations, a decision on our direction, bananas, apples, and more!

During the first half of the week, we wrapped up our first formal sprint (for those unfamiliar with Agile, this just means a chunk of time—in this case a week—dedicated to reaching a certain milestone) with the testing of two prototypes that would both help us learn more about guest behaviors in our experience and showcase our progress for Quarters.  As a reminder from last week, our experience breaks down into three beats:

  1. Meet a character you don’t know and with whom you can’t fully communicate.
  2. Teach the character how to communicate with you.
  3. Use that relationship to achieve something together that you could not do on your own.

Digital Prototype: Object Description

Our digital prototype focused primarily on beats 1 & 2 but did have a simple goal at the end.  Using Windows Voice Assistant as a stand-in for Alexa, we created a demo that let guests teach our AI character object descriptors and then ask her to go get that object.  We also added a surprise moment at the end when the girl was in danger: we wanted to see whether guests would go with their gut reaction and yell “stop!” or be confused.  For this first, very basic demo, responses split about 50/50, which was great given that it was totally unclear what to do!
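For the curious, here’s a minimal sketch of that teach-then-fetch loop. This is purely illustrative, not our demo code: `recognize()` stands in for whatever speech-to-text layer is available (we used Windows voice recognition as an Alexa stand-in), and the hard-coded `pointed_at` object fakes what gaze or pointing would supply in VR.

```python
# Illustrative teach-then-fetch loop; in our demo, speech recognition and the
# 3D scene take the place of the text input and hard-coded values used here.

def recognize():
    """Stand-in for the speech-to-text layer: read one 'utterance' as text."""
    return input("> ").strip().lower()


taught_labels = {}  # guest's word -> object id in the scene


def handle_utterance(utterance, scene_objects):
    words = utterance.split()
    # Teaching: "this is <word>" said while indicating an object.
    if len(words) == 3 and words[:2] == ["this", "is"]:
        pointed_at = "apple"  # in VR this would come from gaze or pointing
        taught_labels[words[2]] = pointed_at
        return f"Learned: '{words[2]}' now means the {pointed_at}."
    # Fetching: "get <word>" works only with words the guest has taught.
    if len(words) == 2 and words[0] == "get":
        target = taught_labels.get(words[1])
        if target in scene_objects:
            return f"The character walks over and picks up the {target}."
        return "The character tilts her head, confused."  # untaught word
    return "The character looks at you blankly."


if __name__ == "__main__":
    scene = {"apple", "banana"}
    print("Say 'this is <word>' to teach, or 'get <word>' to fetch.")
    while True:
        print(handle_utterance(recognize(), scene))
```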

Take a look at our professor’s playthrough of the demo:

Paper Prototype: Collaboration

Our paper prototype used navigating a maze as a proxy for asymmetrical cooperation: our guest had a map of the maze our character was in and needed to get information from them to figure out where they were and which direction they should move.  Here’s the map:

Our first round featured me (Andrew) as the character, pretending not to understand English.  As a first step, the guest was asked to teach me everything they thought I’d need to know to get through the maze successfully, which involved both commands and descriptors. Our playtester taught me the following words:

  • Andrew
  • Dave (his name)
  • Blue
  • Green
  • Red
  • Yellow
  • Left
  • “Show me” (attempted, but he couldn’t figure out how to teach it)
  • Turn
  • Right
  • Circle
  • Good (as in praise)
  • Stay
  • Walk
  • Get

Unfortunately, I died a few times because of poor playtest design (it was hard to get me to stop in front of a door), but we still left with a lot of valuable information:

  1. We thought this would take 15 minutes. It took about 50.  Teaching even this basic vocabulary took quite a while!  This was mostly because, even though we stipulated that I had a perfect memory and was a fully verbal adult who just didn’t speak English, Dave’s instinct was to reinforce his lessons and test me at various points along the way.
  2. He spent a while discussing with us what words he should use to get through different situations and struggled with some harder concepts like “avoid” or “go around.”
  3. His approach also took a caveman/pet-training direction: mostly single-word interactions paired with consistent positive reinforcement.

With those design flaws in mind, our second iteration introduced stops halfway through corridors and eliminated the use of English!  Charlie had the inspired idea that we should try teaching me in a different language, so I learned how to navigate the maze in Japanese from our next tester.


  1. Her approach followed a similar pattern of reinforcement and testing, but she chose to speak to me in full sentences. I couldn’t pick up most of the words that way, but I actually did figure some out!  When asked why she spoke in full sentences, she said it was “because I wanted to be nice and be natural.”
  2. There was a big moment for her when she realized she could use the room around her to teach, something that bodes well for our use of VR in teaching. Initially, she tried to teach me “forward” by repeatedly moving a triangle towards me on the table.  But once she realized she could walk around, she started acting things out and picking objects up to make her lessons easier to understand and more dynamic.
  3. She taught me the following:
    • Walk
    • Left
    • Right
    • Stop
    • Door
    • Open
    • Blue
    • Red
    • Green
    • Yes
    • No


Quarters

We showed off all our work at Quarters, but the star of the show was definitely our digital demo.  Guests were really impressed overall, saying that we already had what seemed like a complete (if unpolished) experience and that we had tapped into something universal yet novel.  Of course, because it was a demo, we also got a lot of valuable feedback about the directions players would like to see our work take, namely:

  1. Establish a backstory or relationship so that guests know what to do. This could be done through environmental clues.
  2. Have the character speak first.
  3. Give players the ability to ask the character more questions.
    • Because she’s human, people expected to talk to her more and ask questions about what she can do and who she is.
    • Players wanted to see more feedback from her body.


You can see a full walkthrough of our Quarters setup here:


Idea Moving Forward

Given what we’ve explored to date, we’ve given ourselves a few constraints to design around:

  1. We don’t want to do a branched narrative; it creates too many dissatisfying moments.
  2. We don’t want to set the expectation of full conversation, i.e. our character shouldn’t be a fully verbal human, at least in our language. We can’t deliver on that promise given our scope, technology, and team structure.
  3. Interacting with a little girl provoked strong emotions that we really liked.
  4. It looks likely that we’ll have to design around Alexa’s relatively robotic voice, or at least something similar, in order to provide the freedom in teaching that we want. Recorded voice-over may be impossible given that we don’t want to restrict our dictionary: we want our guest to be able to call an apple an artichoke, call red “periwinkle,” and say that 2+2=5.  #alternativefacts (A rough sketch of what that open dictionary implies follows this list.)
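To make that last constraint concrete, here’s a tiny hypothetical sketch of what an unrestricted dictionary implies (the names `vocabulary`, `teach`, and `understand` are ours, purely for illustration): the character’s word list starts empty and binds any word to any concept, so nothing the guest invents is ever “wrong.”

```python
# Hypothetical sketch: a fully open vocabulary. The character starts with an
# empty word list, and whatever the guest says a thing is called is simply
# what it is called. No check against real English is ever made.

vocabulary = {}  # guest's word -> internal concept id


def teach(word, concept):
    """Bind any word the guest invents to a concept the character knows."""
    vocabulary[word.lower()] = concept


def understand(word):
    """Return the concept a word maps to, or None if it was never taught."""
    return vocabulary.get(word.lower())


teach("artichoke", "object:apple")  # the guest calls an apple an artichoke
teach("periwinkle", "color:red")    # the guest calls red periwinkle

assert understand("artichoke") == "object:apple"
assert understand("apple") is None  # "apple" means nothing unless taught
```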

We went through a quick round of brainstorming and have decided to go in the direction of a malfunctioning, robotic little girl whom we may have created and need to help.  Not too many details at this point, but we’ll be exploring this space over the next week.

Until then,

Team Sweet Talk
