Campfire’s spent the last two weeks in story development, creating scratch demos of our story experience and finalizing our plans for our project deliverables. We conducted two playtests in Week 10 using a story demo (recorded with scratch audio from our classmates). In Week 11 we took our feedback from Playtest Day and set out to narrow our focus for the semester, as well as revise the story of our main experience. Along with our regular faculty meeting, we had a “table read” meeting where Ralph and Scott gave us feedback on the latest version of the script, plus a story-centered meeting with Anthony Daniels. Based on those conversations, we made another revised scratch demo that we’ll use for testing in Week 12 before our final voice recording.
For our project deliverable, we’ve decided as a group to focus the story on the “open ended” interaction, and provide short demos to showcase the other interactions we’ve developed. What that means for our project delivery is the following:
- Navigation to demos (showcasing our Navigation interaction)
- Main experience demo (Story + Open-Ended interactions) – more polish than the other demos (voice acting, SFX, music)
- Short demo showcasing Template answer interaction
- Short demo showcasing Pattern-based Voice Recognition
- Short demo showcasing Asynchronous Messaging between Echo devices
In addition to our demo package, we will be documenting our interaction development, design decisions, and discoveries in a report that we hope will help future developers create new and natural ways to interact with Smart Assistant devices, along with our design discoveries for audio-only experiences.
Below you’ll find a list of the events with our insights, as well as some of our planned work moving forward.
Small Emotion Playtest (Thursday at HCI Playtest Lab) – 4 testers
- Test used 1 dialog line, read with different inflections / emotions
- Discovered that strong emotional responses prompt more user engagement (i.e., the more rude or outrageous the recording was, the more likely the user was to respond)
- Nuanced emotional readings were difficult for listeners to gauge
ETC Playtest Day (Saturday at ETC) – 17 testers
- Story length should be 3-5 minutes, not our previous 7-10 minutes
- Any period of more than 15 seconds without interaction loses user engagement
- The story topic needs to be pertinent to the user
- The “Alexa” wake word interrupts the skill and skips through the experience
- Open-Ended interaction moment worked for most users
- We also tested the pattern-based voice interaction – users responded positively to a shift in content
- The potential to “learn” a new language (in pattern format) felt fun, as did the exploration
- Users didn’t need a “ramping up” of interactions – best to jump right in
- Past experience with the Echo didn’t noticeably affect the results
- Recorded with scratch recording from classmates
- Initial structure tested at Playtest Day had combination of passive listening and interaction
- Launched to Echo via editor tool and server
- Further exploration with sound effects, integrating different voice acting as well as other “robotic” voices to sit with Alexa in the experience
Editor Tool Development
- Editor tool revised to include favorites functionality, sound files, intent training, and story branching / fail-case mapping
- Able to export directly from tool for live testing
- Some content bugs led to crashing, later corrected in subsequent updates
- Through testing, discovered that the 90-second “limitation” has a workaround (separating distinct sound files from the editor node), but this ended up being a moot point because user attention waned significantly after 15 seconds of passive listening
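To illustrate that workaround, here's a rough sketch (hypothetical code with assumed limits, not our editor's actual implementation): a long narration is split into shorter hosted clips, each played via its own SSML `<audio>` tag, with longer sequences spread across consecutive responses.

```python
# Illustrative sketch of the workaround (not our actual editor code).
# CLIP_CAP and MAX_TAGS are assumptions for this example, not
# confirmed Alexa limits.

CLIP_CAP = 90   # seconds per <audio> clip (the "limitation" above)
MAX_TAGS = 5    # assumed cap on <audio> tags per response

def build_ssml(clip_urls):
    """Wrap each hosted clip in an <audio> tag inside one <speak> body."""
    tags = "".join(f'<audio src="{url}"/>' for url in clip_urls)
    return f"<speak>{tags}</speak>"

def chunk_responses(clip_urls, per_response=MAX_TAGS):
    """Split a long clip list into consecutive SSML responses."""
    return [build_ssml(clip_urls[i:i + per_response])
            for i in range(0, len(clip_urls), per_response)]
```

The same idea applies however the clips are produced: as long as no single file exceeds the per-clip cap, playback can be stitched together from separate nodes.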
Asynchronous Messaging Development
- Basic functionality of messaging implemented
- Result: user dictates message, and recipient details. Retrieved message read by Alexa.
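As a rough sketch of that flow (purely illustrative, not our server code), a minimal in-memory version might look like this:

```python
from collections import defaultdict

# Minimal in-memory sketch of the messaging flow described above
# (illustrative only; our actual messaging runs through our server
# and the Echo). Class and method names are hypothetical.
class MessageStore:
    def __init__(self):
        self._inbox = defaultdict(list)  # recipient -> [(sender, text)]

    def dictate(self, sender, recipient, text):
        """User dictates a message along with recipient details."""
        self._inbox[recipient].append((sender, text))

    def retrieve(self, recipient):
        """Lines Alexa would read back; retrieval clears the inbox."""
        messages = self._inbox.pop(recipient, [])
        return [f"Message from {sender}: {text}" for sender, text in messages]
```

In the real skill, the dictation and recipient details come from voice input rather than function arguments, but the store-and-read-back shape is the same.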
- Based on feedback from the playtest day, we shortened the structure to create a 3-5 minute experience, with interactions from the start
- Based on conversations with Anthony Daniels, renewed focus on interactions balanced with performance
- Story read-through with Ralph and Scott provided good feedback on tightening up the dialog and structure, as well as how to frame both the open-ended and template-based interactions
- Further editor tool development, with added functionality for “open-ended” training (query intent), version-controlled upload and download of files, bug fixes, UI adjustments, and “bubble set” arrays that allow for varied outputs (so for fail cases, we can cycle through different reactions)
- Updated the asynchronous messaging system to store multiple messages from a single user, and to let you add sound emojis to the end of your messages
- Seth reached out to contacts in the main campus drama department and received a connection to voice acting talent from the CMU acting class
- We’ll be reaching out to voice actors next week; we have 9 potential candidates
- Attended a live Q&A for technical discussions and development
- Roy participated, and his notes are here
- One notable discovery: we’ve been having challenges with the Echo shutting off mid-skill. Since we were able to verify that the server wasn’t the source of the issue, the Alexa dev team suggested it might be caused by an inconsistent internet connection.
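As a rough illustration of the “bubble set” arrays from the editor work above, here's a minimal sketch of cycling through varied fail-case reactions (names and structure are hypothetical, not our editor's actual implementation):

```python
import itertools

# Illustrative sketch of the "bubble set" idea: a fail case maps to an
# array of reactions, and repeated failures cycle through them instead
# of repeating the same line. Class and method names are hypothetical.
class BubbleSet:
    def __init__(self, reactions):
        self._cycle = itertools.cycle(reactions)

    def next_reaction(self):
        """Return the next reaction, wrapping back to the start."""
        return next(self._cycle)
```

The point is simply that a user who misses a prompt twice hears two different reactions, which keeps the experience from feeling canned.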
Looking ahead, we’re recording a revised scratch demo for further story testing, and plan to coordinate with voice actors by the end of the week.