Weeks 1 & 2: 1/17-1/28: Project Campfire!

In weeks 1 and 2, the project formerly known as “Bard” renamed itself Campfire and jumped into building both a technological foundation for our planned interactions and an exploration of audio-focused interactions.  Roy and Phan focused on technology development, while Seth and Sarabeth dove into design.

Roy and Phan brainstorm an interaction foundation

On the technology side, Roy and Phan started by putting together a high-level architecture for how the AI would accept user input and output a response.  An initial encoding phase takes the input and extracts keywords and intent from it.  A hidden phase then takes these values from the encoding phase and generates answer keywords and intent from the story, passing them on to a decoding phase.  The decoding phase drives both a query-based response system and dynamically generated responses from a neural network, so we can test the effectiveness of each.
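To make the three phases concrete, here is a minimal sketch of the pipeline in plain Python.  The keyword set, intent labels, and the toy “story” entries are all hypothetical stand-ins for illustration; in the real system the hidden phase would be learned rather than a simple lookup.

```python
import string

def encode(user_input):
    """Encoding phase: extract keywords and a coarse intent from raw input."""
    words = [w.strip(string.punctuation) for w in user_input.lower().split()]
    # Hypothetical keyword vocabulary for the football-recap scenario.
    keywords = [w for w in words if w in {"score", "brady", "touchdown", "steelers"}]
    intent = "stat_lookup" if "score" in keywords else "discussion"
    return keywords, intent

def hidden(keywords, intent, story):
    """Hidden phase: map input keywords/intent to answer keywords from the story."""
    answer_keywords = [k for k in story if any(w in k for w in keywords)]
    return answer_keywords, intent

def decode(answer_keywords, intent, story):
    """Decoding phase: a query-based canned response; a neural generator
    would be the dynamically generated alternative being compared."""
    if not answer_keywords:
        return "I'm not sure -- tell me more about the game."
    return story[answer_keywords[0]]

# Toy story knowledge base (illustrative entries for the recap scenario).
story = {
    "final_score": "The Patriots beat the Steelers 36-17.",
    "brady_stats": "Tom Brady threw for three touchdowns.",
}

kws, intent = encode("What was the final score?")
answer_kws, intent = hidden(kws, intent, story)
response = decode(answer_kws, intent, story)
print(response)
```

The query-based branch shown here is the simpler of the two decoding strategies; swapping `decode` for a neural generator is what would let us compare the two approaches side by side.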

In addition, Roy tested out various machine learning libraries and APIs: Theano, TensorFlow, Keras, and Chainer.  He and Phan decided to move ahead with TensorFlow + Keras.  Roy also spent time this week reading papers and learning more about the capabilities of RNNs and LSTMs for building language models, and tested out a simple LSTM that generates text from keyword inputs.
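As a rough illustration of the kind of model involved, the sketch below defines a small keyword-conditioned LSTM in Keras.  The vocabulary size, sequence length, and layer widths are arbitrary toy values, not the ones Roy used; the point is the shape of the setup: token ids in, a probability distribution over the next word out, which can be sampled repeatedly to generate text.

```python
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 50   # hypothetical toy vocabulary
SEQ_LEN = 8       # keyword/context window fed to the model

# Embedding -> LSTM -> softmax over the vocabulary.
model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 16),
    layers.LSTM(32),                                  # recurrent language-model core
    layers.Dense(VOCAB_SIZE, activation="softmax"),   # next-word distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Given a sequence of keyword token ids, predict a distribution over
# the next word; in a trained model, sampling from it generates text.
keywords = np.random.randint(0, VOCAB_SIZE, size=(1, SEQ_LEN))
probs = model.predict(keywords, verbose=0)
print(probs.shape)  # one distribution over VOCAB_SIZE words
```

Untrained, this model outputs near-uniform probabilities; training on a corpus of (keyword sequence, next word) pairs is what turns it into the text generator described above.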

Prototyping the necessary information for the first playtest

On the design side, Phan developed a “Wizard of Oz” Alexa setup so Sarabeth and Seth could quickly prototype interaction cases and test initial designs.  Seth and Sarabeth spent the first week investigating story types, examples, and scenarios, e.g. short stories, poems, fables, news, politics, and sports.  They settled on starting with a recap interaction of the Pittsburgh Steelers’ AFC Championship game against the New England Patriots, which tragically (for Steelers Nation) ended in defeat.  Their strategy for week 2 involved creating a library of information to pull from during a test with a few football fans in the ETC building.  The goal of this first playtest was to see how open-ended questions, option-based questions, prompts, responses, suggestions, trivia, and fail cases affect the flow of conversation.  Though we ultimately want to develop an experience around a linear story, we started with a sports topic to begin identifying common threads in audio interactions.

From our first playtest (thank you Cody (Pats fan), Mike (Bills fan), and Erika (not a fan)!), we have several takeaways:

  • Users expect Alexa to know and readily answer stat- or fact-heavy questions
    • For this interaction, we need a way to lengthen the conversation past the transaction
  • Surprising moments that play off Alexa’s dry tone, or the contradiction of unexpected fact retrieval, give the most rewarding moments of humor
    • “Tom Brady is the GOAT” – Alexa retrieves a commentator’s opinion and delivers it in the same tone as a “fact”
    • Asked about the Steelers’ hotel fire alarm incident, Alexa responds with a quote from the perpetrator: “I got drunk and did something stupid.” (not immediately clear at the start of the sentence – the surprise lands with the connection at the end)
    • Below are some clips from our playtest:


  • Canned jokes are not as effective as the examples above because they are not based on humor from the interaction itself
  • Silences are awkward, especially in a group; maybe not a problem solo
  • Clips are too long and need to be shorter (max 15-20 sec)
  • Context of the setup creates a certain mood
  • Connection based on a “shared understanding” – what “baggage” does the user bring to the interaction, and how might this affect the experience?
    • e.g. a venting session vs. a celebration of the game
  • Found that questions like “who were you rooting for” allowed Alexa the opportunity to join a side – this creates a connection with the user and allows for more curated conversation
    • This goes into “branching” territory

After going through our notes from the playtest, Phan noted that we should focus on reducing the complexity and vastness of possible answers by creating an environment where Alexa may teach the guest how to better interact with her, integrated into the main experience.  From there, we would expand the experience, because our goal is to make the interaction with users as natural as possible.

  • For our purposes, begin by teaching computer-friendly keywords that identify tasks and simplify/specify interactions.
  • Once we understand the process, begin nuancing the language to be more human-friendly.

In addition to the playtest, Seth spent time investigating other story-based Alexa “skills” currently available (Ear Play, Wayne Investigations, The Magic Door).  He also started researching what kinds of audio we might integrate, in particular thinking about the kind of delay that will exist between the guest’s input and Alexa’s response – what audio can gracefully fill this space?

For project production needs, Campfire began developing branding concepts for next week’s branding walkarounds, as well as team processes (core hours, Scrum, stretch goals).  Sarabeth attended the Playtesting for Explore workshop earlier this past week, and shared with her team the new project requirements for 1/4s, which included brainstorming techniques, composition box, and strategies for initial testing.  Campfire worked to fill out our project Metrics Matrix, keeping in mind our project goals of design discovery as well as technical prototyping.  

Looking ahead this week, the team plans to define our next prototyping goals (both design and technical) as well as complete our branding materials.