Week 3 – Critique, Rap-Proto and Outreach

[Production]

After last week's hardware testing and brainstorming on experience design ideas, we organized our ideas and capabilities into more concrete structures.

We presented our design proposals to the clients, gathered feedback, and mapped out our next steps for improvement.

Meanwhile, to get a real perspective from prospective users, we conducted a user interview with one of our clients' teaching assistants, who thought VR might be an innovative way for her and her peers to train themselves. We had her try selected prior products; while she was pleased with the virtual environment, she expected more interactivity and feedback throughout the process.

Next week we will continue analyzing examples of student presentations and work out the algorithm for providing a final reflection to the guest. Meanwhile, the team will explore real-time interaction with the virtual audience and the effectiveness of the user avatar.

 

[Tech]

This week, we:

  1. Ran a gold spike on potential devices
  2. Designed a system architecture
  3. Implemented a rough version of the event manager system

<prototype video>

Above is the prototype we have implemented so far as a gold spike. It has three features: voice recognition, head tracking, and body movement, built on the Watson API, the Oculus headset, and the Kinect, respectively.

We also designed a modular system architecture in which all components are independent, so more components can be added in future implementations.

Our focus is on making the system flexible and modular so it can be extended in future research work. We decided to create an event manager system to keep features independent of one another; it also helps with implementing real-time feedback.
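As a rough illustration of the idea (not our actual Unity code; the event names and callbacks below are hypothetical), an event manager that decouples feature modules might look like this minimal publish/subscribe sketch:

```python
from collections import defaultdict


class EventManager:
    """Minimal publish/subscribe hub: feature modules never call each
    other directly; they only publish and subscribe to named events."""

    def __init__(self):
        self._listeners = defaultdict(list)

    def subscribe(self, event_name, callback):
        # Register a callback to run whenever event_name is published.
        self._listeners[event_name].append(callback)

    def publish(self, event_name, payload=None):
        # Notify every subscriber of this event with the payload.
        for callback in self._listeners[event_name]:
            callback(payload)


# Hypothetical usage: the voice module publishes recognized text,
# and a feedback module listens without knowing who produced it.
events = EventManager()
log = []
events.subscribe("dictation", lambda text: log.append(text))
events.publish("dictation", "hello audience")
```

Because modules only share event names, a new feature (say, gaze tracking) can be added by subscribing to existing events without touching the other modules.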

One of our client's suggestions was to offer different modes before the actual experience, so we added a controller that lets the player select a mode (e.g., practice mode or challenge mode) and then loads the matching scene or swaps the modules/multipliers.
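A mode controller along these lines could be sketched as a simple lookup from the selected mode to its scene and multiplier settings (the mode names, scene names, and multiplier values here are illustrative assumptions, not our real configuration):

```python
# Hypothetical mode table: which scene to load and which score
# multiplier to apply for each selectable mode.
MODES = {
    "practice":  {"scene": "PracticeScene",  "multiplier": 1.0},
    "challenge": {"scene": "ChallengeScene", "multiplier": 1.5},
}


def select_mode(name):
    """Return the configuration for the chosen mode, or raise on an
    unknown mode name."""
    if name not in MODES:
        raise ValueError(f"unknown mode: {name}")
    return MODES[name]
```

In the Unity version this lookup would drive a scene load and module setup; keeping it data-driven makes adding a new mode a one-line change.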

So far, we have implemented the basic structure of the event manager system and are able to store dictation data locally as text. Next week, we plan to implement the entire system pipeline with the voice input module.
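The local text storage step can be sketched as appending each recognized utterance to a log file (a minimal stand-in for our Unity implementation; the function name and file layout are assumptions):

```python
def save_dictation(utterances, path):
    """Append recognized dictation lines to a local text file,
    one utterance per line."""
    with open(path, "a", encoding="utf-8") as f:
        for line in utterances:
            f.write(line + "\n")
```

Appending (rather than overwriting) keeps the whole session's transcript available for the final reflection step later in the pipeline.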

Meanwhile, two of our teammates, who are pursuing the rapid prototyping discipline, are working on their Round 0 and plan to present their output this coming Tuesday.