Week 4 – Design, Prototyping, Branding and Documentation

[Production]

At the beginning of this week, we discussed the product user stories together and created a chart listing all the backlog stories. Then, based on our capacity and scope, we set priorities and assigned this week's scrum tasks to each team member.

On Tuesday the team went to the Facebook Social VR event and talked to Dan about avatar aesthetics and functionality in interactive VR scenarios. By comparing the pros and cons of each candidate hardware device and plug-in (Kinect, Leap Motion, MixCast, ZED camera), along with an assessment of their input/output data, we selected some and ruled out others.

Each member then worked on distinct tasks. In general:

Yein worked on implementing a database, on prioritized features such as slide uploading, and on testing eye gaze and avatar motion rigging;

Magian worked on completing the Round 0 exercise to learn the Unity interface and helped with prototype implementation;

Makar worked on branding material design, hosting team, client and advisor meetings, scheduling, and documentation;

Rocky worked on researching virtual character solutions and developing the character generation pipeline;

Howard did gesture recognition research and testing, and brainstormed virtual environment interaction.

 

[Tech]

<prototype video>

Over this week, we implemented the following features:

  • PDF renderer

People should be able to practice their presentation with their own slides. We implemented a function that reads the user-uploaded PDF file, converts each page of the PDF to a PNG file, loads the PNG files as 2D textures, and applies the textures to the material of the screen. The user can switch between pages using the arrow keys.
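As a rough illustration of the idea (a sketch rather than our exact code), a Unity script along these lines could drive the slide screen, assuming each PDF page has already been exported to a PNG file by an external converter; the SlideScreen class, folder name, and page ordering are illustrative assumptions:

```csharp
using System.IO;
using System.Linq;
using UnityEngine;

// Illustrative sketch only: assumes each PDF page has already been exported
// as a PNG file into a known folder by an external converter.
public class SlideScreen : MonoBehaviour
{
    public string slideFolder = "Slides";  // assumed folder under persistentDataPath
    private Texture2D[] pages;
    private int current;
    private Renderer screen;

    void Start()
    {
        screen = GetComponent<Renderer>();
        string dir = Path.Combine(Application.persistentDataPath, slideFolder);

        // Load each exported PNG page as a 2D texture, in page order.
        pages = Directory.GetFiles(dir, "*.png")
                         .OrderBy(p => p)
                         .Select(LoadPng)
                         .ToArray();
        ShowPage(0);
    }

    static Texture2D LoadPng(string path)
    {
        var tex = new Texture2D(2, 2);           // size is overwritten by LoadImage
        tex.LoadImage(File.ReadAllBytes(path));  // decode the PNG bytes into the texture
        return tex;
    }

    void Update()
    {
        // Arrow keys switch between slides.
        if (Input.GetKeyDown(KeyCode.RightArrow)) ShowPage(current + 1);
        if (Input.GetKeyDown(KeyCode.LeftArrow))  ShowPage(current - 1);
    }

    void ShowPage(int index)
    {
        if (pages == null || pages.Length == 0) return;
        current = Mathf.Clamp(index, 0, pages.Length - 1);
        screen.material.mainTexture = pages[current];  // apply the page texture to the screen material
    }
}
```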

  • Video recorder from the audience perspective

One of the results we want to deliver after the experience is a video recorded from the audience's perspective. We implemented this function using RockVR Video. Recording starts when the user presses the start button, and it automatically stops and saves the video in MP4 format when the final scene is loaded.
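The control flow is roughly as in the sketch below; the actual capture calls go through the RockVR Video asset, which is only stubbed out in comments here, and the AudienceRecorder class name is an assumption:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Flow sketch only: the real capture calls go through the RockVR Video asset,
// which is represented here by comments rather than its actual API.
public class AudienceRecorder : MonoBehaviour
{
    private bool recording;

    void OnEnable()  { SceneManager.sceneLoaded += OnSceneLoaded; }
    void OnDisable() { SceneManager.sceneLoaded -= OnSceneLoaded; }

    // Wired to the start button in the Lobby scene.
    public void OnStartButtonPressed()
    {
        recording = true;
        // RockVR: start capturing from the audience-perspective camera here.
    }

    private void OnSceneLoaded(Scene scene, LoadSceneMode mode)
    {
        // When the final (Result) scene loads, stop capturing and save the MP4.
        if (recording && scene.name == "Result")
        {
            recording = false;
            // RockVR: stop the capture; the asset writes the MP4 file to disk.
        }
    }
}
```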

  • Result-writer

We split the experience into three scenes: Lobby, Main, and Result. All writer functions are called when the Result scene is loaded. So far we have a dictation writer, which saves everything the user says during the experience, and a video writer, which records the user's performance.
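The idea can be sketched as a small registry in Unity; the ResultWriters and DictationWriter names and the output file name are illustrative assumptions, not our exact code:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the writer idea: modules register a callback while the experience
// runs, and every registered writer fires once the Result scene is loaded.
public static class ResultWriters
{
    private static readonly List<Action> writers = new List<Action>();

    static ResultWriters()
    {
        SceneManager.sceneLoaded += (scene, mode) =>
        {
            if (scene.name != "Result") return;
            foreach (Action write in writers) write();  // run every writer on the Result scene
        };
    }

    public static void Register(Action writer) { writers.Add(writer); }
}

// Example writer: the dictation writer dumps everything the user said.
public class DictationWriter : MonoBehaviour
{
    private readonly List<string> transcript = new List<string>();

    void Start()
    {
        ResultWriters.Register(() =>
            File.WriteAllLines(
                Path.Combine(Application.persistentDataPath, "dictation.txt"),
                transcript.ToArray()));
    }

    // Called by the speech-to-text module whenever a phrase is recognized.
    public void OnPhraseRecognized(string phrase) { transcript.Add(phrase); }
}
```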

  • Classroom environment

We created a classroom with a virtual audience (without any animations yet) in order to get a “look and feel”.

 

One of the challenges we are facing is Kinect IK: when it loses tracking, the avatar's movement becomes erratic. Next week, we will try implementing MixCast using the ZED camera.

 

Week 3 – Critique, Rapid Proto and Outreach

[Production]

After last week's hardware testing and brainstorming on experience design ideas, we organized our ideas and capabilities into a more structured plan.

We presented our design proposals to the clients, got feedback, and sorted out our next path for improvement.

Meanwhile, in order to get a real perspective from prospective users, we did a user interview with one of our clients' teaching assistants, who thought VR might be an innovative way for her and her peers to train themselves. We had her try selected prior products; while she was pleased with the virtual environment, she expected more interactivity and feedback throughout the process.

Next week we will continue analyzing student presentation examples and work out the algorithm for providing final reflection to the guest. In parallel, the team will explore real-time interaction with the virtual audience and the effectiveness of the user avatar.

 

[Tech]

This week, we:

  1. Did a gold spike for potential devices
  2. Designed a system architecture
  3. Implemented a rough version of the event manager system

<prototype video>

Above is the prototype that we have implemented so far as a gold spike. The prototype has three features: voice recognition, head tracking, and body movement, built with the Watson API, the Oculus headset, and the Kinect respectively.

We also designed a modular system architecture in which all components are independent, allowing us to add more components for future implementation.

Our focus is making the system flexible and modular so it can be extended in future research work. We decided to create an event manager system in order to keep features independent; it is also helpful for implementing real-time feedback.
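Conceptually, the event manager boils down to a publish/subscribe registry along these lines (a simplified sketch; the class and event names are assumptions, not our actual implementation):

```csharp
using System;
using System.Collections.Generic;

// Rough sketch of the idea: feature modules publish and subscribe to named
// events without referencing each other, so each feature stays independent
// and real-time feedback can simply listen in.
public static class EventManager
{
    private static readonly Dictionary<string, Action<object>> listeners =
        new Dictionary<string, Action<object>>();

    public static void Subscribe(string eventName, Action<object> handler)
    {
        if (listeners.ContainsKey(eventName)) listeners[eventName] += handler;
        else listeners[eventName] = handler;
    }

    public static void Publish(string eventName, object payload = null)
    {
        Action<object> handler;
        if (listeners.TryGetValue(eventName, out handler) && handler != null)
            handler(payload);
    }
}

// Example usage: the voice module publishes recognized phrases, and both the
// real-time feedback module and the dictation writer listen independently.
//   EventManager.Subscribe("PhraseRecognized", p => feedback.Show((string)p));
//   EventManager.Publish("PhraseRecognized", "Hello everyone");
```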

One of our clients' suggestions was to offer different modes before the actual experience, so we added a controller that lets the player select a mode (e.g. practice mode or challenge mode) and then loads the matching scene or adjusts the modules/multipliers.
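A simplified sketch of that controller might look like this, with the mode names, scene name, and multiplier values as illustrative assumptions:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the mode controller; modes map to a scene plus a feedback
// multiplier that the other modules read.
public class ModeController : MonoBehaviour
{
    public enum Mode { Practice, Challenge }

    public static float FeedbackMultiplier = 1f;  // read by the feedback modules

    // Called by the lobby UI once the player picks a mode.
    public void SelectMode(Mode mode)
    {
        if (mode == Mode.Practice)
        {
            FeedbackMultiplier = 0.5f;            // gentler feedback in practice mode
        }
        else
        {
            FeedbackMultiplier = 1.5f;            // stricter feedback in challenge mode
        }
        SceneManager.LoadScene("Main");           // load the matching scene
    }
}
```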

So far, we have implemented a basic structure of the event manager system and are able to store dictation data locally as text. For next week, we are planning to implement the entire system pipeline with the voice input module.

Meanwhile, two of our teammates, who are pursuing the rapid prototyping track, are working on their Round 0 and planning to present their output this coming Tuesday.

Week 2 – Responsibilities, Outsourcing and Project Scope

This was the second week of our project.

[Production]

At the beginning of this week, we created a RACI chart to separate our fundamental tasks into categories, mainly technical development and user experience design modules, and to assign responsibilities.

On one hand, we participated in the ETC playtest workshop, brainstormed user experience goals, and compared possible features.

On the other hand, we started learning Unity and creating small prototypes of public speaking assessment features such as eye gaze.

We discussed with our clients the rubric for assessing public speaking performance, and benchmarked more peer products on VR devices. We decided that our deliverables would be a playable prototype and a website for user data visualization.

At the end of the week, we were fortunate to meet with LP Morency, a machine learning expert at the CMU School of Computer Science, from whom we learned some potential methods for applying the technology to our user experience design.

 

[Tech]

Since the experience should also be usable as a research tool, we designed our code structure in two big parts: a research tool and a VR experience. In terms of evaluation, we selected three components whose data can be gathered and quantified: hand gestures with Leap Motion, head motion tracking with the Oculus headset, and voice recognition with the Google speech-to-text API.

This week, we developed a rough prototype that gathers and stores head motion tracking data. We are also looking into voice recognition APIs and trying out Leap Motion in order to understand the technology.
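As a rough sketch of what such a logger can look like in Unity (the class name, sample format, and output file are assumptions, not our exact prototype):

```csharp
using System.Collections.Generic;
using System.IO;
using UnityEngine;

// Minimal sketch of a head-tracking logger, assuming the Oculus headset
// drives the main camera transform.
public class HeadMotionLogger : MonoBehaviour
{
    public Transform head;  // e.g. the VR camera transform
    private readonly List<string> samples = new List<string>();

    void Update()
    {
        // One rotation sample per frame: timestamp, yaw, pitch, roll.
        Vector3 e = head.rotation.eulerAngles;
        samples.Add(string.Format("{0:F3},{1:F1},{2:F1},{3:F1}", Time.time, e.y, e.x, e.z));
    }

    void OnDestroy()
    {
        // Dump the session to a CSV file for the research tool to analyze later.
        File.WriteAllLines(
            Path.Combine(Application.persistentDataPath, "head_motion.csv"),
            samples.ToArray());
    }
}
```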

 

Week 1 – Research, Brainstorming and Client meeting

This was the first week of our semester-long project.

[Production]

We met with our project clients: Kim Hyatt, Associate Teaching Professor at Heinz College, CMU, and Dave Culyba, Assistant Teaching Professor at the ETC.

We went over the general goals and requests of the project, along with a Q&A about some concepts of the technology and experience. After the meeting, we received research materials such as machine learning papers from them, and started reading, benchmarking prior products, and extracting valuable parts.

Meanwhile, we set up weekly meeting times with our clients and advisors, as well as our core hours in the ETC project room.

On the technical side, we set up our computers and requested testing devices such as Oculus Touch, Leap Motion, etc.

Internally, the team met to discuss our capabilities and contribution goals for the project, and then set up a basic structure for team organization.