Week 4 – Design, Prototyping, Branding and Documentation

[Production]

At the beginning of this week, we discussed the product user stories together and created a chart listing all of the backlog stories. Then, based on our capacity and scope, we set priorities and assigned this week's scrum tasks to each team member.

On Tuesday, the team went to the Facebook Social VR event and talked to Dan about avatar aesthetics and functionality in interactive VR scenarios. By comparing the pros and cons of each candidate hardware device and plug-in (Kinect, Leap Motion, MixCast, ZED camera), as well as assessing their input and output data, we decided which ones to adopt and which to drop.

Each member then worked on distinct tasks:

Yein worked on implementing a database and prioritized features such as slide uploading, and on testing eye gaze and avatar motion rigging;

Magian worked on completing the Round 0 practice to learn the Unity interface, and helped with prototype implementation;

Makar worked on designing branding materials; hosting team, client, and advisor meetings; and scheduling and documentation;

Rocky worked on researching virtual character solutions and developing a character generation pipeline;

Howard researched and tested gesture recognition, and brainstormed on virtual environment interaction.

 

[Tech]

prototype video

Over this week, we implemented the following features:

  • PDF renderer

Users should be able to practice their presentations with their own slides. We implemented a function that reads a user-uploaded PDF file, converts each page to a PNG file, compiles the PNGs into 2D textures, and applies those textures to the material of the in-scene screen. The user can switch between pages with the arrow keys.
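The page-switching step of this pipeline can be sketched as a small Unity component. This is a minimal illustration, assuming the PDF pages have already been converted and loaded as `Texture2D` assets; the class and field names are hypothetical, not our exact implementation.

```csharp
using UnityEngine;

// Displays converted slide pages on a screen object and lets the
// user flip through them with the arrow keys.
public class SlideScreen : MonoBehaviour
{
    public Texture2D[] pageTextures;   // one texture per converted PDF page
    private int currentPage = 0;
    private Renderer screenRenderer;

    void Start()
    {
        screenRenderer = GetComponent<Renderer>();
        ShowPage(0);
    }

    void Update()
    {
        // Arrow keys move between slides, clamped to the valid range.
        if (Input.GetKeyDown(KeyCode.RightArrow)) ShowPage(currentPage + 1);
        if (Input.GetKeyDown(KeyCode.LeftArrow))  ShowPage(currentPage - 1);
    }

    void ShowPage(int index)
    {
        if (index < 0 || index >= pageTextures.Length) return;
        currentPage = index;
        screenRenderer.material.mainTexture = pageTextures[currentPage];
    }
}
```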

  • Video recorder from the audience perspective

One of the deliverables we want to produce after the experience is a video recorded from the audience's perspective. We implemented this using Rock VR Video: recording starts when the user presses the start button, and it automatically stops and saves the video in MP4 format when the final scene is loaded.
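The start/stop wiring around the plug-in follows a simple pattern. The sketch below uses a hypothetical `IRecorder` wrapper in place of Rock VR Video's actual capture component (whose API we do not reproduce here); the scene name and all member names are assumptions for illustration.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical wrapper around the capture plug-in.
public interface IRecorder
{
    void StartCapture();
    void StopCaptureAndSave();   // saves the session as an .mp4
}

public class RecordingController : MonoBehaviour
{
    public MonoBehaviour recorderComponent;  // capture component, assigned in the Inspector
    private IRecorder recorder;

    void Awake()
    {
        recorder = recorderComponent as IRecorder;
        SceneManager.sceneLoaded += OnSceneLoaded;
    }

    // Hooked up to the start button's onClick event.
    public void OnStartButtonPressed()
    {
        recorder.StartCapture();
    }

    void OnSceneLoaded(Scene scene, LoadSceneMode mode)
    {
        // Stop and save automatically when the final scene loads.
        if (scene.name == "Result")
            recorder.StopCaptureAndSave();
    }
}
```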

  • Result-writer

We split the experience into three scenes: Lobby, Main, and Result, and added a feature that calls all of the writer functions when the Result scene is loaded. So far, we have a dictation writer that transcribes everything the user says during the experience, and a video writer that saves the recording of the user's performance.
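The "run all writers on Result" step maps naturally onto Unity's `SceneManager.sceneLoaded` callback. A minimal sketch, assuming hypothetical `DictationWriter` and `VideoWriter` components sit on the same GameObject:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Fires every writer once the Result scene has loaded.
public class ResultWriterRunner : MonoBehaviour
{
    void OnEnable()  { SceneManager.sceneLoaded += OnSceneLoaded; }
    void OnDisable() { SceneManager.sceneLoaded -= OnSceneLoaded; }

    void OnSceneLoaded(Scene scene, LoadSceneMode mode)
    {
        if (scene.name != "Result") return;
        GetComponent<DictationWriter>().Write();  // transcript of the user's speech
        GetComponent<VideoWriter>().Write();      // audience-perspective recording
    }
}
```

Registering in `OnEnable` and unregistering in `OnDisable` keeps the callback from firing on a destroyed object after scene changes.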

  • Classroom environment

We created a classroom with a virtual audience (without any animations yet) in order to establish the "look and feel" of the environment.

 

One of the challenges we are facing is Kinect IK: when the Kinect loses tracking, the avatar's movement becomes erratic. Next week, we will try implementing MixCast with the ZED camera instead.