Week 7

This week we made progress on getting the entire pipeline working, from 2-D videos to animation data, and on bringing that animation data into Unreal to create animations. We also finished implementing the core mechanics of our puzzle prototype and recorded videos of people performing movements to find the ideal conditions for shooting videos. On the design front, we settled on a design direction for our storytelling prototype and playtested it with our peers.

PROGRAMMING

Our machine learning engineer made progress on the Video-to-Animation pipeline. The machine learning algorithms involved are computationally heavy, so we had set up a local machine with a powerful GPU to speed up processing. However, even this setup did not produce acceptable results. For this reason, we decided to run the machine learning computations on a Google Cloud virtual machine. The results with the Google Cloud VM are much better.

We recorded videos of our peers performing movements in different conditions. We tested different lighting, different attire, and different types of movements, and gained valuable insights about the ideal conditions for shooting videos. The most important insights are:

  1. Badges or loose clothing (like skirts) that move around produce inaccurate results. The algorithm may mistake them for limbs and in turn produce incorrect orientations and inaccurate pose estimations.
  2. Shooting in poor lighting produces inaccurate results because pose estimation degrades.



We also made significant progress on our puzzle prototype. We now have a character that can move around in the world and perform four different actions: i) Jump, ii) Crouch, iii) Push, iv) Pull. Right now these actions are preloaded animations and the game does not take user-generated videos as input. That is fine at this stage, because we want this prototype to test the puzzle-solving aspect and give us insights about the user interface requirements. Building this prototype also surfaced a problem that may arise if we take the game further, related to root motion animations such as the jump. Such animations can be a problem for a game that takes videos of movements as input: for example, it is difficult to determine whether a recorded jump is high enough to clear an obstacle, which could frustrate players who feel they deserved to get past it. With this in mind we asked our client whether we could fake some of the interactions, like jumping, to make the game more forgiving, and our client agreed that faking such interactions to improve the player experience is acceptable. Here is a video of the puzzle prototype in its current state.
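To illustrate what "faking" an interaction like the jump could mean in practice, here is a minimal, hypothetical sketch in plain C++ (not code from our project): a jump extracted from a video is allowed to clear an obstacle as long as it comes within a leniency margin of the required height. The function name, parameters, and margin value are all made up for illustration.

```cpp
// Illustrative sketch only: none of these names exist in our project.
// The idea is to forgive jumps that come close to the required height,
// so players are not blocked by small differences in their recorded movement.
#include <iostream>

bool ShouldClearObstacle(float extractedJumpHeight, float obstacleHeight)
{
    const float kLeniency = 0.15f; // forgive jumps within 15% of the required height
    return extractedJumpHeight >= obstacleHeight * (1.0f - kLeniency);
}

int main()
{
    // A jump measured at 90 cm is allowed to clear a 100 cm obstacle.
    std::cout << std::boolalpha << ShouldClearObstacle(90.0f, 100.0f) << "\n"; // prints: true
}
```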

After the Motion Reconstruction phase of the ML pipeline is completed, it produces animation data in the form of a text file. This text file is similar to a .bvh file; it consists of two parts: 1) skeleton joint information and 2) the orientation of those joints for each frame. In order to bring this animation data into Unreal Engine and recreate the animation, our programmer had to write a custom file reader. Here’s an example:
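The sketch below is illustrative only, not our actual reader: it assumes a simple two-section layout (a JOINTS block describing the skeleton, followed by a FRAMES block with one line of per-joint rotations per frame), and it parses into plain C++ structures rather than Unreal's animation types. The section names and field layout are assumptions made for the example.

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical layout of one skeleton joint. The real file format
// and field names produced by our pipeline may differ.
struct Joint
{
    std::string Name;
    int ParentIndex = -1;                       // -1 for the root joint
    float OffsetX = 0, OffsetY = 0, OffsetZ = 0;
};

struct Frame
{
    // One Euler rotation (in degrees) per joint, in joint order.
    std::vector<float> RotX, RotY, RotZ;
};

// Minimal sketch of a reader for a .bvh-like text file:
// a "JOINTS" section listing the skeleton, then a "FRAMES" section with
// one line of per-joint rotations per frame. In the actual project this
// data would be converted into Unreal animation assets; here we only parse it.
bool ReadAnimationFile(const std::string& Path,
                       std::vector<Joint>& OutJoints,
                       std::vector<Frame>& OutFrames)
{
    std::ifstream File(Path);
    if (!File.is_open())
        return false;

    std::string Line;
    bool InFrames = false;
    while (std::getline(File, Line))
    {
        if (Line == "JOINTS") { InFrames = false; continue; }
        if (Line == "FRAMES") { InFrames = true;  continue; }
        if (Line.empty())     { continue; }

        std::istringstream Stream(Line);
        if (!InFrames)
        {
            // Skeleton section: name, parent index, and offset from the parent.
            Joint J;
            Stream >> J.Name >> J.ParentIndex >> J.OffsetX >> J.OffsetY >> J.OffsetZ;
            OutJoints.push_back(J);
        }
        else
        {
            // Frame section: a rotation triple for every joint on one line.
            Frame F;
            float X, Y, Z;
            while (Stream >> X >> Y >> Z)
            {
                F.RotX.push_back(X);
                F.RotY.push_back(Y);
                F.RotZ.push_back(Z);
            }
            OutFrames.push_back(F);
        }
    }
    return true;
}
```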

DESIGN

Our design team worked on creating the storytelling experience and ran a physical exercise to playtest it with other project teams. The experience was heavily inspired by Mad Libs, and we got some good feedback from the playtests. The best part is that people enjoyed recording videos of their movements, and their friends enjoyed watching them perform funny movements. This addresses one of our major design challenges: player motivation to record and upload videos. The result of the experience was unexpectedly funny, and the participants liked the pay-off at the end.