3-Person Blocking for a 2-Character Story

Motion capture for a video game cutscene is typically shot like film, with the camera in mind; in VR, however, the player is also the cinematographer. This meant the blocking of the scene had to account for where the player might be standing at any point in time. Although our story has only two characters in the room, it was directed like a three-person scene for theater-in-the-round. Every node in the story had to create a stage picture that implied where the player would feel most comfortable standing. Whenever a character moved to a new position in the space, they telegraphed their intention verbally and physically before doing so, which let the player adjust in anticipation. Overall, we found this a highly effective method for staging a VR scene in which players and characters share the same space: players rarely clipped into characters or walked through them unintentionally.
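
To make the telegraphing pattern concrete, here is a minimal sketch of it outside any engine. Everything in it, from the class name to the lead time, is an illustrative assumption rather than our actual implementation; the point is simply that a character announces a move, then holds still long enough for the player to adjust.

```python
from dataclasses import dataclass

TELEGRAPH_SECONDS = 1.5  # assumed lead time before the character walks


@dataclass
class CharacterMover:
    name: str
    position: tuple = (0.0, 0.0)
    destination: tuple | None = None
    telegraph_left: float = 0.0

    def request_move(self, destination, verbal_cue):
        """Begin telegraphing: face the destination and speak before moving."""
        self.destination = destination
        self.telegraph_left = TELEGRAPH_SECONDS
        print(f"{self.name} turns toward {destination}: '{verbal_cue}'")

    def update(self, dt):
        if self.destination is None:
            return
        if self.telegraph_left > 0.0:
            self.telegraph_left -= dt  # hold position while the player adjusts
            return
        # Telegraph finished: actually move (teleported here for brevity).
        print(f"{self.name} walks to {self.destination}")
        self.position, self.destination = self.destination, None


mover = CharacterMover("Brad")
mover.request_move((2.0, 1.0), "Let me grab something from the desk.")
for _ in range(4):
    mover.update(0.5)  # four half-second ticks: telegraph, then walk
```

In the real scene the hold was filled by the verbal and physical cue itself, but the structure is the same: intention first, motion second.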

Communicating Movement Intention in Both Directions: Live VR Puppet Playtesting

We wanted a way to test these instincts before the motion capture shoot, because we knew the blocking would be difficult to change once the performances were filmed. This led us to the idea of a live VR puppet playtest.

This build sat halfway between a brown-box live playtest and a fully realized VR playtest: players stood in VR while two members of our team acted out the story wearing motion trackers. Their live performances drove rough characters in the game that the VR player could see. This let us try out our blocking and iterate on it rapidly across tests without worrying about the technology or recorded motion capture data.
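
For the curious, a rough sketch of that pipeline might look like the following. The read_trackers function is a hypothetical stand-in for whatever tracker SDK supplies poses; the essential move is just copying live joint positions onto a rough in-game character every frame, with light smoothing to hide jitter.

```python
import math

TRACKED_JOINTS = ("head", "left_hand", "right_hand", "hips")
SMOOTHING = 0.5  # exponential smoothing factor to soften tracker jitter


def read_trackers(t):
    """Stand-in for a real tracker SDK: returns one pose per tracked joint."""
    sway = 0.1 * math.sin(t)  # fake a gently swaying actor
    return {joint: (sway, 1.0 + 0.05 * i, 0.0)
            for i, joint in enumerate(TRACKED_JOINTS)}


def drive_puppet(character, t):
    """Copy the live tracker poses onto the rough in-game character."""
    for joint, target in read_trackers(t).items():
        old = character.get(joint, target)
        character[joint] = tuple(o + SMOOTHING * (p - o)
                                 for o, p in zip(old, target))


character = {}
t = 0.0
for _ in range(3):  # three simulated frames at a typical 90 Hz
    drive_puppet(character, t)
    t += 1 / 90
print(character["head"])
```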

This playtest taught us two very interesting things, one immediately and one that only became apparent after the full-VR version was completed. The first was that people responded very viscerally to a digital character puppeted by a real human; shaking hands with the figure, in particular, was almost unsettling. This could be a fruitful place for further exploration.

The second discovery was that the puppet playtest actually implemented our blocking better than the final version, despite being far less refined. Our hypothesis is that this is because the communication about movement went in both directions: in the puppet build, the characters could react to the player's position because actual humans were controlling them. In the final product, by contrast, the characters could telegraph their movements to the player, but the player's movements could not influence the characters.

Melissa (left) and Brad (right) suited up for playtesting

For future exploration, we think it would be interesting to make the characters respond to the player's position in the same way we telegraphed the characters' positions to the player. One possibility would be to motion capture only the upper half of each character and animate the lower body dynamically. That way the characters could reposition around the room in response to the player while still emoting from the actors' performances.
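
As a hedged sketch of that hybrid, assuming a simplified 2D floor plan: recorded mocap supplies the upper-body joints as offsets from the character's root, while the root itself is steered procedurally to keep a comfortable distance from wherever the player stands. All names and numbers below are illustrative assumptions, not a shipped implementation.

```python
import math

COMFORT_DISTANCE = 1.2  # metres the character tries to keep from the player
ROOT_SPEED = 0.8        # metres per second for the procedural locomotion


def steer_root(root, player, dt):
    """Step the character's root toward its comfort distance from the player."""
    dx, dy = root[0] - player[0], root[1] - player[1]
    dist = math.hypot(dx, dy) or 1e-6
    error = COMFORT_DISTANCE - dist  # positive: too close, so step away
    step = max(-ROOT_SPEED * dt, min(ROOT_SPEED * dt, error))
    return (root[0] + step * dx / dist, root[1] + step * dy / dist)


def pose_upper_body(root, mocap_offsets):
    """Re-root the recorded upper-body performance at the procedural root."""
    return {joint: (root[0] + ox, root[1] + oy)
            for joint, (ox, oy) in mocap_offsets.items()}


root, player = (0.0, 0.0), (0.5, 0.0)  # player standing uncomfortably close
mocap_frame = {"chest": (0.0, 0.0), "head": (0.0, 0.1)}
for _ in range(90):  # one second of frames at 90 Hz
    root = steer_root(root, player, dt=1 / 90)
print(root, pose_upper_body(root, mocap_frame))
```

The same loop could also feed back into the telegraphing described earlier, so that a character announces a repositioning that the player's own movement provoked.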