Week 2: Effective Planning and Prototyping

One of the key foundations of any successful project is a plan that accounts for surprises. For Project Hindsight, that plan also has to accommodate an experience with unique facets, such as merging live-action footage with realistic interaction with CG objects.

The challenge with this type of deliverable is that it is neither a film nor a game, so even the pipelines and workflow have to be built specifically for this project, marrying best practices from both worlds to inform our planning decisions.

Our Approach

Our goal is to develop rapid prototypes on a weekly to bi-weekly schedule, letting us explore different challenges and possible solutions. These prototypes will be self-contained, each with its own test goals and challenges we want to overcome before our principal photography phase later in March.

In this blog post, I will delve deeper into our first prototype, with a technical analysis by Sunil Nayak, Project Hindsight’s programmer and sound designer.

Human Tripod Prototype One, titled “Igor”

Week 2 Prototype: Lessons Learned

The goals for week 2 were as follows:

  1. Develop a VR environment from a 360 video and place 3D models (cans of soda) in it so they appear to be part of the environment.
  2. Add spatial sound.
  3. [Stretch Goal] Oculus + Touch integration.

We learned a lot while attempting this. Lighting was the biggest issue: the lights were always either too bright for the video (but still not bright enough for the 3D models) or the opposite, dim enough for the video but far too dark for the 3D models. Shadows are necessary, but in the first case they washed out (the lights were too bright), and in the second everything was too dark. We also tried a directional light to light up the video sphere, but the results were poor.

Fix for the directional light
We switched the video sphere to an unlit material that still receives shadows, so no lights are needed to illuminate the sphere at all; lights are only needed to cast the shadows.

Fix for the lighting and shadows of the 3D models
Separate the scene into layers: light the 3D elements on a layer of their own (and nothing else) so they are as bright as they need to be. On another layer, light both the video sphere and the 3D models, but only barely, so that shadows show up without the video becoming too bright.
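A minimal sketch of that two-layer split using Unity light culling masks is below. The layer names ("Props", "VideoSphere"), field names, and intensity value are our own assumptions for illustration, not the exact setup in the prototype.

```csharp
using UnityEngine;

// Sketch of the layer-based lighting split described above.
// "Props" holds only the 3D cans; the video sphere sits on "VideoSphere".
public class LayeredLightingSetup : MonoBehaviour
{
    public Light propsKeyLight;   // bright light, affects the props only
    public Light sharedFillLight; // dim light, affects props + sphere so shadows register

    void Start()
    {
        int propsLayer = LayerMask.GetMask("Props");
        int bothLayers = LayerMask.GetMask("Props", "VideoSphere");

        propsKeyLight.cullingMask = propsLayer;   // full brightness on the cans only
        sharedFillLight.cullingMask = bothLayers; // just enough to cast and catch shadows
        sharedFillLight.intensity = 0.2f;         // keep the 360 video from blowing out
    }
}
```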

We also use a script to switch scene lights on when the corresponding lights switch on in the video, so the shadows appear when they should rather than all at once at the start. A LightManager script was developed to handle these time-based video events.
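Here is a rough sketch of the idea behind that script, assuming the 360 video is driven by Unity's VideoPlayer. The class and field names are illustrative; this is not the actual LightManager code.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: enable scene lights when the matching practical lights
// come on in the 360 video, keyed off the video playback time.
public class LightManagerSketch : MonoBehaviour
{
    [System.Serializable]
    public struct TimedLight
    {
        public float triggerTime; // seconds into the video
        public Light sceneLight;  // Unity light matching the in-video light
    }

    public VideoPlayer video;
    public TimedLight[] cues;

    void Start()
    {
        foreach (var cue in cues)
            cue.sceneLight.enabled = false; // nothing lit until the video says so
    }

    void Update()
    {
        float t = (float)video.time;
        foreach (var cue in cues)
        {
            if (!cue.sceneLight.enabled && t >= cue.triggerTime)
                cue.sceneLight.enabled = true;
        }
    }
}
```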

Spatial audio issues: There was no way to bring 4-channel audio straight into Unity and make it spatial, so the four channels were split into four separate mono tracks, four speaker objects were placed at the quadrants around the listener, and each track was assigned to one of them. Rotating the whole speaker rig is then the fastest way to tweak the direction of the sound. As a bonus, mono tracks such as the music only need to be placed on the main player game object’s audio source.
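A sketch of that quadrant-speaker workaround is below, assuming the 4-channel mix has already been exported as four mono AudioClips. The clip names, radius, and quadrant positions are illustrative assumptions.

```csharp
using UnityEngine;

// Sketch: four mono clips, one AudioSource per quadrant, all parented to this
// object so rotating it re-aims the whole sound field at once.
public class QuadrantSpeakers : MonoBehaviour
{
    public AudioClip frontLeft, frontRight, backLeft, backRight;
    public float radius = 2f;

    void Start()
    {
        CreateSpeaker("FL", frontLeft,  new Vector3(-1, 0,  1));
        CreateSpeaker("FR", frontRight, new Vector3( 1, 0,  1));
        CreateSpeaker("BL", backLeft,   new Vector3(-1, 0, -1));
        CreateSpeaker("BR", backRight,  new Vector3( 1, 0, -1));
    }

    void CreateSpeaker(string label, AudioClip clip, Vector3 direction)
    {
        var speaker = new GameObject("Speaker_" + label);
        speaker.transform.SetParent(transform, false);
        speaker.transform.localPosition = direction.normalized * radius;

        var source = speaker.AddComponent<AudioSource>();
        source.clip = clip;
        source.spatialBlend = 1f; // fully 3D so panning follows the speaker position
        source.loop = true;
        source.Play();
    }
}
```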

Oculus Issues: Turning positional tracking off on the Headset meant that the Touch controllers weren’t being tracked either. Positional tracking is currently on now, and we are attempting to find a fix for this, as 6DOF in such a space can be fatal. Also, the sphere was too far away for touching anything in the video, so the size of the sphere needed to be fixed in order to achieve whatever was needed from the Oculus and the Touch. Also, due to positional tracking being enabled, the Oculus now has to be at a particular distance from the sensors for the prototype to work.