This week was all about getting the person in VR in our asymmetrical co-op prototype from last week to see the people around the display. The previous prototype was more about getting the two groups of players talking, but only one group could see the other. What if they could BOTH see each other?
Last week we had the idea of putting a 360 camera on top of the display, running that feed into the VR skybox, and BAM! Better than FaceTime, right? So we decided to try it.
Implementing the 360 camera technology ended up being pretty straightforward. We were able to have the person in VR see everyone surrounding the camera (including themselves if they were in view of the camera – it was very trippy).
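For anyone curious what the skybox plumbing involves: many engines (Unity included) can use an equirectangular camera frame directly as a panoramic skybox, but if a cubemap is needed, each face can be resampled from the equirectangular frame. Here's a minimal numpy sketch of that resampling, not our actual implementation (the function name and nearest-neighbour sampling are just for illustration):

```python
import numpy as np

def equirect_to_face(equi, face_size):
    """Sample the front (+Z) cubemap face from an equirectangular image.
    equi: H x W x 3 array, where W = 2H (full 360 x 180 panorama)."""
    h, w = equi.shape[:2]
    # Grid of face coordinates in [-1, 1]
    u = np.linspace(-1, 1, face_size)
    v = np.linspace(-1, 1, face_size)
    uu, vv = np.meshgrid(u, v)
    # Direction vectors for each pixel on the front face (z = 1 plane)
    x, y, z = uu, -vv, np.ones_like(uu)
    lon = np.arctan2(x, z)               # longitude in [-pi, pi]
    lat = np.arctan2(y, np.hypot(x, z))  # latitude in [-pi/2, pi/2]
    # Map angles to source pixel coordinates (nearest neighbour)
    px = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    py = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return equi[py, px]
```

The other five faces only differ in how the direction vector is built from (uu, vv). In practice the engine or GPU does all of this for you; the sketch is just to show the geometry behind it.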
The next step was to combine it with our previous Mission Control prototype, adding that extra communication element to help the puzzle go faster. The 'Mission Control' side (the Voxon display) could physically point at different objects or paths they wanted the VR player to take, rather than worrying about giving vague directions.
This worked out really well and was a great morale boost for our team, but there are still a few questions for when we actually get the machine, as well as some learnings.
- What is the best way to represent the person in VR on the Voxon display?
- Is the full 360 camera view ideal, or should parts of it be obscured by, say, walls with windows to see through?
- How noticeable is the slight lag between people moving around the display and the skybox updating?
- Would post-processing the 360 stream for distortion or filter effects add to the experience?
Our learnings:

- Physical props can obscure the bottom part of the 360 feed; otherwise the virtual assets won't mesh with the room environment.
- Alternatively, we can theme it narratively: the tiny VR person is surrounded by fruit flies, or is climbing a small plant, with the 360 camera placed inside a real clay planter.
- Windows instead of lights in the cylinder would let us obscure parts of the 360 footage.
- Body language over the 360 feed is super important for communication between players in an asymmetric format.
- Can we add a Kinect to map their movements to animation on the display?
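If we do try the Kinect idea, the core of it is just a coordinate mapping: tracked joint positions in the sensor's space get remapped into the display's draw volume. A hedged numpy sketch, where the play-area bounds and the normalised [0, 1]^3 target volume are assumptions on our part (the actual Voxon SDK would consume whatever coordinates come out of this):

```python
import numpy as np

def to_display_space(joints, room_min, room_max):
    """Map tracked joint positions (metres, sensor space) into a
    normalised [0, 1]^3 volume, e.g. a volumetric display's draw space.
    joints: (N, 3) array of joint positions.
    room_min / room_max: corners of the tracked play area in metres."""
    joints = np.asarray(joints, dtype=float)
    room_min = np.asarray(room_min, dtype=float)
    span = np.asarray(room_max, dtype=float) - room_min
    # Normalise into the unit cube and clamp anything outside the play area
    return np.clip((joints - room_min) / span, 0.0, 1.0)
```

A joint at the centre of a 2 m x 2 m x 4 m play area would land at (0.5, 0.5, 0.5) in the display volume, so the tiny figure stays centred on the display as the player moves around the room.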