Embedding Choice into the Environment

In traditional flat-screen variants of this genre, designers have created many kinds of user interface for building choice into their narratives. These designs almost exclusively rely on the ability to cleanly and easily overlay text or images over the action.

One common tactic is the dialogue tree, in which the player chooses among a set of responses for a character to use in a conversation. These options are overlaid either near the character’s head or in a fixed place on the screen, completely divorced from the action of the story.

A dialogue tree from Tales from the Borderlands

Another common implementation is the QTE (quick-time event). Here, the player is asked to quickly press a button, or choose among several button options, to command a character to complete an action. These prompts often appear during moments of high intensity.

A third way of creating choice is to let players use in-game character actions to manipulate the world of the story. This could look like a character choosing to put something in their pocket, take a pill, or write a letter. These differ from QTEs in that they are not prompted by an on-screen button; instead, they are available at the player’s discretion.

All of these options share two design features that do not translate well to virtual reality. First, they all rely on some kind of text or image overlay, which breaks immersion in VR. Second, they all have the player directly controlling the character’s actions. Controlling another being is challenging in virtual reality because of the player’s sense of self.

Manipulating the world in Heavy Rain

We chose to have players branch the narrative by affecting objects in the scene’s environment rather than by directly manipulating the actions of characters. To this end, we filled our room with five key objects that change the story, each with two possible states:

  1. Cabinet Padlock (Locked, Unlocked)
  2. Phone Line (Connected, Disconnected)
  3. Safe Handle (Intact, Broken)
  4. Window Blinds (Open, Closed)
  5. Gun (Loaded, Unloaded)

When a character needs to use one of these objects, the story diverges depending on how the object is set. For example, at one point a character attempts to make a phone call. If the player has disconnected the phone line before this moment, the character cannot make the call. Every one of these objects is available to interact with from the beginning of the scene. The player can toggle each object until the character needs to use it, at which point it locks in and can no longer be changed.
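As a rough illustration of this lock-in behavior, here is a minimal sketch in Python. It is not our actual implementation (which lived in the game engine), and names like StoryObject, toggle, and lock_in are purely illustrative:

```python
from enum import Enum

class PhoneLineState(Enum):
    CONNECTED = "connected"
    DISCONNECTED = "disconnected"

class StoryObject:
    """A two-state prop the player can toggle until the story locks it in."""
    def __init__(self, name, initial_state):
        self.name = name
        self.state = initial_state
        self.locked = False

    def toggle(self, new_state):
        # Player interaction: ignored once the object has been used by a character.
        if not self.locked:
            self.state = new_state

    def lock_in(self):
        # Called when a character is about to use the object; the current
        # state becomes permanent and determines the branch.
        self.locked = True
        return self.state

# Example: the phone-call beat described above.
phone_line = StoryObject("Phone Line", PhoneLineState.CONNECTED)
phone_line.toggle(PhoneLineState.DISCONNECTED)  # the player unplugs the line early on

if phone_line.lock_in() is PhoneLineState.CONNECTED:
    print("The character completes the call.")
else:
    print("The line is dead; the scene branches.")
```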

Pacing Choices

We tried two different versions of our interaction pacing. One version had every object available from the start of the scene (as described above). The other version spaced the interactions out across the narrative, one at a time: when one object locked in, the next would become interactable and would remain so until it, too, locked in.
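To make the difference between the two builds concrete, here is a hedged sketch of how such a gate might be expressed, building on the hypothetical StoryObject above; PacingController and its mode strings are illustrative, not our actual engine code:

```python
class PacingController:
    """Decides which story objects the player may currently toggle."""

    def __init__(self, objects_in_story_order, mode="all_at_once"):
        self.objects = objects_in_story_order  # StoryObject instances, in narrative order
        self.mode = mode                       # "all_at_once" or "one_at_a_time"

    def interactable(self):
        # Objects that have not yet been locked in by the narrative.
        pending = [obj for obj in self.objects if not obj.locked]
        if self.mode == "all_at_once":
            return pending      # every remaining object can be toggled
        return pending[:1]      # only the next object in story order is live
```

In the all-at-once build, interactable() returns every object that has not yet locked in; in the one-at-a-time build it returns only the next pending object, which opens up the moment its predecessor locks in.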

Players who experienced the “all-at-once” build of the game often noted that they struggled to feel the impact of their decisions. For instance, they understood logically what loading or unloading the gun would do, but they didn’t feel the weight of it when they toggled the object. This is because they would make the choice but might not see the outcome for several more minutes, until the object locked in and became relevant to the narrative. They said they wished they only had the next object available instead of all of them. However, the “one-at-a-time” build did not solve this problem. These players were even less engaged because they didn’t have as many options to consider at any point in time. Ultimately, we learned that the real problem lay in how we distinguished and distributed the opportunities for interaction relative to the delivery of the narrative. For more information on this and the potential lessons, see the section titled “Perception of Narrative Pacing in VR vs Flat-screen”.