Every semester the ETC hosts a playtest day. For two hours, in 20-minute intervals, participants come through the project rooms to test out builds and give project teams much-needed feedback on their work.

This was also the week that our programmer, Zoe, was at Unite, a conference for Unity developers. While that meant she was learning a lot about how to improve our game, it also meant we were under a hefty crunch to get our first real build working.

What We Tested

We were having technical difficulties that Zoe couldn’t fix in time, so we decided to test only the first five minutes of the game. This proved to be a smart call, since we only had 20 minutes to get two players through the experience.

The animation we used was the raw motion capture. Sharon is still in the midst of cleaning up the scenes (we don’t expect those to be done until the end of week 13), but the measurements we took in the studio were accurate enough that very little tweaking was needed to make the scene read clearly.

Ultimately we took each player through nodes 1 & 2, then asked them questions on four subjects: 1) story comprehension, 2) interaction comprehension, 3) comfort with the character and environment design, and 4) their experience with various forms of entertainment (video games, film, theater). The main purpose of playtest day was to ensure that the interaction design was on the right track and to get a sense of how prior experience with different forms of entertainment shapes a player’s approach to exploration.

Who We Tested

Brad (left) guides his Grandmother (right) through the experience for the first time as part of playtest day. She had never been in VR before and needed to be talked through the entire thing.

The target audience for this project is incredibly specific: Since we are exploring the potential of designing 3rd person branching narrative games in VR, we knew we were mainly aiming to please fans of 3rd person narrative games. After all, if the players of our game don’t like the genre in which we are working, their overall satisfaction with our product carries very little weight.

While plenty of the people who come for Playtest Day enjoy video games, we knew we were likely to get a number of players who had never touched a video game before, let alone a narrative one. This was another reason we wanted to see what they did with the first two nodes – we cared less about whether they liked what they were playing and more about whether they understood the mechanic used to play it.

A variety of ages came through our experience: two 14-year-olds, two 15-year-olds, two 16-year-olds, and seven adults. Given that our script contained guns, (off-screen) character death, and a bit of cursing, we adamantly did not want anyone 13 or younger in our headsets. That said, we knew when we got the 14-year-olds that developmentally they might still be a bit too young to be part of our target audience.

Some of the players had been in VR, some had not. A few had done immersive theater before, about half had played narrative games, but all had valuable perspectives.

What We Concluded

One of our adult testers, a teacher who was in VR for the first time

Contextualization & Player role

We framed the experience with a summary of what the game’s introduction will look like: “Instead of seeing the big white room you’re seeing, in the final version, you will be standing in a movie theater next to a glowing blue projector. By clicking on the projector with your controller, it will begin projecting a door onto the movie theater screen. You would walk through the movie theater screen to begin the experience.”

While everyone clicked the blue objects, gamers and non-gamers alike said the experience was most similar to a film. One tester admitted, after we asked about immersive theater, that the experience was probably closer to that than to film, but until that moment – thanks to the intro we gave – he had talked about the entire experience as a film.

The film contextualization tells players that much of the Presence experience is one where you sit back and watch. It answers the perpetual “who am I?” question quickly and effectively: you’re watching a film, you have no identity, and that is perfectly fine.

Avoiding false heuristics is paramount

We are not giving instructions; instead, we build moments that teach the interactions. We felt that any floating text or overt instruction (which many screen-based games rely on) would break immersion, so we had to craft moments that encourage interaction and let the player learn by doing. Collider size actively ruined these moments!

Players would go to click a blue object, it wouldn’t react, and they would immediately decide that clicking blue objects was not the right move. Then they would click again when the object flashed – getting closer this time – and the interaction would work, leading them to believe that flashing meant interactive.

This meant we needed to both make the colliders much larger and add some kind of signal for when the player’s controller hits a collider, so they know they are close enough to click.
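
As a very rough sketch of the direction we’re leaning (not our actual code), an interactable could carry an oversized trigger collider and brighten while the controller is inside it, so the player knows a click will register. The “Controller” tag, padding value, and highlight color below are all placeholders, not names from our project:

```csharp
using UnityEngine;

// Hypothetical sketch: a generously sized trigger collider plus a highlight
// while the controller is inside it, so players know a click will register.
[RequireComponent(typeof(Renderer))]
public class HoverHighlight : MonoBehaviour
{
    public string controllerTag = "Controller"; // assumed tag on the controller object
    public Color highlightColor = Color.cyan;   // placeholder "close enough to click" color
    public float colliderPadding = 0.15f;       // extra reach (meters) beyond the visible mesh

    private Renderer rend;
    private Color baseColor;

    void Start()
    {
        rend = GetComponent<Renderer>();
        baseColor = rend.material.color;

        // Add a trigger collider larger than the object itself
        // (assumes roughly unit scale on the object).
        BoxCollider trigger = gameObject.AddComponent<BoxCollider>();
        trigger.isTrigger = true;
        trigger.size = rend.bounds.size + Vector3.one * (colliderPadding * 2f);
    }

    // Trigger events require a (kinematic) Rigidbody on the controller object.
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag(controllerTag))
            rend.material.color = highlightColor;
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag(controllerTag))
            rend.material.color = baseColor;
    }
}
```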

Give the players space!

One of our playtesters reacted poorly when he unlocked the door and it opened on him

Every one of our players had the same experience: they stepped forward to open the door, and when they did, it immediately swung open in their face and the characters walked right through them. Some people jumped, some screamed, some did a little dance like the boy above, but every player was incredibly uncomfortable with their personal space being violated by a virtual thing.

We had many discussions about how to deal with this. Once past the door opening, players mostly stood exactly where we wanted, but what about the trust we break in that first moment? And how do we make the players who don’t follow our stage directions more comfortable?

Answer: We will delay the door opening and signal that it is about to open with some sort of “I got it” line from the characters. We will also create a laser pointer so the player can select objects from any distance. That way the player can position themselves wherever they are most comfortable and not feel discouraged when a character stands between them and the object they want to select.
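
A minimal sketch of the laser pointer idea, assuming a script on the controller and a placeholder ClickableObject component standing in for however our interactables end up being marked (the trigger button mapping is also just a stand-in):

```csharp
using UnityEngine;

// Placeholder for whatever component marks an object as clickable in our scene.
public class ClickableObject : MonoBehaviour
{
    public virtual void Activate()
    {
        Debug.Log(name + " clicked");
    }
}

// Hypothetical laser-pointer selector: attach to the controller, draw a beam
// along its forward direction, and let a button press select whatever the
// beam is touching, regardless of how far away the player is standing.
[RequireComponent(typeof(LineRenderer))]
public class LaserPointer : MonoBehaviour
{
    public float maxDistance = 10f;                        // how far the beam reaches
    public KeyCode clickButton = KeyCode.JoystickButton0;  // placeholder for the controller trigger

    private LineRenderer beam;

    void Start()
    {
        beam = GetComponent<LineRenderer>();
        beam.positionCount = 2;
    }

    void Update()
    {
        Vector3 origin = transform.position;
        Vector3 end = origin + transform.forward * maxDistance;

        RaycastHit hit;
        if (Physics.Raycast(origin, transform.forward, out hit, maxDistance))
        {
            end = hit.point;

            // A press while pointing at a clickable object selects it from any distance.
            if (Input.GetKeyDown(clickButton))
            {
                ClickableObject clickable = hit.collider.GetComponent<ClickableObject>();
                if (clickable != null)
                    clickable.Activate();
            }
        }

        beam.SetPosition(0, origin);
        beam.SetPosition(1, end);
    }
}
```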

Not all objects are clear

We are still unsure how to telegraph to the player whether the safe is locked or unlocked, and it is something we are struggling with. Does breaking the safe make it openable? To some players, that’s what it means; to others, it means what we intended. We are going to try to telegraph the state better through the textures, but we may have to explore other options.
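
One direction we may try, purely as a sketch with placeholder materials and method names, is swapping the safe’s material whenever its lock state changes so the two states read differently at a glance:

```csharp
using UnityEngine;

// Hypothetical sketch: swap the safe's material when its lock state changes,
// so "locked" and "unlocked" read differently with no floating text.
public class SafeStateVisual : MonoBehaviour
{
    public Material lockedMaterial;    // placeholder: dull, sealed-looking material
    public Material unlockedMaterial;  // placeholder: brighter material for the openable state

    private Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
        rend.material = lockedMaterial;
    }

    // Called by whatever game logic decides the safe has become openable.
    public void SetLocked(bool locked)
    {
        rend.material = locked ? lockedMaterial : unlockedMaterial;
    }
}
```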

The other object people did not quite understand was the key to the filing cabinet. The key sits in the filing cabinet, and when the player clicks it, it disappears. This makes the player think 1) that the cabinet is now unlocked, and 2) that they are carrying the key, neither of which is correct. They then think, “Oh, the characters are looking for a key, this must be the key they want.” The whole thing was just SO confusing, and it had a very easy fix: we replaced the key in the cabinet with a padlock that can be opened or locked. No more keys, no more confusion!

What’s Next

We are now preparing for a main-campus playtest (likely our last test with non-ETC students before the ETC Festival). With the permission of Professor Jessica Hammer, we will be using the Oh! Lab in the Department of Human-Computer Interaction. This should get us a very different group, one a little closer to our target audience. We will use the coming weeks to determine and build what we want to test with them.

See you next week!
