Thoughts on our second school playtest

The big event for this week (outside of halves on Monday) was our playtests with the 7th grade students at Cornell on Thursday. We came fully equipped with a new environment, a functioning three-team scavenger hunt, and embedded analytics to track the movement and decision making of the students as they explored. As this is another blog post rehashing our learnings, it won’t be as bright and colorful as some previous ones, but it will be interesting! We gained a LOT of insight from these playtests, and I’ll try to spell it all out here, starting with the positives.

Starting with the things that went well: there were a lot of subtle improvements since last time. Compared to the 10-25 minutes it took last time to get the students into their headsets, setup time dropped significantly, to around five minutes and thirty seconds per class. The number of students who felt dizzy was down by a few since the last visit, and the number who found the headset uncomfortable was cut in half. The pause and reset features worked like a charm again: pausing the experience instantly made the boisterous classroom quiet, and resetting the experience made all of the students wind down and take off their headsets. Also, although somewhat disorganized, our entire experience flow worked, from getting the students into groups, to moving them into the headsets, to doing the scavenger hunts, to taking off the headsets and discussing what they learned in the experience.

The strongest part of the playtests was, actually, the post-VR discussion that Susan led with the students. She asked each team what they found in their hunts and related that to the bigger concepts they were learning. It was great to see the students’ faces light up when they realized that all the producers they found in the environment were plants that use photosynthesis, and to watch other connections like that click. I think this supported our hypothesis that what we are making can be a great tool to help teachers reinforce concepts in their classes.

Now, moving on to the things that went a little less than perfectly. Although I said there was a drop in the number of students who felt dizzy, two students felt particularly dizzy, both of whom had to take off their headsets and go to the bathroom to regain their composure (very much not a good sign). We may try to find a workaround for people who are particularly prone to dizziness, perhaps by giving them an iPad instead of putting them in VR. That may be a little out of scope for our project, though.

We also realized that Susan wasn’t using the iPad for control as much as we thought she would. She seemed perfectly content sitting on one teleportation spot with a view of birch trees and honeysuckles, as long as she could read the notifications the students sent her from the notification tab. This is very different from what we pictured: a teacher wanting to jump around to different views and visually keep track of every single student in the environment. I’d be interested to see what happens when we come back, because I think Susan was still learning the technology.

Although every team technically completed its scavenger hunt, there were a lot of things that went somewhat poorly. We wanted to elicit competition between different teams, but we realized that we actually created a lot of tension and competition within the teams themselves. The way our scavenger hunt works is that if one team member “solves” a clue, it is solved for the entire team. Some students felt frustrated when they were so close to solving a clue and then one of their friends solved it right before they did. On the flip side, some faster students took advantage of this, trying to beat out their fellow team members to solve the hunts.
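To make the mechanic concrete, here’s a minimal sketch of the shared-solve rule in Python (the class, clue strings, and object names are all hypothetical; this just illustrates the behavior described above):

```python
class TeamHunt:
    """One team's scavenger hunt: any member's correct flag
    solves the clue for the entire team."""

    def __init__(self, team, clue_targets):
        self.team = team
        self.clue_targets = clue_targets  # clue text -> correct object
        self.solved_by = {}               # clue text -> student who solved it

    def flag(self, student, clue, obj):
        """Return True only for the first correct flag on a clue;
        teammates who were 'so close' get nothing."""
        if clue in self.solved_by:        # a teammate already solved it
            return False
        if self.clue_targets.get(clue) == obj:
            self.solved_by[clue] = student
            return True
        return False

    def complete(self):
        return len(self.solved_by) == len(self.clue_targets)
```

The first-correct-flag-wins check at the top of `flag` is exactly the bit that turned teammates into rivals.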

On top of this, our clue-solving system is a little easy to take advantage of. The way it works is that if you open the menu, there is a flag tool. If you click the flag tool and then click the object the scavenger hunt is referring to, a diamond appears above that object, indicating that it has been solved for your team. In our perfect world, students would jump around reading blurbs, figure out from their reading which objects the hunts are referring to, and THEN click the flag tool to mark those objects. In reality, some students followed this pattern, but others just opened the flag tool and clicked around the environment mercilessly until they accidentally hit the correct object the scavenger hunt was looking for (learning absolutely nothing in the process).

Some students needed extra help getting into the experience, some students’ phones would heat up and need to be turned off and back on again, and some students just took off the headset from time to time, just because. This meant a team would be down a player for a little while, creating a breakdown in communication, which is, unfortunately, exactly the thing we were hoping to facilitate. This, along with confusion and some other factors, led to only two or three of the seven or so players on a team completing the entire scavenger hunt for the team.

Now I’m going to jump into some things we learned from the analytics. Our information isn’t perfect, as we were only able to track the first 100 actions of each student, but the data still tells a really interesting story. First of all, our scavenger hunts seemed somewhat unbalanced across the teams. The red team finished first in both groups (in two minutes and fifteen seconds for the first group and one minute and fifty seconds for the second). The blue team finished second in both groups (three minutes and fifteen seconds, then three minutes and thirty-eight seconds). And the green team finished very far behind the other two in both groups (six minutes and fifty-two seconds, then five minutes and nineteen seconds).
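For reference, the 100-action cap works roughly like this (a hypothetical sketch of a per-student logger; the field names are made up):

```python
import time


class ActionLogger:
    """Per-student analytics log, capped at the first 100 actions,
    which is why our per-student data cuts off partway through."""

    MAX_ACTIONS = 100

    def __init__(self, student_id):
        self.student_id = student_id
        self.actions = []

    def log(self, kind, target=None):
        """Record one action; actions past the cap are silently dropped."""
        if len(self.actions) >= self.MAX_ACTIONS:
            return False
        self.actions.append({"time": time.time(), "kind": kind, "target": target})
        return True
```

Anything a student did after their hundredth action simply isn’t in the dataset, which matters for the per-student stories below.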

Why did this happen so consistently? We’re not totally sure, but we have some ideas. One of the clues for the red team (the fastest team) is, “Find three tertiary consumers/apex predators.” We think this clue is particularly easy because the three apex predators we put on the map (hawk, bear, and wolf) stick out and are objects of interest in the environment. It took only between nine and thirty-six seconds to find each animal. On the flip side, one of the clues for the green team is, “Find two primary consumers.” There are three primary consumers in the environment: a deer, a butterfly, and a caterpillar. Unfortunately, both the caterpillar and the butterfly are quite small and blend into their surroundings. It took each of the two green teams a whopping three minutes to find them, inflating their scavenger hunt times.

We have a couple of solutions. One is to switch around some of the clues so that each team gets a good mix of hard and easy ones. We will also add one more clue per team, as we think the ideal length for our scavenger hunt experience is around four minutes. Finally, we will make the caterpillars and butterflies more apparent by animating them, increasing their size, and moving them to places where they stand out more.
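The clue-swapping idea could look something like this (purely a hypothetical sketch; the difficulty scores would come from timing data like the numbers above):

```python
def balance_clues(clue_difficulty, teams):
    """Deal clues out round-robin from easiest to hardest so every
    team ends up with a similar mix of hard and easy clues.

    clue_difficulty maps clue text -> estimated seconds to solve."""
    ordered = sorted(clue_difficulty, key=clue_difficulty.get)
    assignments = {team: [] for team in teams}
    for i, clue in enumerate(ordered):
        assignments[teams[i % len(teams)]].append(clue)
    return assignments
```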

The data gets even more interesting as we go down to the individual level. I want to go through a few different cases just to give an idea of how each student approached the game (I’ll keep the students anonymous).

One of the most successful students (he solved four of the six clues for his team) played very quickly. The number of actions he took per minute was a lot higher than most of his friends’. First, he teleported around the area to get a good lay of the land. Then, once he felt comfortable with the map, he started opening blurbs and reading them. By the time we started the scavenger hunt, he was able to solve the first clue in eleven seconds, and he kept up the pace throughout.

Another successful student followed similar patterns. Although not as fast, this student would teleport, stay on a spot for around ten seconds (presumably looking around that particular area), read a blurb or two, then move to another spot. If she found a blurb that related to the scavenger hunt, she would take out the flag tool and mark that object. This is the type of learning curve we were hoping for going in: she was able to absorb information and then use it to complete tasks.

Some students were accidentally successful. One student teleported in front of the gray wolf right as we started the scavenger hunt. According to the data, she instantly took out her flag tool and marked the wolf (the correct solution to her scavenger hunt clue) without opening the blurbs to read anything about it. This is puzzling and points to one of two situations: either she already knew about gray wolves and instantly connected her prior knowledge to the scavenger hunt clue, or she randomly wanted to test out the flag tool and happened to accidentally click on the wolf in front of her. Either way, this seems to have solidified how the scavenger hunt worked for her, because she was able to solve another clue later in the game.

Some students did not play the way we hoped. One particular student was unsuccessful in finding anything in his scavenger hunt. The data says that at first he teleported around and clicked on some objects in the environment (although only for around 1.5 seconds each). Later, it seems that he got bored with clicking on objects and instead just teleported around to different locations. He never once used the flag tool, and he completely stopped clicking on info blurbs after a certain point.

One student’s data is disastrous for us, but oh so interesting. Now, keep in mind, this student was technically successful, finding three clues for her team, but I’m not sure it was intentional. According to the data, this student did not teleport EVEN ONCE. This means she stayed in the starting area THE ENTIRE TIME (or at least for her first 100 actions). I have a feeling she didn’t totally know what was going on. She clicked on a blurb and left it open for fifty-eight seconds, clicked on another blurb for 1.5 seconds, then went back to the first blurb for another fifty-six seconds. As soon as we started the scavenger hunt, she was the first on her team to solve a clue. She then found the third and fourth clues for her team. That said, she never clicked on the blurbs for the clues she solved, and she never teleported. This leads me to the conclusion that she just opened up the flag tool and started machine-gun clicking on everything in the environment she could see. I’m betting it was pure luck that three of the things within the starting view (mainly trees) happened to match her particular scavenger hunt. If we want to make our experience meaningful both in terms of learning and collaboration, we really can’t have instances like this happen.
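With the action logs we already collect, this kind of pattern should be detectable. A hypothetical heuristic (the event `kind` names are made up):

```python
def looks_like_flag_spam(actions, run_length=5):
    """True if the log contains run_length consecutive flag clicks with no
    blurb reads or teleports in between (the 'machine gun' pattern)."""
    run = 0
    for action in actions:
        if action["kind"] == "flag":
            run += 1
            if run >= run_length:
                return True
        else:
            run = 0
    return False
```

A student who reads a blurb or teleports between flags resets the run, so the deliberate read-then-flag players described earlier wouldn’t trip it.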

Our team has set up meetings with teaching and game design specialists at our school to help with a lot of these problems, as we’re starting to move into territory that may be a little outside our team’s personal knowledge pool. It was calming to talk to our advisor, Scott, who pointed us toward a lot of research that might help. He also reassured us that it probably isn’t possible to keep every single student fully interested all the time, and that this isn’t a problem we should be trying to solve (just look at college seminars; you can’t please every student even at that level of education). Even with all the good feedback from halves, we have a long way to go!

Until Next Time!