Postmortem

Now that the Illuminate project has come to an end, here are a few reflections on our process and the lessons we learned along the way.

What Went Well


Early feedback

In weeks 3 and 4 of the project, when we knew how our design was shaping up but hadn’t yet hit Quarters, we solicited feedback from our game design faculty, educators, and experts in our demographic. Having this feedback so early in the project gave us excellent guidelines for our decision-making process throughout the semester.


Appropriate difficulty level of the game

Making a game this puzzle-oriented for our age range was a risky proposition. From our conversations with educators and psychologists who had worked with children at our target age, we knew that our players’ ability to plan several moves ahead would be limited. We kept this in mind during early designs and playtesting, and we worried that the levels might quickly become too difficult as their structures grew more complex. We kept iterating on the level designs until we found a difficulty that felt right for the children playing the game.


Adjustments for Clarity and Intuitiveness

While we delivered a strong product this semester, it took a while to make it clear and intuitive, and this probably remains an area that could be further improved. The biggest example is our checkpoint system: the energy balls that the player must align with the blocks in order to collect energy to power the spaceship. This mechanic was initially very abstract and unclear. Adding “gems” (the circular shapes on the blocks) helped considerably, as the shape-matching gave kids the visual cue they needed to make the intended connection. After this change, the kids’ ability to progress through the game without outside assistance improved radically.

In some cases, children seemed to understand the purpose of the energy balls from the visual cues alone, without any other explanation. In other cases, children grasped the shape-matching and could carry out the action, but gave no clear indication that they understood why the energy balls did what they did. So further clarification through the artwork could still improve this area in the future. As it stands, though, the game made huge strides in clarity and intuitiveness over the course of the semester, and there was a night-and-day difference between the early playtests and the later ones.


Tuning of the Physics Engine

Although we used a pre-existing physics engine, we still had to do quite a lot of experimentation with mechanics and tuning of parameters to arrive at the feel of the final game. Early on, we spent a lot of time trying different movement mechanics to see which way of moving blocks felt best. At one point, we also rebuilt the scale of the world to correct a problem where blocks seemed to float.

Once we settled on our drag mechanic, we had to make sure the blocks kept a tactile quality. To that end, we adjusted the friction and resistance so that players felt like they were actually moving blocks around. Throughout the semester, and especially between Quarters and Halves, we spent a lot of time tuning values such as the mass of objects and the strength of the earthquake in each level to produce the right feeling and behavior. That attention paid off: the natural feel of the physics has been received very positively by both kids and adults.
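
As a rough illustration of the kind of tuning this involved, the following minimal Unity C# sketch (with hypothetical class names, field names, and values rather than our actual code) shows how block mass, surface friction, and per-level earthquake strength might be exposed as Inspector fields so they can be adjusted quickly between playtests:

    using UnityEngine;

    // A rough sketch (hypothetical names and values, not our actual code) of exposing
    // per-block physics values as Inspector fields so they can be retuned between playtests.
    public class TunableBlock : MonoBehaviour
    {
        public float blockMass = 2.0f;        // heavier blocks feel weightier to drag
        public float dynamicFriction = 0.6f;  // resistance while a block is sliding
        public float staticFriction = 0.7f;   // resistance before a block starts to move

        void Start()
        {
            GetComponent<Rigidbody>().mass = blockMass;

            // Collider.material returns an instanced PhysicMaterial that is safe to modify.
            var surface = GetComponent<Collider>().material;
            surface.dynamicFriction = dynamicFriction;
            surface.staticFriction = staticFriction;
        }
    }

    // A rough sketch of a per-level earthquake whose strength is tuned in the Inspector.
    public class TunableEarthquake : MonoBehaviour
    {
        public float strength = 1.5f;   // set differently for each level
        public float frequency = 8.0f;  // how quickly the shake direction oscillates

        void FixedUpdate()
        {
            // Push every block sideways with an oscillating acceleration.
            float shake = Mathf.Sin(Time.time * frequency) * strength;
            foreach (var block in GameObject.FindGameObjectsWithTag("Block"))
            {
                block.GetComponent<Rigidbody>().AddForce(
                    new Vector3(shake, 0f, 0f), ForceMode.Acceleration);
            }
        }
    }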


Fun and Engagement

Our playtests suggest that we have succeeded in making a game that kids in our demographic find genuinely fun to play. Our Nov. 19th playtest at the ETC, in particular, was a strong indicator of the game’s strengths: the kids stayed engaged and enthusiastic throughout a lengthy (45-minute) play session.


What Could Have Been Better


Make Design Decisions Earlier

We often found ourselves in design loops where, rather than making a decision and going with it, we would keep discussing one particular area, usually in great detail. Sometimes we would even come back and revisit an area we had already discussed and decided on. This process is better suited to a studio with a longer timeline than to a short project like ours.


Team Structure

While we had a terrific team that worked very well together, a project this complex would have benefited from a better split of leadership roles than we had. It was very hard to serve as both producer and lead designer on a project that involved this much coordination with outside parties, and that double role was only possible thanks to a supportive team that always stepped up to find areas to contribute.

While co-producing is normally unnecessary on small teams, this team might benefit in the future from having an external producer (to handle communication with collaborators and clients, set up meetings, arrange playtests, etc.) and an internal producer (to focus on the schedule and tasks for building the game, prioritize features, etc.), each with a secondary role of some kind, such as design, art, programming, or sound. Alternatively, the team could retain just one producer, with producing as that student’s only role.


Lessons Learned


Set Up All Playtest Dates At the Beginning

Because we were working with young children, the process of visiting schools was long: the required paperwork and clearances took considerable time. Even after the clearances were taken care of, getting into a classroom still involved a fair amount of logistics, since we had to find visit times that fit the teachers’ lesson plans rather than impose on them. If we had scheduled classroom visits early in the semester, testing in a classroom would have been much more likely to happen.


Physics Simulations Aren’t Good for Predictable Outcomes

Using a physics simulation to teach a physics-based concept sounds like a good idea, and to some extent it is. But the physics simulation should not be relied on to produce perfectly predictable outcomes. For example, while building the assessment mini-game that HCII requested, where the child picks which of two towers is more likely to fall during an earthquake, we learned that the Unity physics simulation is non-deterministic due to floating-point math over which we have no control. Even with perfectly consistent input from us, we could not guarantee consistent output across different platforms. This is not an issue for most games, because the physics simulation is used to create emergent behavior, where a certain degree of randomness is actually helpful in making things look plausible. But for the multiple-choice scenario that HCII had requested, Unity’s physics simulation was not an apt tool.

We explained this problem to HCII and received the go-ahead to force the outcome by making one block heavier and holding it down with a spring joint, so that it would shake and have some room to move but could not topple. This solution ultimately worked quite well. Neither the difference in mass nor the spring joint has been noticed by the naive viewers we’ve asked about it, and the result seems believable enough not to mislead kids about how objects in the real world would behave, while still serving HCII’s need for a guaranteed outcome.
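
As an illustration, a minimal Unity C# sketch along these lines might look like the following; the class name, field names, and values are hypothetical rather than taken from our project:

    using UnityEngine;

    // A rough sketch (hypothetical names and values) of the forced-outcome trick:
    // the tower that must survive gets a heavier base block plus a SpringJoint
    // anchored in world space, so it can visibly shake but cannot topple.
    public class AnchoredTowerBase : MonoBehaviour
    {
        public float extraMass = 10f;       // heavier than the visually identical block
        public float springStrength = 500f;
        public float springDamping = 20f;
        public float slack = 0.05f;         // small travel so the block still wobbles

        void Start()
        {
            GetComponent<Rigidbody>().mass += extraMass;

            // With no connectedBody assigned, a SpringJoint anchors to a fixed point
            // in world space and pulls the block back toward its starting position.
            var joint = gameObject.AddComponent<SpringJoint>();
            joint.spring = springStrength;
            joint.damper = springDamping;
            joint.maxDistance = slack;
        }
    }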


DataShop Assessment

Events logged move-by-move are better suited to DataShop than real-time data, because real-time actions can’t capture cause and effect as discretely, or the motivation behind an action as purely. How can we determine overall success if the success of a single interaction cannot be easily judged?
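
To illustrate what move-by-move logging means in practice, here is a minimal Unity C# sketch of recording one discrete record per completed block move; the event fields and method names are assumptions for illustration, and the actual DataShop export format is not shown:

    using System;
    using UnityEngine;

    // A rough sketch (hypothetical fields and names) of logging one discrete event per
    // completed block move, rather than streaming real-time positions.
    [Serializable]
    public class MoveEvent
    {
        public string levelId;
        public string blockId;
        public Vector3 startPosition;
        public Vector3 endPosition;
        public bool alignedWithEnergyBall;  // did this single move achieve its goal?
        public float secondsSinceLevelStart;
    }

    public class MoveLogger : MonoBehaviour
    {
        // Called once when the player releases a block, not continuously while dragging.
        public void LogMove(string levelId, string blockId,
                            Vector3 start, Vector3 end, bool aligned)
        {
            var moveEvent = new MoveEvent
            {
                levelId = levelId,
                blockId = blockId,
                startPosition = start,
                endPosition = end,
                alignedWithEnergyBall = aligned,
                secondsSinceLevelStart = Time.timeSinceLevelLoad
            };

            // Placeholder for the real export step to DataShop.
            Debug.Log(JsonUtility.ToJson(moveEvent));
        }
    }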


Depth good for learning and data collection, breadth good for solving design challenges

Although our game has two modes, the progression within each mode through a series of increasingly difficult levels means that this is ultimately a game that focuses more on depth than on breadth. We made that choice deliberately, since the progression of levels allows for the gradual introduction of concepts that can then be expanded on with several variations before the next concept is introduced. That gradual, linear progression can be great for learning. The repetition of similar actions across multiple levels is also good for data collection, because it provides comparable data that can be assessed for change over time.

However, more breadth–which is to say, more variety of game modes with fewer levels–might be better for solving design challenges because it gives multiple avenues through which to attempt to achieve the same objective. If some modes, or some combinations of modes, are more educationally effective than others, then you can discover that and focus thereafter on the modes that are working best.

Since this was a first-semester project, we knew our game might be expanded upon later, but it also had to stand on its own as a complete experience. A case could have been made for either direction (more depth or more breadth), since one favors completeness and the other favors expandability. Ultimately we found a reasonable balance between the two. But if we erred too far in one direction, it was probably toward depth, and it wouldn’t hurt for next semester’s team to consider a broader palette of simpler games.


Conclusion

This was a strong start to the ENGAGE project here at the ETC. As this is the beginning of a multi-year effort, next semester’s team will take what was learned this semester and press forward. As for this semester’s achievements, we’ve developed a fun game with opportunities for learning that gives HCII a solid starting point for testing. Between our experience on the project, our playtesting conclusions, and the game itself, we hope this semester has laid a good foundation for next semester’s design decisions.
