Based on our meeting last Friday, this week we will continue to clarify the primary focus of our project and work together to brainstorm and pitch more design ideas. Additionally, our programmers will continue their tech research to see which facial recognition and AR SDKs will work best for the project.

The Work This Week

Client Meeting

After our meeting last Friday, we organized a list of questions we wanted to ask our client, in order to clarify the aims of the project:

    • Why do autistic individuals avoid eye contact with other individuals?

    • What is the utility of a “translation app” for autistic individuals?

    • What is the demographic for this app? What age range of kids?

    • Can we pare down the current amount of emotions to 6 universal emotions?

    • What are the most popular third-party games that autistic children engage with, and which are most successful at contextualizing interpersonal interactions/situations?

    • Are emojis effective in communicating generalized emotions?

    • Is animation an effective tool for conveying the meaning of situations?

    • How can we effectively evaluate the app?

    • What are your expectations for us this semester?

    • Do we need to do playtesting/data collection with autistic kids, or do we just need to build the app?

    • How often do you think you need to meet with the team?

In response, our client gave us the following feedback, which mostly centered on game design issues and on who exactly our game is for:

      • It is better to make a mobile game for the sake of popularity and ease of access.

      • It is a good idea to use facial recognition tools like ARKit; this could also be helpful from a fundraising perspective.

      • Our demographic is everyone, but specifically those with social anxiety/impaired social skills.

      • Our measure of success is that guests have fun, but also improve skills (still using the transformational framework).


Aside from our meeting with our client, which helped clear up a couple of questions about our project (mainly, how to narrow our platform to what she wants and how to orient our transformational goal), we did quite a bit of research into both design and tech.


The majority of our design research this week centered on social intelligence and social games, so we looked into transformational-esque games that dealt with social emotions and facial expressions.



From this research, we developed a couple of design pitches that we felt would address our transformational goal of helping individuals get better at recognizing and expressing emotion:

  •  A Mafia-esque game combined with the “Mimic” game. Players are put into a Mafia-like scenario where, during a simple narrative of some kind, players need to provide the “appropriate” emotion over the course of the game. The phone is then used as a sort of lie-detection tool to help players suss out who is a Mafia member and who is innocent.
  •  An AR app similar to the VR app “Dr. Freud” where two players connect to one another over the internet (they don’t know each other). One player details what’s troubling them, while the other only provides questions. This will be asymmetric, as the one who is detailing their troubles will be able to see the video feed, while the other player will not.
  •  A Snapchat-style game that uses the mechanics of the game “Face Dance,” wherein people use their expressions to play a rhythm game of some sort.
  •  An AR experience based on the Social Stories framework that tasks the player with expressing an emotion to proceed in the narrative.


On the tech end, we looked into a couple of SDKs, both for web cameras and for mobile.

  1. Affectiva is a Unity-compatible facial recognition SDK that runs off of web cameras (or really, any camera without depth capabilities). We are in the process of creating a small prototype in Unity to test Affectiva’s effectiveness; the SDK purports to have some of the best facial recognition accuracy on the market.
  2. Spark AR is Facebook’s in-house AR SDK, which can run on apps like Messenger and Instagram. What appealed to us about Spark AR was not only the simplicity of the interface, but also the potential for networked experiences, since it runs on apps like Messenger that have that kind of functionality built in. However, it appears that Facebook has closed development with Spark AR on Messenger for the time being, meaning that we unfortunately can’t take advantage of it.
  3. ARKit is Apple’s native AR SDK for the iPhone X and above (it takes advantage of the TrueDepth camera introduced with the X). As this is the client’s preference, and the SDK used by the team over the summer, it definitely merited attention from our programmers.
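To give a sense of why depth-based face tracking is appealing for our transformational goal: ARKit reports a 0-to-1 coefficient for each tracked facial movement (its "blend shapes"), which a game could threshold into coarse emotion labels. The sketch below is hypothetical Python pseudologic, not the actual ARKit (Swift) API; the key names mirror ARKit's blend shape locations, but the function and thresholds are our own illustration.

```python
def classify_emotion(coefficients: dict) -> str:
    """Map blend-shape-style coefficients (0.0-1.0) to a coarse emotion label.

    Toy sketch only: the keys mirror ARKit's blend shape names
    (mouthSmileLeft, browDownLeft, jawOpen, ...), but this function
    and its thresholds are hypothetical, not part of any SDK.
    """
    # Average the left/right smile coefficients into one smile score.
    smile = (coefficients.get("mouthSmileLeft", 0.0)
             + coefficients.get("mouthSmileRight", 0.0)) / 2
    frown = coefficients.get("browDownLeft", 0.0)
    jaw_open = coefficients.get("jawOpen", 0.0)

    # Thresholds are illustrative guesses, not calibrated values.
    if smile > 0.5:
        return "happy"
    if frown > 0.5:
        return "angry"
    if jaw_open > 0.6:
        return "surprised"
    return "neutral"
```

A game mechanic could then react to the label, e.g. `classify_emotion({"mouthSmileLeft": 0.8, "mouthSmileRight": 0.7})` returns `"happy"`, while an empty reading falls back to `"neutral"`.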

Plans for Next Week

Based on this week’s work, we will continue to refine and narrow these pitches down to a few that we can conceivably deliver by the end of the semester, while tech will continue to research which SDK works best and what kinds of mechanics are feasible using AR and facial recognition.

