Sparky, An Augmented Reality Navigation Companion
This semester, our client, Carl Rosendahl, gave us a number of project parameters he wanted us to hit. He wanted us to create an augmented reality character that you could interact with using natural means of communication in a markerless environment. The project would also be built on an emerging platform, the Epson BT-200 glasses. Our team brainstormed many different potential project ideas within these constraints before settling on a navigation companion.
This navigation companion would help you get from point A to point B, making the trip more enjoyable through animations and dialogue, and more convenient, since you would not need to read a list of directions from your phone.
The team was composed of one producer/designer [John Shields], three engineers [Mohit Taneja, Shirley Park, Wei Guo], and one artist [Rex Hsieh].
We learned many lessons along the way, which forced us to scope down our project considerably. Initially we wanted to build much more functionality into our companion, such as having him point out interesting landmarks as you walk past them, but due to the technological challenges we were unable to tackle those additional features and had to keep the project tightly focused on navigation alone.
Now we are going to dive into what worked really well for the project team. Our team had great chemistry, and we were all focused on getting our components done. We never gave up: even though we were dealing with emerging technology, and many components did not work as intended or hit other limitations, we kept moving forward and looked for workarounds. We communicated well with our client, understood what he wanted out of the experience, and worked hard to deliver on that. We helped each other improve our respective disciplines' skills. We were able to get press coverage for our project, which is always exciting. We also took big risks with scope, features, components, solutions, and workarounds, trusting that they would work out given enough time.
There were also some things that did not go so well for this project team. We could have made the development environment a lot easier for other people to use, and not doing so slowed us down. Just about everyone on the project did work that they were not comfortable with or specialized in. User testing entered the project's schedule very late in the cycle; we really should have dedicated time in the project's infancy to getting feedback from users, because that testing would have caught a lot of surprises with the device's capabilities. Scrum by nature made task due dates rough, so we had to adopt a different management tool [Asana] to correct this problem. When we got a press event, only a few members of the team got to attend, which affected the team dynamic. It also would have helped our project if we had sat down and made an experience design roadmap, even a rough one, because it would have caught some of the problems that surprised us down the road as we tried to finish the experience.
Next, some project-based insights that we gained over the course of the semester.
From a technical perspective: we learned that it is not a good idea to trust the hardware's specification page. We had serious issues working with the device's GPS and compass functionality, and early playtesting with those components would have caught the issues before they ballooned into huge problems. We learned that a speech-recognition timeout function worked far better than any algorithm that tried to detect a difference in decibel values. We also learned that setting up the context of the conversation, and interpreting meaning within that context, greatly improved the accuracy of the speech engine. We learned a lot about the technical limitations of the current state of the industry. For example, while computer vision can be used to determine bounding areas, it is too taxing on the current generation's CPU to run effectively alongside any other task. We also learned that people have very low expectations of technology: although our speech engine could handle listening to a full sentence, people would often answer in just one word.
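The timeout idea above can be sketched in a few lines. This is an illustrative simplification, not the project's actual code (the function name and timeout value here are ours): rather than comparing decibel levels, treat a sufficiently long pause between recognized words as the end of the utterance.

```python
def detect_utterance_end(word_timestamps, timeout=1.5):
    """Return the index of the first word that follows a pause longer
    than `timeout` seconds; everything before it is one utterance.
    Returns len(word_timestamps) if the speaker never pauses that long."""
    for i in range(1, len(word_timestamps)):
        if word_timestamps[i] - word_timestamps[i - 1] > timeout:
            return i
    return len(word_timestamps)

# Words arriving at these times (seconds); the 3-second pause after
# the fourth word marks the end of the spoken command.
times = [0.0, 0.4, 0.9, 1.3, 4.3, 4.6]
print(detect_utterance_end(times))  # → 4
```

The appeal of this approach is that it only needs the timestamps the recognizer already produces, whereas a decibel-difference heuristic has to cope with street noise, wind, and microphone variance.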
From an art perspective: we learned that natural communication cues work really well at conveying information to naive guests. For example, we used gestures that humans typically make when thinking or listening [though we did exaggerate them], which people intuitively picked up on. We also did lots of playtesting with characters, since the character is the focal point of the scene. We settled on a robotic companion because we wanted a character that would be easy to model, and robots typically appeal to people interested in new technology. We also took the technological constraints of the platform and used art to spin them into positives. For example, since computer vision is not feasible on the device, we cannot detect the floor, so we made the companion fly to sidestep that problem. Since the field of view is very limiting, we made the companion a creature that makes sense at a small size. We also applied the twelve principles of animation to our augmented character to enhance the experience.
From a production perspective: an interesting situation arose when we realized that although we were making a navigation companion, much of our playtesting and demo days happened indoors, so we had to develop a custom build for that setting. Keeping that in mind really helped us refine the scope and develop an enhanced experience for the occasion. As mentioned, we also switched to a different tool to manage daily progress, but most people preferred the personal cork-board scrum to the digital manager. If we were doing this project again, we would try to find a way to incorporate more of the digital scrum's features into the physical scrum, so we could get a best-of-both-worlds sync-up.
Palimpsest is done developing the application. We hope that the next generation of hardware [the BT-300 and beyond] will resolve the basic hardware limitations that kept us from completing our navigation companion [the non-functioning compass and GPS]. We are happy that so many people were excited to see this glimpse of the future now, and that they smiled when they interacted with our companion. Epson also commented that ours was their most exciting project to follow, because it tackled a use case the everyday user would actually experience, whereas most attempts in this field focus on highly specific enterprise cases. We hope that in the future people can have their favorite movie star, cartoon character, or friend accompany them through life, as that could be a really powerful fantasy. Digital characters we see on the screen shouldn't be limited to that space and that means of interaction, and we are very hopeful and excited for a future where this limitation is broken.
We all had an absolute blast making this augmented companion prototype, and we are really excited to see what future teams do with this technology, especially as Moore's Law kicks in.