Week 6 Newsletter

9/29 – 10/3

What We Did:

This week was filled with lots of great meetings and decisions.



We started this week by chatting with Ryan, CEO of OnTheGo Platforms. They make an awesome SDK called ARI that detects gestures using virtually any device's camera, and the demo that Shirley ran worked amazingly well. We negotiated with them, and we now get to use their full feature set for the remainder of the project.
We have since integrated their SDK into our Unity project build.


One of our contacts at Epson is sending us a camera rig 3D-printed specifically for the BT-200. It allows a video camera to fit naturally onto the rig and record both the virtual feed and the real-life environment, so it can capture the AR overlay.

Some faculty from Pittsburgh gave us really great feedback, which we hope to act on moving forward.
Brenda in particular offered her time to help us with the task of making the character come to life.


We checked out Wit.AI for our voice recognition -> text output -> context flow. We initially planned to use a different voice recognition AI, but it did not have the functionality we wanted. Wit.AI works fantastically well, but it has no
Unity support, so Mohit will have to spend a bit of time coding up a Unity plugin.
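The plugin itself will live in Unity/C#, but the core HTTP flow is simple enough to sketch here in a few lines of Python. This is a rough sketch based on Wit.ai's documented HTTP API (a GET to the `/message` endpoint with a Bearer token); the response-parsing shape and the `api_version` value are assumptions, and the token is a placeholder:

```python
import urllib.parse
import urllib.request

WIT_API_URL = "https://api.wit.ai/message"

def build_wit_request(utterance, token, api_version="20141001"):
    """Build the HTTP GET request for Wit.ai's /message endpoint."""
    query = urllib.parse.urlencode({"v": api_version, "q": utterance})
    return urllib.request.Request(
        f"{WIT_API_URL}?{query}",
        headers={"Authorization": f"Bearer {token}"},
    )

def extract_intent(response_json):
    """Pull the top-ranked intent out of a parsed /message response.

    The 'outcomes' list shape is an assumption based on Wit.ai's docs;
    returns None when the utterance matched no intent.
    """
    outcomes = response_json.get("outcomes", [])
    return outcomes[0]["intent"] if outcomes else None
```

In the actual plugin, the equivalent request would be issued from a Unity coroutine so the game loop is not blocked while waiting on the network.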

Companion

On Friday Rex showed six pieces of concept art to Carl, and there was a very clear winner. It was simple, but it had enough geometry to give off the illusion of sentient thinking and emotion. We are going to polish and iterate on the concept before moving into production. We also need to consider the interactions and range of emotions that we want the robot to express,
and ensure that we build those capabilities into him.

Lastly, we were able to narrow our scope into something more manageable. Instead of having the companion help you find a place to go, he will simply take you where you already want to go. An entirely separate application could spend all of its time just helping users decide what to do with their time.

Challenges:
Unfortunately, the initial voice recognition API we were looking at, while great, did not have the functionality we needed. But we found another one that does everything we want: Wit.ai.
However, since it has no Unity support, we will need to spend a bit of time coding that plugin ourselves.

Plan For Next Week:
We will continue working on the character model, mapping gestures to functions, building the Unity plugin for voice detection, and exploring the interactions the companion can perform along the journey to enhance it.
