Dev Blog Week 9
HTML5
After the halves presentation, one of our top-priority tasks is the visuals of our game. After several discussions with our client, we decided to use the new HTML5 framework from Amazon to set up a web app for our visuals. The web app is currently hosted in our S3 bucket and served through CloudFront. It communicates with our skill through messages, which lets us show a different image at each decision point, so each scenario can display a corresponding image and bring a more immersive experience to the players.

We also tested the capabilities of the web app by showing static images, GIFs, and video clips. All of these work as long as the assets are stored in the S3 bucket, but a slow internet connection can delay video loading, which would hurt the smoothness of the experience. Besides, a game like The Sims 4 already offers rich visuals; we don't need to replicate that, and there is no plausible way to do better on an Alexa device alone. We therefore decided to go with simple static images that support our storytelling. The following is a short clip of our first visual prototype.
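To give a feel for the skill-side half of this, here is a minimal sketch of an intent handler that pushes an image URL down to the web app at a decision point. The directive type comes from the Alexa Web API for Games; everything else (the intent name `DecisionIntent`, the CloudFront URL, and the `{command, url}` payload shape) is our own convention for illustration.

```javascript
// Build the directive that delivers a message to the running HTML app.
// 'Alexa.Presentation.HTML.HandleMessage' is the standard directive type;
// the message body is whatever payload shape the web app agrees to parse.
function buildShowImageDirective(imageUrl) {
  return {
    type: 'Alexa.Presentation.HTML.HandleMessage',
    message: {
      command: 'showImage', // our own payload convention
      url: imageUrl,
    },
  };
}

// Hypothetical intent handler in the skill endpoint (ASK SDK style).
const DecisionIntentHandler = {
  canHandle(handlerInput) {
    const req = handlerInput.requestEnvelope.request;
    return req.type === 'IntentRequest' && req.intent.name === 'DecisionIntent';
  },
  handle(handlerInput) {
    // Assets are stored in our S3 bucket and served through CloudFront.
    const imageUrl = 'https://dxxxx.cloudfront.net/scenes/decision-point.png';
    return handlerInput.responseBuilder
      .speak('You reach a decision point.')
      .addDirective(buildShowImageDirective(imageUrl))
      .getResponse();
  },
};
```

The web app receives the message, reads `command` and `url`, and swaps in the corresponding image.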
Alexa Lifecycle when using HTML
HTML skills can communicate with their skill endpoint to send local inputs, receive voice inputs, and use Alexa services like the skill store. To do this, they use a bidirectional, socket-like messaging scheme. The application can send a message to the skill at any time (e.g., when the screen is touched or a timer elapses), and the skill can send one to the application at any time (e.g., when it receives an intent request). This mechanism is asynchronous, and delivery order is not guaranteed. One limitation is that the HTML side can only receive two messages per second, but given our project scope this isn't a concern for now.
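Because delivery is asynchronous and unordered, the web app has to guard against a stale message overwriting a newer one (say, an old scene image arriving after the player has already moved on). One simple defense, sketched below under the assumption that the skill stamps each message with a monotonically increasing `seq` field (our own convention, not part of the Alexa API):

```javascript
// Drop any message whose sequence number is not newer than the newest
// one already applied, so out-of-order or duplicate deliveries are ignored.
class OrderedMessageHandler {
  constructor(apply) {
    this.apply = apply; // callback that actually updates the visuals
    this.lastSeq = -1;  // highest sequence number seen so far
  }

  handle(message) {
    if (message.seq <= this.lastSeq) {
      return false;     // stale or duplicate message: ignore it
    }
    this.lastSeq = message.seq;
    this.apply(message);
    return true;
  }
}

// In the real web app this would be wired to the messaging API roughly as:
//   const alexaClient = await Alexa.create({ version: '1.1' });
//   alexaClient.skill.onMessage((msg) => handler.handle(msg));
// (Alexa.create / skill.onMessage come from the Alexa HTML JavaScript
// library loaded in the page; the wiring above is a sketch.)
```

With static images the occasional dropped stale message is harmless, which is another point in favor of the simple-visuals approach we settled on.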