Week 5: Quarters!

Star Wars Battlefront Video Analysis

Previously, we ran static Star Wars-related images through the Google Vision API. We had some success with the “Web Entities” result, which aggregates information about an image from the web pages it appears on and from related search terms. Google was able to identify iconic content such as a Clone Trooper or a Wookiee. The image content (label) detection, however, was not as good: the API was unable to identify Star Wars-specific content (Clone Trooper, Darth Vader, etc.). This is understandable, as the API was trained on real-world photographs.
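For concreteness, here is a minimal sketch of the kind of call we made, assuming the google-cloud-vision Python client library; the screenshot filename is hypothetical, and credentials are expected via the GOOGLE_APPLICATION_CREDENTIALS environment variable:

```python
from google.cloud import vision  # pip install google-cloud-vision

client = vision.ImageAnnotatorClient()

# Hypothetical local screenshot.
with open("clone_trooper.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Label detection: generic descriptions of the image content.
labels = client.label_detection(image=image).label_annotations
print([(label.description, round(label.score, 2)) for label in labels])

# Web detection: the "Web Entities" inferred from where this image
# (or visually similar images) appears on the web.
entities = client.web_detection(image=image).web_detection.web_entities
print([(e.description, round(e.score, 2)) for e in entities])
```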

We then tried to analyze Battlefront gameplay videos by running screenshots taken from the videos through the Google Vision API. The results were not promising for either content identification or “Web Entities”, since these screenshots do not appear anywhere on the internet. We discussed this obstacle in our client meeting and decided to try an alternate way of obtaining video tags: parsing social media (Twitter, Reddit) for Battlefront-related terms, in the hope that the collected terms can be used to search for and tag Battlefront videos.
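To give a sense of the screenshot step, here is a minimal sketch that samples one frame every few seconds from a gameplay video, assuming OpenCV (cv2) is installed; the file paths and sampling interval are placeholders:

```python
import cv2  # pip install opencv-python

def extract_frames(video_path, out_dir, every_n_seconds=5):
    """Save a screenshot every `every_n_seconds` of video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing
    step = int(fps * every_n_seconds)
    frame_idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if frame_idx % step == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:04d}.jpg", frame)
            saved += 1
        frame_idx += 1
    cap.release()
    return saved

# Hypothetical usage: one screenshot every 5 seconds.
extract_frames("battlefront_match.mp4", "screenshots")
```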

Data Visualization Research

Meanwhile, we also read a few books on data visualization design (Visualization Analysis and Design by Tamara Munzner, Envisioning Information by Edward R. Tufte). The books taught us several design principles and strategies for presenting an information-heavy dataset effectively. Since we each read a different book, we shared our individual takeaways with one another to spread the learning.

Quarters

This week, we had our Quarter Walkarounds! A handful of EA employees visited our space and tried out our prototype. Here is our Quarters video:

[Embedded video: Quarters walkaround]

From the walkarounds, we received a lot of feedback and gathered additional contacts we can connect with. Next week, we will see whether our Reddit/Twitter scraper is effective at collecting relevant Star Wars tags; a rough sketch of it follows.
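Here is what the Reddit half of that scraper might look like, assuming the PRAW library and hypothetical API credentials (the subreddit name, post limit, and stopword list are placeholders too):

```python
import collections
import re

import praw  # pip install praw

# Hypothetical credentials -- register a "script" app at reddit.com/prefs/apps.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="battlefront-tag-scraper",
)

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on", "with"}

def collect_terms(subreddit="StarWarsBattlefront", limit=200):
    """Count candidate tag terms from recent post titles."""
    counts = collections.Counter()
    for post in reddit.subreddit(subreddit).hot(limit=limit):
        for word in re.findall(r"[a-z']+", post.title.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts.most_common(30)

print(collect_terms())
```

The resulting term counts could then be matched against video titles and descriptions to produce candidate Battlefront tags.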

