Week 5: Quarters!

Star Wars Battlefront Video Analysis

Previously, we ran static Star Wars-related images through the Google Vision API. We had some success with the “Web Entities” results, which aggregate information about where an image appears on the web and its associated search terms. Google was able to identify iconic content such as a clone trooper or a Wookiee. The image content (label) detection, however, was not as good: the API was not able to identify Star Wars-specific content (Clone Trooper, Darth Vader, etc.). This is understandable, as the API was trained on real-world images.
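
For reference, here is a minimal sketch of the kind of query we ran, using the google-cloud-vision Python client. The filename is a placeholder, and this approximates our experiment rather than reproducing our exact script.

```python
# Minimal sketch of our Vision API experiment (assumes google-cloud-vision
# is installed and GOOGLE_APPLICATION_CREDENTIALS points to GCP credentials).
import io

from google.cloud import vision

client = vision.ImageAnnotatorClient()

# "clone_trooper.jpg" is a placeholder for one of our test images.
with io.open("clone_trooper.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Generic content labels -- these rarely name Star Wars-specific entities.
labels = client.label_detection(image=image).label_annotations
print("Labels:", [(label.description, round(label.score, 2))
                  for label in labels])

# Web Entities -- aggregated from pages and searches where the image appears,
# which is why they work for iconic images but fail for unseen screenshots.
web = client.web_detection(image=image).web_detection
print("Web entities:", [(entity.description, round(entity.score, 2))
                        for entity in web.web_entities])
```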

We tried to analyze Battlefront game videos by running screenshots from the videos through the Google Vision API. The results were not promising for either content identification or “Web Entities”, since the screenshots do not yet appear anywhere on the internet. We discussed this obstacle in our client meeting and decided to try an alternate way of obtaining video tags instead. This time, we are looking to parse social media (Twitter, Reddit) for Battlefront-related terms, hoping that the collected terms can be used to search for and tag Battlefront videos.
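
As a first pass, the term scraper might look like the sketch below, which uses the PRAW library to pull post titles from a Battlefront subreddit and count the most frequent words. The subreddit, credentials, and stop-word list are placeholder assumptions, not our final setup.

```python
# Rough sketch of the Reddit half of the term scraper (Twitter would follow
# the same pattern). Assumes PRAW is installed and Reddit API credentials exist.
import re
from collections import Counter

import praw

# Placeholder credentials -- register an app at reddit.com/prefs/apps.
reddit = praw.Reddit(client_id="YOUR_ID",
                     client_secret="YOUR_SECRET",
                     user_agent="battlefront-term-scraper")

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on"}

counts = Counter()
# r/StarWarsBattlefront is an assumed target subreddit.
for submission in reddit.subreddit("StarWarsBattlefront").hot(limit=200):
    words = re.findall(r"[a-z']+", submission.title.lower())
    counts.update(word for word in words if word not in STOP_WORDS)

# The most frequent terms become candidate tags for finding Battlefront videos.
print(counts.most_common(25))
```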

Data Visualization Research

Meanwhile, we also read a few books on data visualization design (Visualization Analysis and Design by Tamara Munzner, Envisioning Information by Edward R. Tufte). The books taught us several design principles and strategies we can use to effectively present an information-heavy dataset. Since we each read different books, we shared our individual takeaways with one another.

Quarters

This week, we had our Quarter Walkarounds! A handful of EA employees visited our space and tried out our prototype. Here is our Quarters video:

[Video: Quarters walkaround]

From the walkarounds, we received a lot of feedback and gained additional contacts to connect with. Next week, we will see whether our Reddit/Twitter scraper is effective at collecting relevant Star Wars tags.

Week 4: Prototype sharing and direction

On Tuesday, we showed our client our Tumblr prototype with a simple front-end mockup. He found it interesting and wanted to see how effective the labeling would be in a video context. Since the launch of Battlefront is coming up, he also asked us to pivot to the Battlefront franchise.

Hence, we began to look at how effective the Google Vision API is on Star Wars-related images.

From the image above, we can see that the labeling is not very good: the API cannot identify the meaningful objects in the picture. The Web Entities results, however, were much more relevant; “Stormtrooper”, for example, was correctly identified. That said, the web entities function relies on Google’s search results and user-related data, so it struggles when the content is new (recent uploads) and has not yet accumulated either.

We met with a marketing analyst and gained valuable insights into how the player base is segmented and the tools the team uses for data collection.

We have also finalized our branding materials! Yay!

Logo

Poster

Half-sheet

Next week, we will focus on building the YouTube scraper and seeing whether we can obtain useful information from parsing Battlefront videos. We will also build an interactive front end for our Quarter walkarounds so that visitors can interact with the prototype.
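
One plausible shape for that scraper is sketched below, using the official google-api-python-client against the YouTube Data API v3. The API key and query are placeholders; treat this as a starting point rather than our final code.

```python
# Sketch of the planned YouTube scraper (assumed approach, not final code).
# Requires google-api-python-client and a YouTube Data API v3 key.
from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")  # placeholder key

# Search for recent Battlefront videos and collect their titles, which we can
# later match against the terms gathered from social media.
request = youtube.search().list(
    q="Star Wars Battlefront",
    part="snippet",
    type="video",
    order="date",
    maxResults=25,
)
response = request.execute()

for item in response["items"]:
    print(item["id"]["videoId"], "-", item["snippet"]["title"])
```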

Week 3: Prototyping a Tumblr scraper

This week, we got busy creating a simple Sims 4 Tumblr image scraper. The scraped images are parsed by Google’s Cloud Vision API to obtain tags. Here’s a screenshot of the results:

From this screenshot, we can see that “girl” is the most common tag among all the pictures we collected from Tumblr.
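
For the curious, the pipeline behind that screenshot roughly follows the sketch below, which uses the pytumblr client to fetch tagged photo posts and tallies the Vision API labels. The credentials and tag are placeholders, and our actual prototype differs in the details.

```python
# Approximate shape of our Tumblr-to-Vision pipeline (details simplified).
# Assumes pytumblr, google-cloud-vision, and requests are installed.
from collections import Counter

import pytumblr
import requests
from google.cloud import vision

# Placeholder Tumblr API credentials (register at tumblr.com/oauth/apps).
tumblr = pytumblr.TumblrRestClient("KEY", "SECRET", "TOKEN", "TOKEN_SECRET")
vision_client = vision.ImageAnnotatorClient()

tag_counts = Counter()
for post in tumblr.tagged("sims 4"):  # recent posts with the "sims 4" tag
    for photo in post.get("photos", []):  # only photo posts carry this key
        content = requests.get(photo["original_size"]["url"]).content
        labels = vision_client.label_detection(
            image=vision.Image(content=content)).label_annotations
        tag_counts.update(label.description for label in labels)

# In our run, "girl" came out on top.
print(tag_counts.most_common(10))
```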


Besides the prototype, we worked on our branding materials:

Team Logos

Team Photo

Team Poster

What’s next:

Currently, we’re building a rough UI for the prototype. Next, we’ll work on scraping data from forums and researching social media and player behavior. We want to transform this information so that it’s useful to game developers at EA. How can we gain further insights from all this data? How does it reflect player sentiment about the game? We aim to find out.

Weeks 1 and 2: Client meeting and brainstorming

Hi everyone! We’re Team Tada, currently at the SV campus, working with EA to create an internal data visualization tool for developers.

In the first week, we met with our client to find out his requirements and vision for the tool. To gain further insight into the needs of our users, we also met with a community manager for the Sims franchise and an analyst working on the Battlefront and Titanfall titles. Based on their input, our team brainstormed several ideas with the criteria of innovation and practicality in mind.

Jiayang in our brainstorming session

Based on our brainstorming and what we learned from the people we met, we decided on three main ideas to pitch to our client in the second week:

  1. Social Hub. Categorizes and analyzes user-generated content from multiple social channels for a game.
  2. Cross-title Dashboard. Shows basic metrics for all titles across a timeline, including offline and online events.
  3. 3D data visualization. Visualizes data using WebVR, or uses AR/projection to view activity happening across a game map. (We had to put a crazy one in there.)

After our mini pitch presentation, we decided to move forward with the social hub idea, augmented with the timeline functionality from our cross-title dashboard idea. The 3D data visualization was too extreme, but the client appreciated our effort to be creative.

Lessons learned

Given a broadly defined project topic with multiple areas to explore, it is important to understand the users’ needs. To do that, we need to meet with as many people from different departments as we can. A clear vision that targets those requirements will narrow our scope and increase the effectiveness of our solution: we have to understand exactly what our users need and how our tool can help the development team address it.

Next week, we’ll focus on creating various branding materials and building a prototype of the product.