Week 9: Halves!

This week, we focused mainly on our Halves presentation. Here’s the link to Halves!

Right now, we’re working hard to put our prototype up on the web so that Pittsburgh faculty can see it.

We’re also pressing on with UI iterations, as we want to finalize the design by next week.

Week 8: Preparing for Halves

Halves is around the corner! Meanwhile, the team has been working hard on developing the prototype.

Current Prototype


This is our current prototype, after restructuring the database and stemming the collected labels. We can see the videos posted for a given label, as well as how frequently each label appears on YouTube. An important piece of feedback we received after showing this prototype is that users want to interact with the groupings and customize them themselves: community managers will want to pair player taxonomies with their own internal ones, and to sort videos by their own metrics. Hence, we’re currently working on a meta-label feature that categorizes a group of terms together, eliminating duplicate terms that share the same meaning in the game’s context. This feature allows users to create their own custom labels and categorize videos their own way.
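
To make the meta-label idea concrete, here’s a minimal Python sketch. The label names and groupings are made up, and the use of NLTK’s Porter stemmer is an assumption for illustration, not our actual pipeline:

```python
from collections import defaultdict

from nltk.stem import PorterStemmer  # pip install nltk

stemmer = PorterStemmer()

# Hypothetical meta-labels: each maps a user-chosen name to the raw
# community terms it should absorb (duplicates in the game's context).
META_LABELS = {
    "heroes": ["hero", "heroes", "heroic"],
    "maps": ["map", "maps", "level", "levels"],
}

# Invert to a lookup table keyed on the *stemmed* form of each term,
# so inflected variants all collapse onto one meta-label.
stem_to_meta = {
    stemmer.stem(term): meta
    for meta, terms in META_LABELS.items()
    for term in terms
}

def group_videos(videos):
    """Bucket (label, video_id) pairs under their meta-label.

    Labels with no meta-label keep their stemmed form as a fallback.
    """
    groups = defaultdict(list)
    for label, video_id in videos:
        stem = stemmer.stem(label.lower())
        groups[stem_to_meta.get(stem, stem)].append(video_id)
    return groups

print(dict(group_videos([("Heroes", "vid1"), ("heroic", "vid2"), ("Maps", "vid3")])))
# {'heroes': ['vid1', 'vid2'], 'maps': ['vid3']}
```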

UI

We have also been working hard on iterating and exploring possible UI options, drawing inspiration from many sources such as Knowledge is Beautiful.

Currently, we plan to have an overview page that displays aggregate data about the terms, followed by an in-depth page detailing the videos posted, as seen in our prototype. Creating an elegant and informative UI for aggregate data is challenging, so we’re constantly iterating and sharing our designs with our instructors and the client.

We’ll be preparing for Halves from today until next Tuesday. See you!

Week 7: Rapid UI Iterations, Research and Development

Hi again! This week, we started gearing up for Halves, working toward a common understanding of what our product can do and what the UI should display.

We read several papers related to our current goal of aggregating videos and mapping them to player-defined labels. Here’s a summary of each paper and how it could be relevant:

1. Trend Detection in Social Data, Hendrickson et al.

The paper discusses mathematical models for detecting trending hashtags in Twitter posts. We can adopt some of these models to detect trending terms in Star Wars Battlefront 2 (SWBF2)-related tweets. However, given the limited amount of SWBF2 data available, the models might be inaccurate. (A sketch of the simplest such model follows this list.)

2. Learning to Hash-tag Videos with Tag2Vec, Singh et al.

The authors describe how they trained a neural network to learn hashtag embeddings in a vector space from a set of training videos and hashtags. Then, using Improved Dense Trajectory (IDT) features, they analyze videos and map the resulting Fisher vectors into the tag space. This would let us analyze new videos and map them into an already-established tag space; however, our lack of data makes it difficult to train such a model. (The second sketch after this list illustrates the tag-embedding half.)

3. Few Example Video Event Retrieval using Tag Propagation, Mazloom et al.

Here, neighbor voting over concept-vector features is used to predict likely events (tags) in new videos. Hence, no training data is needed to construct a model that predicts the content of new videos. However, the reported prediction accuracy is still generally low. (The third sketch below shows the voting step.)
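
To give a flavour of what the trend-detection models look like, here’s a minimal Python sketch of the simplest approach: flag a term as trending when its recent mention rate is a multiple of its historical baseline. The window size, threshold, and data layout are our own illustrative assumptions, not the paper’s:

```python
from collections import Counter

def trending_terms(hourly_counts, recent_hours=6, threshold=2.0):
    """Flag terms whose recent mention rate is a multiple of their baseline.

    hourly_counts: list of Counter objects, one per hour (oldest first),
    mapping term -> number of mentions in that hour.
    """
    recent = Counter()
    baseline = Counter()
    for hour in hourly_counts[-recent_hours:]:
        recent.update(hour)
    for hour in hourly_counts[:-recent_hours]:
        baseline.update(hour)

    n_baseline_hours = max(len(hourly_counts) - recent_hours, 1)
    trending = {}
    for term, count in recent.items():
        recent_rate = count / recent_hours
        # Smooth the baseline so brand-new terms don't divide by zero.
        baseline_rate = (baseline[term] + 1) / n_baseline_hours
        ratio = recent_rate / baseline_rate
        if ratio >= threshold:
            trending[term] = ratio
    return sorted(trending, key=trending.get, reverse=True)
```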
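
We can’t reproduce the paper’s IDT video features in a few lines, but the tag-embedding half has an accessible analogue: treat the tags attached to each video as a “sentence” and train word2vec-style embeddings over them, so tags that co-occur end up close together. A sketch using gensim (version 4 or later assumed; the tag sets are made up):

```python
from gensim.models import Word2Vec  # pip install gensim

# Hypothetical training data: each inner list is the set of tags
# attached to one scraped video.
tag_sets = [
    ["swbf2", "naboo", "heroes"],
    ["swbf2", "naboo", "gameplay"],
    ["swbf2", "starfighter", "gameplay"],
]

# Skip-gram over co-occurring tags: tags that appear on the same
# videos end up close together in the embedding space.
model = Word2Vec(tag_sets, vector_size=32, window=10, min_count=1, sg=1)

# Tags most similar to "naboo" under the learned embedding.
print(model.wv.most_similar("naboo", topn=3))
```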
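
Finally, here’s a minimal sketch of the neighbor-voting step in the spirit of Mazloom et al.: rank candidate tags for a new video by how many of its nearest labeled neighbors carry them. How the concept vectors are extracted is out of scope here, so the function simply assumes they exist:

```python
import numpy as np

def propagate_tags(new_vec, library_vecs, library_tags, k=5):
    """Neighbor-voting tag propagation.

    new_vec: feature (concept) vector of the unlabeled video.
    library_vecs: (n_videos, dim) array of vectors for labeled videos.
    library_tags: list of tag sets, parallel to library_vecs.
    Returns tags ranked by how many of the k nearest neighbors carry them.
    """
    # Cosine similarity between the new video and every labeled video.
    norms = np.linalg.norm(library_vecs, axis=1) * np.linalg.norm(new_vec)
    sims = library_vecs @ new_vec / np.maximum(norms, 1e-9)
    neighbors = np.argsort(sims)[::-1][:k]

    votes = {}
    for idx in neighbors:
        for tag in library_tags[idx]:
            votes[tag] = votes.get(tag, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)
```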

Overall, the papers gave us several features that we could implement in our product. More importantly, they gave us an idea of how videos can be mapped to terms and labels in general. Hence, in the next iteration of our product, we will move toward building video-term mapping, categorization, and term-term relationships (one cheap starting point for the latter is sketched below).
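
For term-term relationships in particular, the cheapest signal we could start from is co-occurrence: how often two terms are attached to the same video. A minimal sketch of that idea (a starting point we’re considering, not a settled design):

```python
from collections import Counter
from itertools import combinations

def term_cooccurrence(tag_sets):
    """Count how often two terms are attached to the same video.

    A high count suggests the terms are related in the game's context,
    which is one cheap way to bootstrap term-term relationships.
    """
    pairs = Counter()
    for tags in tag_sets:
        for a, b in combinations(sorted(set(tags)), 2):
            pairs[(a, b)] += 1
    return pairs

print(term_cooccurrence([["naboo", "heroes"], ["naboo", "heroes", "hvv"]]))
```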


On the UI side, we created more mockups based on our client’s feedback and shared them.

Overall, we received important feedback on the good and bad parts of the three mockups. We also came across http://histography.io/, which will serve as the main source of inspiration for the next iteration of our UI. Designing a UI that is interactive, informative, and elegant is one of this project’s biggest challenges.

Next week, we’ll focus on developing our product features and move towards finalizing our UI.


Week 6: Videos by Tags

Apologies for the late post! Jiayang and I (Christopher) were away for a conference for the past week.

Last week, we ran the scraper through Battlefront 2’s open beta phase. Right now, we’re still collecting tags and checking whether they’re meaningful. We also iterated on the interface display based on our client’s thoughts on last week’s prototypes.
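
As an illustration of the kind of collection involved, here’s a simplified sketch that pulls uploader-supplied tags through the public YouTube Data API v3. It is a stand-in, not our production scraper: search, pagination, and quota handling are omitted, and the environment-variable name is hypothetical:

```python
import os

import requests

API_KEY = os.environ["YOUTUBE_API_KEY"]  # hypothetical env var

def fetch_video_tags(video_ids):
    """Fetch uploader-supplied tags for a batch of YouTube videos.

    Uses the public YouTube Data API v3 `videos` endpoint and returns
    a mapping of video id -> list of tags (empty if the uploader set none).
    """
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={
            "part": "snippet",
            "id": ",".join(video_ids),  # up to 50 ids per request
            "key": API_KEY,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return {
        item["id"]: item["snippet"].get("tags", [])
        for item in resp.json().get("items", [])
    }
```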

In this mockup, videos are grouped under map names. This allows for easier cross-comparison of videos between maps, as well as comparisons between videos within each category.

Drawing inspiration from Netflix and other sources, we also have several prototypes of how videos could be displayed more creatively, such as using bubbles to form meaningful shapes.

Currently, we are also reading several relevant papers:

  1. Learning to Hash-tag Videos with Tag2Vec, Singh et al.
  2. Few Example Video Event Retrieval, Mazloom et al.
  3. Trend Detection in Social Data, Hendrickson et al.

This week, we’ll consolidate our findings from these papers and the terms we obtained from the open beta phase, and share them with our client. That discussion should solidify our next step.