Week 11 – Searchable videos and personal prognosis info


The searchable video library section is now well under way, and we have started implementing the personal prognosis information section.

Searchable video library

The goal of this section is to let patients who already know what they want to ask jump directly to the answer. We faced two main technical challenges here: how to make sense of the question they type in, and how to use that information to return the most relevant videos.

For the first problem, extracting useful information from each question ideally calls for some form of natural language processing (NLP). We looked at several options, including Stanford JavaNLP, Apache OpenNLP, and Alexa Voice Service. In the end, we settled on OpenNLP, mainly because it is easy to learn and has good documentation and resources.
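To give a flavour of how this looks in practice, here is a minimal sketch (assuming OpenNLP's standard pre-trained English part-of-speech model, en-pos-maxent.bin, which is downloaded separately; this is illustrative, not our final code):

    import java.io.FileInputStream;
    import java.io.IOException;

    import opennlp.tools.postag.POSModel;
    import opennlp.tools.postag.POSTaggerME;
    import opennlp.tools.tokenize.SimpleTokenizer;

    public class QuestionTagger {
        public static void main(String[] args) throws IOException {
            // Load OpenNLP's pre-trained English part-of-speech model.
            try (FileInputStream modelIn = new FileInputStream("en-pos-maxent.bin")) {
                POSTaggerME tagger = new POSTaggerME(new POSModel(modelIn));

                // Split the question into tokens, then tag each token with
                // its Penn Treebank part-of-speech label.
                String[] tokens = SimpleTokenizer.INSTANCE.tokenize(
                        "How might my life change with an LVAD?");
                String[] tags = tagger.tag(tokens);

                for (int i = 0; i < tokens.length; i++) {
                    System.out.println(tokens[i] + " -> " + tags[i]);
                }
            }
        }
    }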

We also did some research into what sort of information in each question would actually be useful, and decided that what we wanted was simply the different types of words: question words such as "how", "what", and "where", plus verbs and nouns. For example, for the question "How might my life change with an LVAD?", the words we want to extract are "how", "life", "change", and "LVAD".
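Concretely, that filtering step might look like the sketch below, which keeps tokens whose Penn Treebank tags (as produced by the tagger above) mark wh-words, nouns, or verbs:

    import java.util.ArrayList;
    import java.util.List;

    public class KeywordExtractor {
        // Wh-words (WRB, WP, WDT), nouns (NN, NNS, NNP...), verbs (VB, VBZ...).
        static boolean isKeywordTag(String tag) {
            return tag.startsWith("W") || tag.startsWith("NN") || tag.startsWith("VB");
        }

        // Given the parallel token/tag arrays from the tagger above,
        // return just the content words of the question, lower-cased.
        static List<String> extractKeywords(String[] tokens, String[] tags) {
            List<String> keywords = new ArrayList<>();
            for (int i = 0; i < tokens.length; i++) {
                if (isKeywordTag(tags[i])) {
                    keywords.add(tokens[i].toLowerCase());
                }
            }
            return keywords;
        }
    }

For "How might my life change with an LVAD?" this yields [how, life, change, lvad], since "might" (MD), "my" (PRP$), "with" (IN), and "an" (DT) are all filtered out.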

After successfully extracting this information from each question, our next step was to write richer descriptions for each video and then index them. For example, the questions "What is an LVAD?" and "How does an LVAD work?" should return the same video, even though their extracted words differ ("what", "LVAD" vs. "how", "LVAD", "work"). This means we need to make sure the description of that video includes all of those words.
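A simple way to do the matching, sketched here with made-up video IDs and keyword sets, is to index each video's expanded description as a set of keywords and rank videos by how many of the question's keywords appear in each set:

    import java.util.Comparator;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.stream.Collectors;

    public class VideoIndex {
        // Hypothetical index: each video ID maps to the keyword set
        // drawn from its expanded description.
        static final Map<String, Set<String>> INDEX = Map.of(
                "what-is-an-lvad", Set.of("what", "how", "lvad", "work", "pump", "heart"),
                "life-with-an-lvad", Set.of("how", "life", "change", "lvad", "daily"));

        // Rank video IDs by how many of the question's keywords
        // appear in each video's description keyword set.
        static List<String> rank(List<String> questionKeywords) {
            return INDEX.entrySet().stream()
                    .sorted(Comparator.comparingLong(
                            (Map.Entry<String, Set<String>> e) -> questionKeywords.stream()
                                    .filter(e.getValue()::contains).count())
                            .reversed())
                    .map(Map.Entry::getKey)
                    .collect(Collectors.toList());
        }
    }

With an index like this, both "what", "lvad" and "how", "lvad", "work" rank the same explainer video first, which is exactly the behaviour we want from the example above.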

Personal Prognosis Information

Last week, we were overwhelmed by the amount of data each patient had. The good news is that we have now combined much of it and narrowed it down to three factors: mortality rate, chance of bleeding, and chance of right ventricular failure. These factors are active models with calculated risk outputs, which means the prognosis is computed in real time rather than stored in the database.
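In the app, this might take a shape like the sketch below (the names and structure are ours for illustration; the actual risk calculations belong to the client's models):

    // Hypothetical shape of the real-time prognosis layer; the actual
    // risk calculations belong to the client's models.
    interface RiskModel {
        // Returns a risk estimate between 0 and 1 for the given inputs.
        double estimate(PatientData patient);
    }

    // Placeholder for whatever patient fields the models actually need.
    class PatientData { /* age, labs, device settings, ... */ }

    class PrognosisService {
        private final RiskModel mortality;
        private final RiskModel bleeding;
        private final RiskModel rightVentricularFailure;

        PrognosisService(RiskModel mortality, RiskModel bleeding, RiskModel rvFailure) {
            this.mortality = mortality;
            this.bleeding = bleeding;
            this.rightVentricularFailure = rvFailure;
        }

        // Computed on demand each time the screen is shown; nothing
        // here is read from or written to the database.
        double[] prognosisFor(PatientData patient) {
            return new double[] {
                    mortality.estimate(patient),
                    bleeding.estimate(patient),
                    rightVentricularFailure.estimate(patient)
            };
        }
    }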


With that done, we have been communicating with the client's tech team to figure out how to get that data. It is stored on their server, and we originally thought we would have to connect to the database on it to pull the relevant information. However, today we received a great piece of news: we can simply use HTTP POST and GET requests to log in and fetch the data!
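As a rough sketch of how that could work (the URLs and form fields below are placeholders, not the client's real endpoints), a cookie-aware HTTP client can POST the login form and then GET the data with the resulting session cookie:

    import java.net.CookieManager;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PrognosisDataClient {
        public static void main(String[] args) throws Exception {
            // The cookie manager keeps the session cookie from the login response.
            HttpClient client = HttpClient.newBuilder()
                    .cookieHandler(new CookieManager())
                    .build();

            // POST the login form (placeholder URL and field names).
            HttpRequest login = HttpRequest.newBuilder()
                    .uri(URI.create("https://example-clinic-server/login"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString("username=USER&password=PASS"))
                    .build();
            client.send(login, HttpResponse.BodyHandlers.discarding());

            // GET the patient's risk data once logged in (placeholder path).
            HttpRequest data = HttpRequest.newBuilder()
                    .uri(URI.create("https://example-clinic-server/patients/123/risk"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(data, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }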

Once this part is implemented, we can start using the data to generate dynamic diagrams and infographics for the patients. We have already started designing these graphics, and should be done with them before Thanksgiving.


On that note, our UI design for the entire app is now more or less done too, pending approval from both our client and our faculty advisors. Once we have their go-ahead, we'll start integrating it with the working code we have. Our project is finally beginning to come together, and all of us are really excited about it!