We made steady progress this week, and we are more confident than ever about our project. Specifically, we accomplished the following:
1) We laid down the framework for our web platform on Amazon cloud servers, and the entire pipeline now works. We successfully integrated SculptGL into our platform, so we can leverage its full feature set: users can scan, upload, and modify a model on our web platform, and once they click the play button, the platform plays a piece of music for them. We are all amazed by our Nickel speed.
2) So far, the music played on the web platform is generated programmatically from a random note sequence. We already have a main melody played by piano and one track of piano accompaniment, and we love how the generated music sounds! Once we finish the mesh analysis, we will use real mesh data to drive the music (see the playback sketch after this list).
3) We finalized our idea for using the z-axis information: we will map the z value to the richness of the music, i.e., add more music layers on top of the main melody (e.g., more instruments) as the value grows. A sketch of this mapping also follows the list.
4) We did a lot of visual art research and are still nailing down the art style. Our artist already has some pictures in mind, and she is going to share them with us next week.
5) We also established our development environment and a weekly routine to maintain the build. We use Git for version control and WebStorm as our IDE. Every Friday, all programmers work on producing a stable build so that we can show our latest progress to anyone who is interested.
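To make point 2 concrete, below is a minimal sketch of how a random note sequence can be generated and played in the browser. It assumes the Web Audio API with a plain oscillator voice; our actual piano playback (sample library, scheduling, track mixing) may well differ, and every name here is illustrative.

```ts
// Minimal sketch: build a random note sequence and play it with the Web Audio
// API. Assumes a browser environment; the scale, note length, and oscillator
// voice are placeholders, not our final piano implementation.

const C_MAJOR_PENTATONIC = [60, 62, 64, 67, 69]; // MIDI note numbers

function midiToFrequency(midi: number): number {
  return 440 * Math.pow(2, (midi - 69) / 12);
}

// Pick `length` random notes from the scale.
function randomMelody(length: number): number[] {
  return Array.from({ length }, () =>
    C_MAJOR_PENTATONIC[Math.floor(Math.random() * C_MAJOR_PENTATONIC.length)]
  );
}

// Schedule each note as a short decaying oscillator tone.
function playMelody(ctx: AudioContext, notes: number[], noteDuration = 0.4): void {
  notes.forEach((midi, i) => {
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    const start = ctx.currentTime + i * noteDuration;

    osc.frequency.value = midiToFrequency(midi);
    gain.gain.setValueAtTime(0.3, start);
    gain.gain.exponentialRampToValueAtTime(0.001, start + noteDuration);

    osc.connect(gain).connect(ctx.destination);
    osc.start(start);
    osc.stop(start + noteDuration);
  });
}

// Usage: hook this up to the play button.
const audioCtx = new AudioContext();
playMelody(audioCtx, randomMelody(8));
```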
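The z-axis idea from point 3 could look something like the sketch below, assuming we end up with a normalized height value in [0, 1] for a region of the model. The layer names and thresholds are stand-ins for whatever instrumentation we actually choose.

```ts
// Sketch: map a normalized z value to the set of music layers that should be
// audible, so taller regions of the sculpture produce richer music.
// Layer names and thresholds are illustrative assumptions.

interface Layer {
  name: string;      // an instrument or accompaniment track
  threshold: number; // minimum normalized z at which this layer plays
}

const LAYERS: Layer[] = [
  { name: "piano-melody", threshold: 0.0 },  // always present
  { name: "piano-chords", threshold: 0.25 },
  { name: "strings-pad",  threshold: 0.5 },
  { name: "percussion",   threshold: 0.75 },
];

function layersForHeight(z: number): string[] {
  const clamped = Math.min(1, Math.max(0, z));
  return LAYERS.filter((layer) => clamped >= layer.threshold).map((l) => l.name);
}

// A flat region plays only the melody; a tall peak plays everything.
console.log(layersForHeight(0.1)); // ["piano-melody"]
console.log(layersForHeight(0.9)); // all four layers
```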
Build of the Week
– http://54.201.139.142/obj2music/ There is not much to play around with yet. Users can upload any object and modify it using the tools SculptGL already provides. However, the music is not yet generated from the model; it comes from a pre-defined random notation sequence. We also haven't started building the visual part, since we are still researching the art style and visualization presentation.
Challenges
Though we made great progress this week, we still face several challenges/risks for next week: 1) designing a robust and fast algorithm to convert mesh data into recognizable input for our music generation; 2) deciding how to best present our unique visual + acoustic user experience; 3) making the music more exciting and richer. A rough sketch of one possible mesh-to-music conversion follows.
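One rough candidate for the first challenge is to read vertex positions from the uploaded OBJ file, walk them along the x axis, and quantize each sampled z height to a note in a scale. The sketch below is only a starting point under those assumptions, not our final algorithm; the parsing helper and the scale are placeholders.

```ts
// Rough sketch: convert mesh vertex data into a note sequence by sampling
// z heights along the x axis. The OBJ parsing helper and the scale are
// assumptions for illustration only.

const SCALE = [60, 62, 64, 65, 67, 69, 71, 72]; // C major, MIDI numbers

// Parse "v x y z" lines from OBJ text into [x, y, z] triples.
function parseObjVertices(objText: string): [number, number, number][] {
  return objText
    .split("\n")
    .filter((line) => line.startsWith("v "))
    .map((line): [number, number, number] => {
      const parts = line.trim().split(/\s+/);
      return [Number(parts[1]), Number(parts[2]), Number(parts[3])];
    });
}

// Sample the vertices in x order and turn each z height into a scale note.
function verticesToNotes(vertices: [number, number, number][], noteCount = 16): number[] {
  if (vertices.length === 0) return [];
  const sorted = [...vertices].sort((a, b) => a[0] - b[0]);
  const zs = sorted.map((v) => v[2]);
  const zMin = Math.min(...zs);
  const zRange = Math.max(...zs) - zMin || 1;

  const notes: number[] = [];
  for (let i = 0; i < noteCount; i++) {
    const v = sorted[Math.floor((i / noteCount) * sorted.length)];
    const t = (v[2] - zMin) / zRange; // normalize z to [0, 1]
    notes.push(SCALE[Math.min(SCALE.length - 1, Math.floor(t * SCALE.length))]);
  }
  return notes;
}
```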
Schedule for Next Week
1) Mesh analysis 2) Visual presentation design and a basic UI for interacting with the model 3) Increasing the richness of the music by experimenting with our z-axis idea 4) Choosing a project name!