This is a web service that automatically analyses the music you upload and shows you an exciting, colorful 3D tour visualizing the music as you listen. Sorry, there is no working online system for now: the backend analysis requires intensive computing resources.
NOTE: This system was developed by a group of four within 24 hours at HackShanghai.
We apply Harmonic-Percussive Source Separation (HPSS) and detect beats from the percussive component, as sketched below.
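A minimal sketch of this step, assuming librosa (listed in the dependencies below) is used for both HPSS and beat tracking; the file path is illustrative:

```python
import librosa

# Load audio, split it into harmonic and percussive components,
# and track beats on the percussive part only.
y, sr = librosa.load("demo/song.mp3")  # hypothetical demo file
y_harmonic, y_percussive = librosa.effects.hpss(y)
tempo, beat_frames = librosa.beat.beat_track(y=y_percussive, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)
print(tempo, beat_times[:5])
```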
Arousal and valence (A/V) are two commonly used metrics for emotion detection. We trained a model to predict music arousal/valence from various signal features using gradient boosting trees. The model performed well on the public "Emotion in Music" dataset; see "Presentation.pdf" for details.
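A minimal sketch of the regression setup, assuming scikit-learn's gradient boosting trees; the random arrays stand in for the real signal features and A/V labels:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))            # 500 frames x 40 signal features (placeholder)
y_arousal = rng.uniform(-1, 1, size=500)  # placeholder A/V labels in [-1, 1]
y_valence = rng.uniform(-1, 1, size=500)

# One regressor per target, as is common for A/V prediction.
arousal_model = GradientBoostingRegressor().fit(X, y_arousal)
valence_model = GradientBoostingRegressor().fit(X, y_valence)

# Predict per-frame A/V for new feature vectors.
print(arousal_model.predict(X[:5]), valence_model.predict(X[:5]))
```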
The backend predicts A/V values for the uploaded music at each point in time and sends them to the frontend together with the detected beats. The frontend uses this analysis to render a fantastic 3D tour built on Light.js and Three.js, which sparks on the beats and changes its activity as the music gets more or less exciting.
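A minimal sketch of what the backend response could look like, assuming a Flask JSON endpoint (Flask is listed in the dependencies below); the route and payload field names are illustrative, not the project's actual API:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/analysis/<song_id>")
def analysis(song_id):
    # In the real service these values come from the analysis pipeline
    # (or its cache); hard-coded here for illustration only.
    return jsonify({
        "beats": [0.52, 1.04, 1.55],  # beat times in seconds
        "arousal": [0.1, 0.3, 0.6],   # per-window A/V predictions
        "valence": [0.2, 0.2, 0.4],
    })

if __name__ == "__main__":
    app.run()
```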
- scikit-learn
- scikits.samplerate
- librosa
- Bregman
- Flask
- Light.js
- Highcharts.js
- Three.js
```
cd emotion-model
./run_music_analyze_server.py server-conf.py
```
Analysing an uploaded piece of music requires intensive computing. You can try the songs in the 'demo' directory, whose analysis results are cached inside this repo.