kitchen soundscapes

Autoencoder-generated neural-network music based on my musical opinion of objects in my kitchen

Metis data-science bootcamp passion project, Mar 01 - Mar 25, 2021

See the final product

See the blog post

The project was presented; see the slides!

Summary: A website view of a neural network's generation of music, inspired by my own music about objects in my kitchen. The approach treats music as an image rather than as text: notes and chords are extracted from input MIDI files, and an autoencoder is trained on that information.


Modules used:

  • music21
  • jupyter notebook
  • autoencoder
  • lstm
  • other modules: scikit-learn, numpy

The data:

  • The dataset can be found in the "music" folder

The process:

  • The basic idea is to use load_songs.py (or get_songs, as in the Jupyter notebooks) to retrieve notes/chords
  • We then encode them with one-hot encoding (in the case of basic_autoencoder.py and lstm.py) or into a sparse matrix, as in autoencoder_with_sparse_matrix.py
  • We train an autoencoder/LSTM on that data and generate a note based on a random starting point
  • We can decode that back into notes and download it as a .mid file (see the sketches after this list)
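A minimal sketch of the extraction step, assuming the MIDI files live in the "music" folder described above; the function name get_notes and the glob pattern are illustrative stand-ins for load_songs.py / get_songs:

```python
import glob
from music21 import converter, instrument, note, chord

def get_notes(midi_dir="music"):
    """Parse every .mid file in midi_dir and return a flat list of note/chord strings."""
    notes = []
    for path in glob.glob(f"{midi_dir}/*.mid"):
        midi = converter.parse(path)
        parts = instrument.partitionByInstrument(midi)
        # Use the first instrument part if one exists, otherwise the flat note stream
        elements = parts.parts[0].recurse() if parts else midi.flat.notes
        for element in elements:
            if isinstance(element, note.Note):
                notes.append(str(element.pitch))                              # e.g. "C4"
            elif isinstance(element, chord.Chord):
                notes.append(".".join(str(n) for n in element.normalOrder))   # e.g. "4.7.11"
    return notes
```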
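A sketch of the encoding and training step, assuming a small Keras dense autoencoder; the actual architectures and hyperparameters in basic_autoencoder.py and lstm.py may differ:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def one_hot_encode(notes):
    """Map each distinct pitch/chord string to a one-hot row vector."""
    vocab = sorted(set(notes))
    index = {name: i for i, name in enumerate(vocab)}
    data = np.zeros((len(notes), len(vocab)), dtype="float32")
    for row, name in enumerate(notes):
        data[row, index[name]] = 1.0
    return data, vocab

def build_autoencoder(n_features, latent_dim=32):
    """Dense encoder/decoder pair; the bottleneck learns a compressed note representation."""
    inputs = keras.Input(shape=(n_features,))
    encoded = layers.Dense(latent_dim, activation="relu")(inputs)
    decoded = layers.Dense(n_features, activation="softmax")(encoded)
    model = keras.Model(inputs, decoded)
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model

# notes = get_notes()                     # from the extraction sketch above
# data, vocab = one_hot_encode(notes)
# model = build_autoencoder(len(vocab))
# model.fit(data, data, epochs=50, batch_size=64)
```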
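Finally, a sketch of generation and decoding back to MIDI. The strategy here (reconstructing randomly chosen inputs and keeping the most probable note each time) is only one reading of "generate a note based on a random starting point"; the repo's scripts may instead sample the latent space or use the LSTM:

```python
import numpy as np
from music21 import note, chord, stream

def generate_and_decode(model, data, vocab, length=100, out_path="output.mid"):
    """Reconstruct randomly chosen inputs, keep the most probable note, write a .mid file."""
    rng = np.random.default_rng()
    names = []
    for _ in range(length):
        seed = data[rng.integers(len(data))][np.newaxis, :]   # random starting point
        pred = model.predict(seed, verbose=0)
        names.append(vocab[int(np.argmax(pred))])

    out = stream.Stream()
    for i, name in enumerate(names):
        offset = i * 0.5                                       # fixed half-beat spacing
        if "." in name or name.isdigit():                      # chord stored as "4.7.11"
            out.insert(offset, chord.Chord([note.Note(int(p)) for p in name.split(".")]))
        else:                                                  # single pitch such as "C4"
            out.insert(offset, note.Note(name))
    out.write("midi", fp=out_path)
```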

On the web:

  • The project can be found here
