KeySight helps people learn the piano by transcribing any piano audio of their choosing and visualizing it in a custom-made video editor.
If you have ever struggled to find a YouTube tutorial for a song you love, or wanted to visualize a great piano cover you have heard, then this is the application for you!
Python and JavaScript are the main languages used, supported by a variety of frameworks that improve development speed and performance:
- Next.js for server-side rendering capabilities and SEO benefits
- TailwindCSS to make styling faster
- Node.js and Express.js to build the REST APIs behind some of the services
- FastAPI for quick API responses and to let KeySight take advantage of Python's machine learning capabilities
- PostgreSQL for our primary data store
- Magenta's Piano Transcription Model to transcribe piano audio to MIDI output
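The transcription model's MIDI output can be thought of as a list of timed notes that the editor then visualizes. A minimal sketch of that data shape in Python (the `Note` fields and helper below are illustrative, not KeySight's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int      # MIDI pitch number; 21 (A0) to 108 (C8) on a standard piano
    start: float    # onset time in seconds
    end: float      # release time in seconds
    velocity: int   # MIDI velocity, 0-127

def to_midi_event_pairs(notes):
    """Flatten notes into (time, "on"/"off", pitch) events sorted by time,
    roughly mirroring how MIDI note-on/note-off messages are ordered."""
    events = []
    for n in notes:
        events.append((n.start, "on", n.pitch))
        events.append((n.end, "off", n.pitch))
    events.sort(key=lambda e: e[0])
    return events
```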
First, you will need to pull the GitHub repo with the following command:
```bash
# Clone the repository
git clone https://github.com/sebat2004/keysight.git
```
If you want to see how the app looks, take the following steps to start up the Next.js application.
```bash
# Change into the web directory
cd keysight/web
# Install dependencies
npm install
# Run the project
npm run dev
```
If you want to interact with the different services we have, follow the next few steps to start up each service. (A Docker Compose file is coming in the near future to make this much easier.)
```bash
# Change into the services directory
cd keysight/services
```
Follow these steps if you want to perform CRUD actions on users:
```bash
# Change into the users directory
cd users
# Install dependencies
npm install
# Create a .env file
touch .env
# Add a DATABASE_URL env variable to connect to a PostgreSQL database
# Start the application
npm run start
```
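The exact `DATABASE_URL` depends on your database setup; a typical PostgreSQL connection URI (the user, password, host, port, and database name below are placeholders) looks like:

```
DATABASE_URL=postgresql://user:password@localhost:5432/keysight
```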