Source code for the paper "Hypernetworks build Implicit Neural Representations of Sounds" (arXiv).
Set up the conda environment:

```
conda env create -f environment.yml
```
Populate the `.env` file with settings from `.env.example`, e.g.:

```
DATA_DIR=~/datasets
RESULTS_DIR=~/results
WANDB_ENTITY=hypersound
WANDB_PROJECT=hypersound
```
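These variables are read from the process environment at run time. The exact loading code lives in the repository; purely as a sketch (assuming the common `python-dotenv` package, which is not necessarily what this project uses), a `.env` file like the one above could be consumed as follows:

```python
import os

from dotenv import load_dotenv  # python-dotenv; an assumption, not necessarily the project's loader

# Read KEY=VALUE pairs from .env into the process environment.
load_dotenv()

# Paths and Weights & Biases settings expected by the training scripts.
data_dir = os.path.expanduser(os.environ["DATA_DIR"])
results_dir = os.path.expanduser(os.environ["RESULTS_DIR"])
wandb_entity = os.environ["WANDB_ENTITY"]
wandb_project = os.environ["WANDB_PROJECT"]
```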
Make sure that `pytorch-yard` is using the appropriate version (defined in `train.py`). If not, correct the package version with something like:

```
pip install --force-reinstall pytorch-yard==2022.9.1
```
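To see which version is currently installed before reinstalling, you can query the package metadata (a generic Python check, not something shipped with this repository):

```python
from importlib.metadata import version

# Compare the installed version against the one pinned in train.py.
print(version("pytorch-yard"))
```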
Default experiment:

```
python train.py
```
Custom settings:

```
python train.py cfg.learning_rate=0.01 cfg.pl.max_epochs=100
```
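These overrides follow an OmegaConf-style dotted-key convention: each `key=value` pair replaces one field of a nested config. The real schema is defined in the repository; the sketch below only illustrates the mechanism with the two fields from the command above, and the class names and defaults are made up:

```python
from dataclasses import dataclass, field

from omegaconf import OmegaConf


@dataclass
class Lightning:
    max_epochs: int = 1000  # made-up default


@dataclass
class Settings:
    learning_rate: float = 1e-4  # made-up default
    pl: Lightning = field(default_factory=Lightning)


@dataclass
class Root:
    cfg: Settings = field(default_factory=Settings)


# Structured base config merged with command-line style dotted overrides.
base = OmegaConf.structured(Root)
overrides = OmegaConf.from_dotlist(["cfg.learning_rate=0.01", "cfg.pl.max_epochs=100"])
merged = OmegaConf.merge(base, overrides)

print(merged.cfg.learning_rate)  # 0.01
print(merged.cfg.pl.max_epochs)  # 100
```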
Isolated training of a target network on a single recording:

```
python train_inr.py
```
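`train_inr.py` fits a single implicit neural representation (a target network mapping time coordinates to amplitudes) to one recording. The script and architecture in the repository are authoritative; purely as an illustration of the idea, a minimal SIREN-style fit to a synthetic waveform might look like this (layer sizes, activation frequency, and training hyperparameters are made up, and the proper SIREN weight initialization is omitted):

```python
import math

import torch
import torch.nn as nn


class Sine(nn.Module):
    """Sine activation used in SIREN-style networks (omega_0 chosen arbitrarily here)."""

    def __init__(self, omega_0: float = 30.0):
        super().__init__()
        self.omega_0 = omega_0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.omega_0 * x)


# Target network: time coordinate t in [-1, 1] -> waveform amplitude.
inr = nn.Sequential(
    nn.Linear(1, 256), Sine(),
    nn.Linear(256, 256), Sine(),
    nn.Linear(256, 256), Sine(),
    nn.Linear(256, 1),
)

# Synthetic "recording": a mixture of two sines stands in for a real audio file.
t = torch.linspace(-1.0, 1.0, 16000).unsqueeze(1)
audio = 0.5 * torch.sin(2 * math.pi * 40 * t) + 0.25 * torch.sin(2 * math.pi * 90 * t)

optimizer = torch.optim.Adam(inr.parameters(), lr=1e-4)
for step in range(1000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(inr(t), audio)
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        print(f"step {step}: mse {loss.item():.6f}")
```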