This is a reimplementation of the unconditional waveform synthesizer in *DiffWave: A Versatile Diffusion Model for Audio Synthesis*.
- To continue training the model, run `python distributed_train.py -c config.json`.
- To retrain the model, change the parameter `ckpt_iter` in the corresponding `json` file to `-1` and use the above command.
- To generate audio, run `python inference.py -c config.json -n 16` to generate 16 utterances.
- Note: you may need to carefully adjust some parameters in the `json` file, such as `data_path` and `batch_size_per_gpu`.
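
As an illustration, the relevant fields in `config.json` might look like the sketch below. Only `ckpt_iter`, `data_path`, and `batch_size_per_gpu` are named in this README; the values shown and the flat key layout are assumptions, so check them against the actual file in the repository:

```json
{
    "ckpt_iter": -1,
    "data_path": "./dataset",
    "batch_size_per_gpu": 2
}
```

Setting `ckpt_iter` to `-1` starts training from scratch; leaving it at a checkpoint iteration resumes from that checkpoint.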