Besides speaker verification, speaker embeddings can be utilized in many related tasks that require speaker modeling, such as:
- voice conversion
- text-to-speech
- speaker adaptive ASR
- target speaker extraction
For users who would like to verify the SV performance or extract speaker embeddings for the above tasks without the trouble of training the speaker embedding learner, we provide two types of pretrained models:
- Checkpoint Model, with suffix `.pt`: the model trained and saved as a checkpoint by the WeSpeaker Python code. You can use it to reproduce our published results, or continue training from it as a checkpoint.
- Runtime Model, with suffix `.onnx`: the runtime model exported from the checkpoint model, for deployment with ONNX Runtime.
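As a quick sanity check, a checkpoint of this kind can be opened directly with PyTorch. The sketch below simply lists the tensors it contains; the file name is hypothetical, and the checkpoint is assumed here to be a plain `state_dict`:

```python
# A minimal sketch, assuming the downloaded checkpoint is a plain state_dict
# saved as avg_model.pt (the actual file name depends on the model you download).
import torch

state_dict = torch.load("avg_model.pt", map_location="cpu")
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
```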
The pretrained models in WeSpeaker follow the licenses of their corresponding datasets. For example, the pretrained models on VoxCeleb follow the Creative Commons Attribution 4.0 International License, since that is the license of the VoxCeleb dataset; see https://mm.kaist.ac.kr/datasets/voxceleb/.
To use a pretrained model in PyTorch format, please refer directly to the `run.sh` in the corresponding recipe.
As for extracting speaker embeddings with the ONNX model, the following is a toy example:
```bash
# Download the pretrained model in onnx format and save it as $onnx_path
# $wav_path is the path to your wave file (16 kHz)
python wespeaker/bin/infer_onnx.py --onnx_path $onnx_path --wav_path $wav_path
```
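If you prefer to call the model from your own Python code, the sketch below shows the rough shape of such an inference pass. The file names, the 80-dimensional fbank front end, and the single feats-to-embedding interface are assumptions on our part; consult `wespeaker/bin/infer_onnx.py` for the authoritative feature configuration:

```python
# A minimal sketch, not the official API: the model/wav file names and the
# 80-dim fbank front end are assumptions; see wespeaker/bin/infer_onnx.py
# for the exact pipeline used at training time.
import onnxruntime as ort
import torchaudio
import torchaudio.compliance.kaldi as kaldi

wav, sample_rate = torchaudio.load("utterance.wav")  # expected to be 16 kHz
feats = kaldi.fbank(wav, num_mel_bins=80, frame_length=25, frame_shift=10,
                    sample_frequency=sample_rate)
feats = feats - feats.mean(dim=0, keepdim=True)      # cepstral mean normalization

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name            # query instead of hard-coding
embedding = session.run(None, {input_name: feats.unsqueeze(0).numpy()})[0]
print(embedding.shape)                               # e.g. (1, embedding_dim)
```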
You can easily adapt `infer_onnx.py` to your own application; a speaker diarization example can be found in the voxconverse recipe.
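For instance, once embeddings are extracted, a common way to use them for verification or diarization is cosine scoring. The helper below is a hypothetical illustration, not part of the toolkit:

```python
# Hypothetical helper, not part of WeSpeaker: score two embeddings by
# cosine similarity, the usual trial score in speaker verification.
import numpy as np

def cosine_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    a, b = emb_a.ravel(), emb_b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Scores close to 1.0 suggest the same speaker; a threshold tuned on a
# development set (e.g. at the equal error rate) separates same/different.
```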
| Datasets | Languages | Checkpoint (pt) | Runtime Model (onnx) |
| --- | --- | --- | --- |
| VoxCeleb | EN | ResNet34 / ResNet34_LM | ResNet34 / ResNet34_LM |
| VoxCeleb | EN | ResNet152_LM | ResNet152_LM |
| VoxCeleb | EN | ResNet221_LM | ResNet221_LM |
| VoxCeleb | EN | ResNet293_LM | ResNet293_LM |
| VoxCeleb | EN | CAM++ / CAM++_LM | CAM++ / CAM++_LM |
| CNCeleb | CN | ResNet34 / ResNet34_LM | ResNet34 / ResNet34_LM |