export and augmentation for speaker verification #8132

Closed · Answered by nithinraok
DiTo97 asked this question in Q&A

export

> pre-processor export still being left out from the exported ONNX graph;

This is correct; that is the reason we use the current NeMo model for the preprocessor. However, you can just import the dataloader for that class and replicate the data loading to avoid preloading it from the model.
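For context, a minimal sketch of what that can look like in practice (not code from this thread): instead of rebuilding the dataloader, this uses the .nemo model's preprocessor module directly to compute features in Python, and only the exported encoder/decoder runs through ONNX Runtime. The file names, ONNX input names, and output order are assumptions; verify them against your own export.

```python
# Sketch only: feature extraction stays in PyTorch/NeMo because the
# preprocessor is not part of the exported ONNX graph.
import onnxruntime as ort
import soundfile as sf
import torch

from nemo.collections.asr.models import EncDecSpeakerLabelModel

# Load the .nemo checkpoint just to reuse its preprocessor (mel features).
nemo_model = EncDecSpeakerLabelModel.from_pretrained("titanet_large").cpu().eval()

audio, sr = sf.read("sample.wav", dtype="float32")  # assumed 16 kHz mono
signal = torch.tensor(audio).unsqueeze(0)           # [1, num_samples]
signal_len = torch.tensor([signal.shape[1]])

with torch.no_grad():
    feats, feat_len = nemo_model.preprocessor(input_signal=signal, length=signal_len)

# Run the exported graph on the precomputed features.
sess = ort.InferenceSession("titanet_large.onnx")
print([i.name for i in sess.get_inputs()])  # verify the real input names first
outputs = sess.run(
    None,
    {"audio_signal": feats.numpy(), "length": feat_len.numpy()},  # names are assumptions
)
logits, embs = outputs  # output order is an assumption; check sess.get_outputs()
```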

> ONNX export for the pre-processor is not available due to pytorch/pytorch#81075;

I am not currently working on ONNX export, so my knowledge may be outdated; adding @borisfom to answer this query.

> automatic model loading from a [config](https://github.com/NVIDIA/NeMo/blob/main/examples/speaker_tasks/recognition/conf/titanet-large.yaml) in inference is not available for exported models.

Yes, we currently onl…
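On that last point, a hedged workaround sketch while config-based automatic loading stays .nemo-only: export the model yourself and keep the parts of its config that inference needs (here the preprocessor settings) next to the .onnx file. File names are placeholders, not from this thread.

```python
# Sketch only: the exported .onnx file does not carry the NeMo config, so
# save whatever inference needs (e.g. preprocessor settings) alongside it.
from omegaconf import OmegaConf

from nemo.collections.asr.models import EncDecSpeakerLabelModel

model = EncDecSpeakerLabelModel.from_pretrained("titanet_large")
model.export("titanet_large.onnx")  # exports encoder + decoder, no preprocessor

# model.cfg is the config restored from the .nemo archive (same fields as the YAML).
OmegaConf.save(
    OmegaConf.create({"preprocessor": model.cfg.preprocessor}),
    "titanet_large_preprocessor.yaml",
)
```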

Answer selected by DiTo97