Has anyone tried deploying whisper model on the NVIDIA Triton Inference Server? #6574
Kianit-bit started this conversation in General
-
In case you're interested, there's a related discussion in the OpenAI Whisper GitHub repo: openai/whisper#1505
-
The discussion linked above has an example using the Python backend (there are better ways to do it as well), but unfortunately only the HTTP endpoint works. A rough sketch of that setup is included below.
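
In case it helps, here is a minimal sketch of what a Python-backend deployment of Whisper on Triton could look like. The model name (`whisper`), the tensor names (`AUDIO`, `TRANSCRIPT`), the repository layout, and the `base` checkpoint are all assumptions for illustration, not taken from the linked example.

A hypothetical `model_repository/whisper/config.pbtxt`:

```
name: "whisper"
backend: "python"
max_batch_size: 0
input [
  { name: "AUDIO", data_type: TYPE_FP32, dims: [ -1 ] }
]
output [
  { name: "TRANSCRIPT", data_type: TYPE_STRING, dims: [ 1 ] }
]
instance_group [ { count: 1, kind: KIND_GPU } ]
```

And a matching `model_repository/whisper/1/model.py` (assumes the `openai-whisper` package is installed in the backend's Python environment):

```python
import numpy as np
import triton_python_backend_utils as pb_utils
import whisper


class TritonPythonModel:
    def initialize(self, args):
        # Load once per model instance; "base" is an arbitrary choice here.
        self.model = whisper.load_model("base")

    def execute(self, requests):
        responses = []
        for request in requests:
            # Expect mono float32 audio already resampled to 16 kHz.
            audio = pb_utils.get_input_tensor_by_name(request, "AUDIO").as_numpy()
            result = self.model.transcribe(audio.astype(np.float32).flatten())
            # TYPE_STRING outputs are returned as numpy object arrays of bytes.
            text = np.array([result["text"].encode("utf-8")], dtype=np.object_)
            out = pb_utils.Tensor("TRANSCRIPT", text)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```

On the client side, the HTTP endpoint can then be exercised with `tritonclient.http`, roughly like this (again, names and port are assumptions based on Triton's defaults):

```python
import tritonclient.http as httpclient
import whisper

audio = whisper.load_audio("sample.wav")  # float32 mono at 16 kHz
client = httpclient.InferenceServerClient(url="localhost:8000")

inp = httpclient.InferInput("AUDIO", list(audio.shape), "FP32")
inp.set_data_from_numpy(audio)
out = httpclient.InferRequestedOutput("TRANSCRIPT")

result = client.infer("whisper", inputs=[inp], outputs=[out])
print(result.as_numpy("TRANSCRIPT")[0].decode("utf-8"))
```

Loading the checkpoint in `initialize` keeps it resident for the lifetime of the model instance, so only `transcribe` runs per request.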
-
Can I run Whisper on Triton?
Any link to example code would be very helpful.