Firstly, thank you for a great repository.
I have a question regarding parallelism with whisper-live versus faster-whisper on a single GPU. In this faster-whisper issue, a user asked whether near-linear scaling is achievable on a single GPU, and the answer was negative. I tried it myself and could not achieve linear scaling either (e.g. if transcribing a single file takes 15 seconds, running two files in two threads takes about 30 seconds to complete).
In this issue here on whisper-live, you state that four parallel streams are possible on a single GPU without much degradation. I tested it with three streams, and it really doesn't show much degradation.
From what I understand of the faster-whisper implementation, the model takes up all the GPU resources while transcribing the current chunk of a file. So when running in multiple threads, the threads essentially compete with each other, each waiting for the others to finish transcribing a chunk of their respective audio files. I assume the same thing happens here, yet near-linear scaling is observable when using multiple threads.
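To make the contention argument concrete, here is a toy simulation (my own sketch, not code from either project) that models the GPU as an exclusively held lock: each "chunk" must hold the lock for its forward pass, so two concurrent streams serialize and take roughly twice as long — the behavior I observed with plain faster-whisper.

```python
import threading
import time

# Hypothetical model of the contention described above: each chunk's
# forward pass is assumed to hold the GPU exclusively, represented by a lock.
gpu_lock = threading.Lock()

def transcribe_file(n_chunks: int, chunk_time: float) -> None:
    """Pretend to transcribe a file of `n_chunks` chunks, each needing
    exclusive 'GPU' time of `chunk_time` seconds."""
    for _ in range(n_chunks):
        with gpu_lock:  # threads serialize here, chunk by chunk
            time.sleep(chunk_time)

def timed(n_threads: int, n_chunks: int = 5, chunk_time: float = 0.05) -> float:
    """Run `n_threads` concurrent 'transcriptions' and return the wall time."""
    threads = [
        threading.Thread(target=transcribe_file, args=(n_chunks, chunk_time))
        for _ in range(n_threads)
    ]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

one = timed(1)
two = timed(2)
# With a fully serialized resource, two streams take roughly twice as long.
print(f"1 thread: {one:.2f}s, 2 threads: {two:.2f}s")
```

If the GPU were instead able to batch or interleave work from several streams, the lock in this model would disappear and the two-thread time would approach the one-thread time.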
I went through the code but couldn't see anything beyond what I had already tried myself: using threading to achieve scaling with plain faster-whisper, without much success.
So my question is: how was this performance achieved?
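For reference, the threading approach I mean looks roughly like the sketch below. The `transcribe` callable is a stand-in for the real per-file transcription call (e.g. a wrapper around faster_whisper's `WhisperModel.transcribe`); the harness itself is generic.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Iterable

def transcribe_many(transcribe: Callable[[str], str],
                    paths: Iterable[str],
                    max_workers: int = 2) -> list:
    """Submit several files to one shared model from multiple threads.
    `transcribe` is assumed to be thread-safe (as faster-whisper's model is);
    results come back in input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(transcribe, paths))

# Example with a dummy transcriber standing in for the real model call:
results = transcribe_many(lambda p: f"text for {p}", ["a.wav", "b.wav"])
print(results)  # ['text for a.wav', 'text for b.wav']
```

With this structure and plain faster-whisper, I observed total wall time growing roughly linearly with the number of files, rather than the near-constant time whisper-live seems to achieve.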