vosk-server-gpu Segmentation fault (core dumped) #224
Comments
Hi @nshmyrev, do you have any advice? Is something wrong in my setup? I tested with the CPU, the same model, and the same implementation of the code, and it works without issue... but with the GPU I run into the same crash in all the tests. Thanks in advance!
Do you close the connection before collecting the results, without sending eof? I need to reproduce this somehow. The "connection closed" message worries me.
Hi @nshmyrev, I'm working with @cdgraff on this particular implementation. We are using a Node.js PassThrough stream to read the ffmpeg output and feed it via websocket to the Vosk server. Our logs from Node look something like this:
Let me know if this answers your question. Thanks!
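For reference, a minimal Python equivalent of that pipeline (decode with ffmpeg, stream the raw PCM to the server over a websocket) could look like the sketch below. It assumes the standard vosk-server websocket protocol (a config message, binary audio frames, then an eof message) and a placeholder ws://localhost:2700 address; the actual implementation in question is in Node.js.

```python
import asyncio
import subprocess

import websockets

# Hypothetical server address; the real deployment uses its own host/port.
VOSK_WS_URL = "ws://localhost:2700"


async def stream_ffmpeg_to_vosk(input_path: str) -> None:
    # Decode the input to 16 kHz mono 16-bit PCM on stdout.
    ffmpeg = subprocess.Popen(
        ["ffmpeg", "-loglevel", "quiet", "-i", input_path,
         "-ar", "16000", "-ac", "1", "-f", "s16le", "-"],
        stdout=subprocess.PIPE,
    )
    async with websockets.connect(VOSK_WS_URL) as ws:
        await ws.send('{ "config" : { "sample_rate" : 16000 } }')
        while True:
            data = ffmpeg.stdout.read(8000)
            if not data:
                break
            await ws.send(data)
            print(await ws.recv())        # partial results
        # Tell the server the stream is finished *before* closing the socket,
        # so it can flush the final result instead of seeing an abrupt close.
        await ws.send('{"eof" : 1}')
        print(await ws.recv())            # final result


if __name__ == "__main__":
    asyncio.run(stream_ffmpeg_to_vosk("audio.mp4"))
```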
Hi! I'm having the same issue, but I'm using the Python lib. From my debugging, the problem occurs when we start to close the websocket connection and the FinishStream() function of the BatchRecognizer gets called. Here is the error:
Here is the backtrace taken from gdb:
If we delete the FinishStream() call the server works, but the memory usage increases really fast without ever going down; I think it's because the memory used by the recognizer never gets released. I tried to implement the same thing starting from the C++ server (but using the batch model and recognizer) and the same error occurs. Can you help me solve this issue? Thanks!
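For context, the handler roughly follows the lifecycle below. This is only a sketch of the pattern described above: BatchModel, BatchRecognizer, and FinishStream are the names used in this thread, while AcceptWaveform/Result are assumed to mirror the standard recognizer interface.

```python
from vosk import BatchModel, BatchRecognizer  # GPU batch classes named in this thread

SAMPLE_RATE = 16000

# Depending on the vosk version, BatchModel may need an explicit model path.
model = BatchModel()


async def recognize(websocket):
    # One recognizer per websocket connection.
    rec = BatchRecognizer(model, SAMPLE_RATE)
    try:
        async for message in websocket:
            if isinstance(message, str) and "eof" in message:
                break
            # AcceptWaveform/Result are assumed to mirror the standard
            # recognizer interface; see the vosk GPU server example for
            # the exact batch-mode calls.
            rec.AcceptWaveform(message)
            result = rec.Result()
            if result:
                await websocket.send(result)
    finally:
        # This is the call that segfaults on the GPU build; removing it
        # avoids the crash, but the recognizer's memory is never released.
        rec.FinishStream()
        del rec
```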
There is a race condition in Kaldi here: I'll try to fix it in the coming days.
Wonderful!
Hi! Can you help me identify what I'm doing wrong? After some transcriptions I get a
Segmentation fault (core dumped)
I send audio chunks of 30 seconds to transcribe, one after the other... in some cases we split the work into multiple workers, as you can see below, using 3 workers.
The path for the server is created dynamically to be unique per chunk.
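To make the usage pattern concrete, here is a hedged sketch of a per-chunk client under the same assumptions as the earlier sketch (standard vosk-server websocket protocol; the worker URL, per-chunk path, and helper name are hypothetical). Each 30-second chunk gets its own connection and sends eof before closing:

```python
import asyncio

import websockets


async def transcribe_chunk(chunk_pcm: bytes, worker_url: str) -> str:
    """Send one 30-second chunk of 16 kHz 16-bit mono PCM and return the final result."""
    async with websockets.connect(worker_url) as ws:
        await ws.send('{ "config" : { "sample_rate" : 16000 } }')
        # Stream the chunk in small frames rather than one big message.
        for i in range(0, len(chunk_pcm), 8000):
            await ws.send(chunk_pcm[i:i + 8000])
            await ws.recv()               # drain partial results
        # Always send eof before the connection is closed, so the server
        # finalizes the stream cleanly.
        await ws.send('{"eof" : 1}')
        return await ws.recv()


# Hypothetical usage: a new connection with a unique, dynamically built path
# per chunk, distributed across the 3 workers mentioned above.
# text = asyncio.run(transcribe_chunk(pcm, "ws://localhost:2700/chunk-0001"))
```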