gRPC Segfaults in Triton 24.05 due to Low Request Cancellation Timeout #7368
Comments
On a quick study of the stack trace, it could also be related to a low context timeout (10 milliseconds), which could be causing this, as seen above.
I've confirmed that the issue was request cancellation. In production, we have varying timeouts per inference request. One particular set of requests had the timeout set in the range of 1-4 ms for end-to-end inference. This caused the segmentation fault, and increasing it resolved the issue.

I was also reading up on this, and it seems the request cancellation feature is still under development and is currently only supported for gRPC Python, as seen here and here. We, by contrast, use gRPC for Go.

@tanmayv25 this may be of interest to you, as I see you have already worked on this part of the code here. cc: @Tabrizian @dyastremsky @rmccorm4 as well.

Let me know if you need any more details here. For now, we are working around this by creating a Goroutine with a timeout and not providing a timeout for the inference request itself.

Thanks
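For illustration, a minimal sketch of the kind of call that hits this path (assuming Go stubs generated from Triton's grpc_service.proto; the import path, server address, and model name below are placeholders, not our actual values):

```go
// Minimal sketch only: assumes Go stubs generated from Triton's grpc_service.proto,
// imported here as "triton". Address, model name, and import path are placeholders.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	triton "example.com/internal/tritonpb" // hypothetical path to the generated stubs
)

func main() {
	conn, err := grpc.Dial("localhost:8001", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()
	client := triton.NewGRPCInferenceServiceClient(conn)

	// End-to-end deadline in the 1-4 ms range: when it expires, gRPC cancels the
	// in-flight call, and Triton observes a request cancellation on its side.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Millisecond)
	defer cancel()

	if _, err := client.ModelInfer(ctx, &triton.ModelInferRequest{ModelName: "my_model"}); err != nil {
		log.Printf("infer failed (typically context.DeadlineExceeded here): %v", err)
	}
}
```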
Thanks @AshwinAmbal for digging into this and sharing the results of your experimentation. So, if the timeout value is very small, you run into this segfault? You are not sending request cancellation explicitly from the client, right? Would you mind sharing your model execution time and the rest of the latency breakdown? Can you also update the title of this issue to reflect the current issue?
Hi @tanmayv25,
Yes. Low context timeouts sent from the client over gRPC cause the segfault. We were sending request cancellation by setting the context timeout, as done here, except that at times our context timeouts can range between 1 ms and 4 ms, which causes the segfault. Hence, to work around this issue, we have created Goroutines which send the inference request to Triton with a high context timeout, while the caller waits on the Goroutine with the timeout we expect for the request. In this case, if the timeout has been reached (1 ms - 4 ms), the main routine returns without waiting for the Goroutine to finish, while the Goroutine itself only completes after the inference response is received from Triton. For example, pseudo code for the main routine is roughly as follows:
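A sketch of that shape, reusing the assumed triton stubs from the snippet above (standard context, fmt, and time imports; the 30-second upper bound and all names are placeholders):

```go
// Sketch of the workaround: the Goroutine talks to Triton with a generous deadline,
// while the caller only waits for its own (much shorter) budget.
func inferWithSoftTimeout(client triton.GRPCInferenceServiceClient,
	req *triton.ModelInferRequest, budget time.Duration) (*triton.ModelInferResponse, error) {

	type result struct {
		resp *triton.ModelInferResponse
		err  error
	}
	done := make(chan result, 1) // buffered so the Goroutine never blocks on send

	go func() {
		// High context timeout towards Triton, so the request is never cancelled
		// on the wire and the Goroutine always runs to completion.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		resp, err := client.ModelInfer(ctx, req)
		done <- result{resp, err}
	}()

	select {
	case r := <-done:
		return r.resp, r.err
	case <-time.After(budget): // e.g. the 1 ms - 4 ms end-to-end budget
		// Give up and return; the Goroutine finishes in the background.
		return nil, fmt.Errorf("inference exceeded %v budget", budget)
	}
}
```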
Please also note that we are using Triton with CPU only for inference at this point.
The Average Inference Request Duration for the model is
I believe the issue is the request cancellation timeout being low. I will update the title accordingly. Let me know if you need any more details. Thanks
Description
We use gRPC to query Triton for Model Ready, Model Metadata, and Model Inference requests. When running the Triton server for a sustained period of time, we get segfaults unexpectedly [Signal 11 received]. The trace of the segfault is attached to this issue, but the time when it occurs cannot be predicted, and it happens across our servers at irregular intervals.
Triton Information
What version of Triton are you using?
Version 24.05
I also built a CPU-only version with debug symbols and reproduced the same issue.
Are you using the Triton container or did you build it yourself?
I can reproduce the issue in the Triton container from NGC as well as in my custom build.
To Reproduce
Steps to reproduce the behavior.
Description of node: c6i.16xlarge instance from AWS
Error occurred (see the attached stack trace)
GDB trace obtained by building the Debug Container as described here
Describe the models (framework, inputs, outputs), ideally include the model configuration file (if using an ensemble include the model configuration file for that as well).
Model Configuration seen here.
The model is trained with TensorFlow 2.13 and is a saved_model.pb artifact.
Expected behavior
No Segfaults or server crashes
As we can see, the issue starts mainly in the gRPC InferHandlerState and goes deeper into the Triton code, which I am trying to study myself. I thought I would raise this issue here as it seems major, and I would like to get more eyes on it from the Triton community. Please let me know if you need any more information from my end as well.
Thanks