_M_range_check exception encountered in ICudaEngine::createExecutionContext()
#3335
Comments
I guess something is broken in sampleOnnxMNIST.cpp. May I ask what modifications you made to sampleOnnxMNIST.cpp? I'll try to reproduce it later. It would be great if you could answer the above questions :-)
Thank you for looking into this. I attached the modified `sampleOnnxMNIST.cpp`.
I can reproduce the issue, but I can't tell whether it's a bug.
I have the same issue: a MaskRCNN-based model that also converts fine, and `trtexec` runs fine, but when loading it in the Triton TensorRT backend it fails with this error. It has 3 outputs as well.
Thank you for looking into this. Your comment is exactly why I think this might be a bug.
I had a simple problem when loading the engine from a file while running SampleOnnxMNIST, mainly because the m*Dims members are not set properly on that path, so I set those values manually.
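A sketch of that kind of fix follows. This is a reconstruction, not the snippet originally posted in this comment; it assumes the TensorRT 8.x binding-index API and sampleOnnxMNIST-style `mInputDims`/`mOutputDims` members.

```cpp
#include <NvInfer.h>

// Hypothetical helper: after deserializing an engine from file, populate the
// sample's dimension members from the engine bindings, since the
// parser-derived values are never set on that code path.
void setDimsFromEngine(const nvinfer1::ICudaEngine& engine,
                       nvinfer1::Dims& inputDims,   // e.g. mInputDims
                       nvinfer1::Dims& outputDims)  // e.g. mOutputDims
{
    for (int i = 0; i < engine.getNbBindings(); ++i)
    {
        if (engine.bindingIsInput(i))
            inputDims = engine.getBindingDimensions(i);
        else
            outputDims = engine.getBindingDimensions(i);
    }
}
```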
We have fixed this in TensorRT 10.
Closing this issue due to no response after 3 weeks, as per our policy. If there is still an issue, please feel free to re-open it or create a new one, @lhai37.
Regarding the `_M_range_check` topic: I also came across this problem.
@ttyio How do we address this on Jetson, which is still limited to TensorRT 8, even with JetPack 6? Upgrading to TensorRT 10 is not an acceptable solution in this scenario. Could we please re-open this bug to reflect that?
I also came across this problem.
Description
Note: this issue is also posted to the NVIDIA TensorRT Forum
Loading an ONNX model (attached) via the C++ API triggers the `_M_range_check` exception upon calling `ICudaEngine::createExecutionContext()`. This is also reproducible using the released `sampleOnnxMNIST` code; I am attaching both the ONNX file and the code file to reproduce it here. Interestingly, `trtexec --onnx=palm.onnx` can load the model just fine, so it seems that there is a way to get this working via the C++ API, but I'm unable to pinpoint what it is.

Environment
TensorRT Version: 8.6.1
NVIDIA GPU: NVIDIA GeForce RTX 3090
NVIDIA Driver Version: 535.54.03
CUDA Version: 12.0 (but also reproducible on 11.6)
CUDNN Version: 8.8
Operating System: Ubuntu 20.04
Python Version (if applicable): N/A
Tensorflow Version (if applicable): N/A
PyTorch Version (if applicable): N/A
Baremetal or Container (if so, version):
Relevant Files
Model link: https://forums.developer.nvidia.com/uploads/short-url/lOrlbR6P44UrBKYeCakZ7CZSoKs.onnx
Source code file to repro: https://forums.developer.nvidia.com/uploads/short-url/hMuSyiIpJO9Wz7KkdId6HZK1B1r.cpp
Steps To Reproduce
Build and run the modified `sampleOnnxMNIST.cpp` (linked above under Relevant Files) with the attached ONNX model; a minimal sketch of the triggering call sequence is given at the end of this section.
Commands or scripts:
Have you tried the latest release?: Yes
Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (`polygraphy run <model.onnx> --onnxrt`): Yes. It can be loaded with `trtexec`, but not via the C++ API.
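For reference, here is a minimal sketch of the call sequence that hits the exception. This is an illustration assuming the standard TensorRT 8.6 C++ API, not the attached repro file; the model path `palm.onnx` is taken from the `trtexec` command above.

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <iostream>
#include <memory>

// Minimal logger required by the TensorRT C++ API.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    using namespace nvinfer1;

    // Build an engine from the ONNX model.
    auto builder = std::unique_ptr<IBuilder>(createInferBuilder(gLogger));
    auto network = std::unique_ptr<INetworkDefinition>(builder->createNetworkV2(
        1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, gLogger));
    if (!parser->parseFromFile("palm.onnx", static_cast<int>(ILogger::Severity::kWARNING)))
        return 1;

    auto config = std::unique_ptr<IBuilderConfig>(builder->createBuilderConfig());
    auto serialized = std::unique_ptr<IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));

    // Deserialize and create the execution context.
    auto runtime = std::unique_ptr<IRuntime>(createInferRuntime(gLogger));
    auto engine = std::unique_ptr<ICudaEngine>(
        runtime->deserializeCudaEngine(serialized->data(), serialized->size()));

    // The std::out_of_range ("_M_range_check") is thrown by this call:
    auto context = std::unique_ptr<IExecutionContext>(engine->createExecutionContext());
    return context ? 0 : 1;
}
```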