onnxruntime errors when running detectron2/create_onnx.py #3809
Comments
@spasserkongen What you can do is: after you use export_model.py to get your model.onnx, run that ONNX file with onnxruntime to confirm your accuracy. Then, if you are satisfied, convert model.onnx to converted.onnx using create_onnx.py for your TensorRT engine generation.
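For example, a minimal check like this (sketch only; the input name and shape below are placeholders, so inspect your own model first):

```python
# Sketch: sanity-check model.onnx with onnxruntime before create_onnx.py.
# The dummy input shape is a placeholder; use your model's real input shape.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)

dummy = np.random.rand(1, 3, 1344, 1344).astype(np.float32)  # placeholder shape
outputs = sess.run(None, {inp.name: dummy})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)
```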
@RajUpadhyay I am not using onnxruntime to run the converted.onnx model. The errors from onnxruntime occur while I am generating the converted.onnx model using create_onnx.py. I hope that makes sense.
Did you try running your model.onnx with onnxruntime to first check if your original onnx is correct?
@RajUpadhyay thanks for the idea of testing this.
@spasserkongen that is expected, since the ONNX generated by create_onnx.py contains custom nodes (TRT plugin ops such as NMS or similar; you can check them in the code). Those custom nodes are not supported by onnxruntime, so you cannot test converted.onnx with it. Just convert your model to TensorRT using trtexec and try running it with the sample infer.py in this repo. If you ever feel like using DeepStream, you can also refer to my sample here.
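If you prefer the TensorRT Python API over the trtexec CLI, a rough equivalent looks like this (sketch for TensorRT 8.x; file paths are placeholders). Note the plugin registry must be initialized so the custom nodes inserted by create_onnx.py resolve:

```python
# Sketch: build a TensorRT engine from converted.onnx with the Python API.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")  # register built-in TRT plugins (NMS etc.)

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("converted.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB
serialized = builder.build_serialized_network(network, config)
with open("engine.trt", "wb") as f:
    f.write(serialized)
```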
Does your environment match https://github.com/NVIDIA/TensorRT/blob/release/10.0/samples/python/detectron2/requirements.txt?
Looks like your onnx version is pretty low. |
Which onnx version can support IR version 10? onnx==1.16.0 with onnxruntime==1.17.3 will not support IR version 10. :(
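As far as I know, IR version 10 is what onnx >= 1.16 writes by default, and onnxruntime only added support for it in 1.18. So you can either pin onnx below 1.16, upgrade onnxruntime to >= 1.18, or, as an untested workaround, rewrite the model's ir_version field before handing it to onnxruntime (only safe if the graph uses no IR-10-specific features):

```python
# Untested workaround sketch: force the saved model back to IR version 9 so
# older onnxruntime builds (max supported IR version 9) can load it.
import onnx

model = onnx.load("model.onnx")
print("opset:", model.opset_import[0].version, "ir_version:", model.ir_version)
model.ir_version = 9
onnx.save(model, "model_ir9.onnx")
```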
When I run /samples/python/detectron2/create_onnx.py I get a lot of onnxruntime errors. The script finishes and also creates an exported ONNX model. This model can later be successfully converted to TRT; however, I am afraid that the detection performance might be lowered.
I am trying to convert a Mask R-CNN model trained in Detectron2.
I get the following errors:
[W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was:
[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Failed to load model with error: /onnxruntime_src/onnxruntime/core/graph/model.cc:149 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 10, max supported IR version: 9
(the same warning and error are repeated two more times)
Does anyone know if these errors can be ignored, or if I need to change the versions of one or more Python packages to fix them?
Environment
Docker: pytorch/pytorch:2.1.2-cuda11.8-cudnn8-devel
Detectron2: 0.6
NVIDIA driver: 545.23.08
TensorRT: 8.6
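To pin down which onnx/onnxruntime versions are in play (they are not listed above), a quick check like this can help (sketch):

```python
# Sketch: report the installed onnx/onnxruntime versions and the IR version
# that this onnx release writes by default.
import onnx
import onnxruntime

print("onnx", onnx.__version__, "- default IR version:", onnx.IR_VERSION)
print("onnxruntime", onnxruntime.__version__)
```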