
onnxruntime errors when running detectron2/create_onnx.py #3809

Closed
spasserkongen opened this issue Apr 22, 2024 · 8 comments
Assignees
Labels
triaged Issue has been triaged by maintainers

Comments

@spasserkongen

When I run /samples/python/detectron2/create_onnx.py I get a lot of onnxruntime errors. The script finishes and also creates an exported ONNX model. This model can later be converted to TRT successfully, but I am afraid that the detection performance might be lowered.
I am trying to convert a Mask R-CNN model trained in detectron2.

I get the following errors:
[W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was:
[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Failed to load model with error: /onnxruntime_src/onnxruntime/core/graph/model.cc:149 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 10, max supported IR version: 9
(the same warning repeats two more times)

Does anyone know if these errors can be ignored, or whether I need to change the versions of one or more Python packages to fix it?

Environment

docker: pytorch/pytorch:2.1.2-cuda11.8-cudnn8-devel
detectron2: 0.6
nvidia driver: 545.23.08
TensorRT: 8.6

@RajUpadhyay

RajUpadhyay commented Apr 22, 2024

@spasserkongen
If you are trying to run onnxruntime on the converted.onnx you get from create_onnx.py, that won't work.
create_onnx.py also adds custom plugin nodes (like TRT_Plugin) to the ONNX graph so that it can be converted to a TensorRT engine.
These custom plugins will not be recognized by onnxruntime.

What you can do is: after you use export_model.py to get your model.onnx, run that ONNX file with onnxruntime to confirm your accuracy; then, once you are satisfied, convert model.onnx to converted.onnx with create_onnx.py for your TensorRT engine generation.

@spasserkongen
Author

@RajUpadhyay I am not using onnxruntime to run the converted.onnx model. The errors from onnxruntime occur while I am generating converted.onnx using create_onnx.py. Hope that makes sense.

@RajUpadhyay

Did you try running your model.onnx with onnxruntime first, to check that your original ONNX export is correct?
The "INVALID_ARGUMENT" error is what I am noticing. Can you also share how you run this command?

@spasserkongen
Author

@RajUpadhyay thanks for the idea of testing this.
When I use onnxruntime, I don't get any warnings or errors when loading the original ONNX model (the one exported from detectron2). When I use onnxruntime to load the ONNX model exported by create_onnx.py from this repo, I get the same errors as initially listed.

@RajUpadhyay

RajUpadhyay commented Apr 23, 2024

@spasserkongen that is expected, since the ONNX generated by create_onnx.py contains custom nodes like TRT_PLUGIN or NMS (you can check them in the code). Those custom nodes are not supported by onnxruntime, so you cannot test that file with onnxruntime.
You can always visualize your ONNX model with netron, though (python3 -m pip install netron; netron converted.onnx).

Just convert your model to a TensorRT engine using trtexec and try to run it with the sample infer.py in this repo.

If you ever feel like using deepstream, you can also refer to my sample here.

@zerollzeng
Collaborator

@zerollzeng zerollzeng self-assigned this Apr 25, 2024
@zerollzeng zerollzeng added the triaged Issue has been triaged by maintainers label Apr 25, 2024
@zerollzeng
Collaborator

[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Failed to load model with error: /onnxruntime_src/onnxruntime/core/graph/model.cc:149 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 10, max supported IR version: 9

Looks like your onnxruntime version is too old for the IR version of your ONNX model.

@bigmover

bigmover commented May 10, 2024

[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Failed to load model with error: /onnxruntime_src/onnxruntime/core/graph/model.cc:149 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 10, max supported IR version: 9

Looks like your onnxruntime version is too old for the IR version of your ONNX model.

Which onnx version can support IR version 10? onnx==16.0 and onnxruntime==1.17.3 will not support IR version 10 :) crying
