Description
I have an ONNX model I would like to convert to a TensorRT engine to run some performance testing and compare the two backends. For context, this is a DINO model generated by the MMDeploy package, and it also has a dependency on a shared object file. The ONNX backend itself works as expected at inference time.
But for some reason, while trying to convert the model using trtexec, I get the following error:
[04/01/2024-20:16:21] [W] [TRT] onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
[04/01/2024-20:16:21] [E] [TRT] ModelImporter.cpp:771: While parsing node number 1286 [Slice -> "onnx::Slice_934"]:
[04/01/2024-20:16:21] [E] [TRT] ModelImporter.cpp:772: --- Begin node ---
[04/01/2024-20:16:21] [E] [TRT] ModelImporter.cpp:773: input: "onnx::Slice_927"
input: "onnx::Slice_17570"
input: "onnx::Slice_931"
input: "onnx::Slice_17571"
input: "onnx::Slice_933"
output: "onnx::Slice_934"
name: "Slice_1286"
op_type: "Slice"
[04/01/2024-20:16:21] [E] [TRT] ModelImporter.cpp:774: --- End node ---
[04/01/2024-20:16:21] [E] [TRT] ModelImporter.cpp:777: ERROR: builtin_op_importers.cpp:4493 In function importSlice:
[8] Assertion failed: (axes.allValuesKnown()) && "This version of TensorRT does not support dynamic axes."
[04/01/2024-20:16:21] [E] Failed to parse onnx file
[04/01/2024-20:16:21] [I] Finished parsing network model. Parse time: 0.41992
[04/01/2024-20:16:21] [E] Parsing model failed
[04/01/2024-20:16:21] [E] Failed to create engine from model or file.
[04/01/2024-20:16:21] [E] Engine set up failed
I have tried using a static deploy config for the same DINO model config, but that doesn't work either. Any idea how to potentially fix this issue? I am running the trtexec commands on the 23.08 release of the TensorRT container.
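For reference, by "static deploy config" I mean a config following MMDeploy's usual `onnx_config` layout, roughly like the sketch below (the input shape, opset, and output names here are placeholders rather than my exact values):

```python
# Sketch of an MMDeploy-style static export config (values are placeholders).
# A static config pins the input shape and drops dynamic_axes, so every
# tensor shape should be known at export time.
onnx_config = dict(
    type='onnx',
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=11,
    save_file='end2end.onnx',
    input_names=['input'],
    output_names=['dets', 'labels'],
    input_shape=[1333, 800],  # fixed input size -- placeholder values
    dynamic_axes=None,        # no dynamic batch/height/width
)
```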
Environment
TensorRT Version: 8.6.1
NVIDIA GPU: T4
NVIDIA Driver Version: 515
CUDA Version: 11.7
CUDNN Version:
Operating System: Ubuntu 20.04
Python Version (if applicable): 3.8
Tensorflow Version (if applicable): None
PyTorch Version (if applicable): None
Baremetal or Container (if so, version): TensorRT-23.08-py
Relevant Files
Model link: Cannot share the ONNX file for privacy reasons.
Steps To Reproduce
Commands or scripts:
Have you tried the latest release?: Yes, but that doesn't work either.
Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt): No, Polygraphy throws the same error as well. Moreover, constant folding on this file doesn't work either.
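For anyone hitting the same assertion, a short ONNX GraphSurgeon script along these lines (a sketch; `model.onnx` is a placeholder path) can list every Slice whose `axes` input is not a constant, which is exactly what the parser rejects:

```python
import onnx
import onnx_graphsurgeon as gs  # pip install onnx-graphsurgeon

# Load the exported model and look for Slice nodes whose `axes` input
# (input index 3) is not a compile-time constant -- the case TensorRT
# 8.6's parser rejects with "does not support dynamic axes".
graph = gs.import_onnx(onnx.load("model.onnx"))  # placeholder path
for node in graph.nodes:
    if node.op == "Slice" and len(node.inputs) >= 4:
        axes = node.inputs[3]
        if not isinstance(axes, gs.Constant):
            print(f"{node.name}: axes input '{axes.name}' is not a constant")
```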
Check [Slice -> "onnx::Slice_934"] and see whether the dynamic axes can be eliminated by constant folding or some other rewrite.
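If the axes turn out to be fixed in practice, one such rewrite (a sketch with ONNX GraphSurgeon; the node name comes from the log above, but the axes values are an assumption and must be read off the actual graph) is to swap the dynamic `axes` input for a constant:

```python
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))  # placeholder path

for node in graph.nodes:
    if node.op == "Slice" and node.name == "Slice_1286":
        # Replace the dynamic `axes` input (index 3) with a constant.
        # The value [1] is only an example -- it must match the axis the
        # model actually slices along.
        node.inputs[3] = gs.Constant(
            "Slice_1286_axes", values=np.array([1], dtype=np.int64)
        )

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model_fixed.onnx")
```

The patched model can then be fed back to trtexec; if the Slice axes genuinely vary at runtime, this rewrite is not valid and the export itself would have to change.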
@zerollzeng I did try folding the model with Polygraphy's surgeon tool, but that doesn't work either. Are there any other approaches you can suggest besides this one?