TensorRT support #12
Could you please add support for TensorRT?

Comments
Please update to TensorRT 8.6.x.
Thanks, it worked.
I implemented TensorRT acceleration for RTMDet. When converting to ONNX, the model must be exported as a static (fixed-shape) model. For reference only. @Tau-J (a sketch of the export step follows below)
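For reference, a minimal sketch of such a static export, assuming a fixed 1x3x640x640 input and a detector already loaded in `model` (names, shapes, and output layout here are hypothetical); leaving out `dynamic_axes` is what makes the exported ONNX model static:

```python
import torch

# Assumed: `model` is an RTMDet-style detector, already loaded.
model.eval()
dummy = torch.randn(1, 3, 640, 640)  # fixed (static) input shape

torch.onnx.export(
    model,
    dummy,
    "rtmdet_static.onnx",            # hypothetical output filename
    input_names=["input"],
    output_names=["dets", "labels"],
    opset_version=11,
    # No dynamic_axes argument: all tensor shapes are baked into the
    # graph, which is what a static TensorRT engine build requires.
)
```

The resulting static ONNX file can then be built into a TensorRT engine, for example with NVIDIA's trtexec tool.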
Alternatively, you can enable the TensorrtExecutionProvider in ONNX Runtime to use TensorRT as the inference backend. Note that you may have to run shape inference on the ONNX model first using symbolic_shape_infer.py to prepare it. Also, for TensorRT 8.2-8.4, you additionally need to build the custom TensorRT ops plug-in from MMDeploy and load the plug-in into the TensorRT Execution Provider following its usage instructions.
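As a rough sketch of that workflow (filenames and the specific provider options below are illustrative, not taken from this repo; `SymbolicShapeInference` ships with the onnxruntime Python package):

```python
import onnx
import onnxruntime as ort
from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference

# 1) Run symbolic shape inference so every tensor in the graph has a
#    concrete shape before the TensorRT provider sees it.
model = onnx.load("rtmdet_static.onnx")              # hypothetical filename
inferred = SymbolicShapeInference.infer_shapes(model)
onnx.save(inferred, "rtmdet_shaped.onnx")

# 2) Create a session with TensorRT first in the provider list; ONNX
#    Runtime falls back to CUDA/CPU for nodes TensorRT cannot handle.
providers = [
    ("TensorrtExecutionProvider", {
        "trt_engine_cache_enable": True,      # cache built engines across runs
        "trt_engine_cache_path": "./trt_cache",
        "trt_fp16_enable": True,              # optional FP16 inference
    }),
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]
session = ort.InferenceSession("rtmdet_shaped.onnx", providers=providers)
```

Caching the built engine avoids rebuilding it on every process start, which can otherwise take minutes for a detection model.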