Description
I've been testing on PyTorch and noticed an AP discrepancy (0.5011xx down to 0.4941xx) between the .pth model and the engine, which was generated with trtexec ptq.onnx ptq.engine --int8 --fp16. I'm hoping to use polygraphy to debug and find out which layer is causing the engine's accuracy to drop.
However, --trt-outputs mark all isn't working with my QDQ int8 model, so I use
polygraphy run ptq.onnx --trt --int8 --onnxrt --fail-fast --atol 1e-2 --rtol 1e-3
to find layers with mismatched accuracy step by step. But I get:
[E] 10: [optimizer.cpp::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation for node img_view_transformer.depth_net.depth_conv.3.aspp3.atrous_conv.weight + /depth_net/depth_conv/depth_conv.3/aspp3/atrous_conv/_weight_quantizer/QuantizeLinear + /depth_net/depth_conv/depth_conv.3/aspp3/atrous_conv/Conv.)
I get the same error when I try
polygraphy run ptq.onnx --trt --fp16 --int8
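For completeness, the per-layer comparison I was trying to run looks roughly like this (flags as listed in polygraphy run --help; this is the command that I could not get working with the QDQ model, not one that succeeds):
polygraphy run ptq.onnx --trt --int8 --onnxrt --trt-outputs mark all --onnx-outputs mark all --atol 1e-2 --rtol 1e-3 --fail-fast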
Environment
TensorRT Version: 8.5.3.1
NVIDIA GPU: 3090
NVIDIA Driver Version: 515.48.07
CUDA Version: 11.6
CUDNN Version: 8.9.6.50
Operating System: Ubuntu 18.04
Python Version: 3.8.19
ONNX Version: 1.13.0
PyTorch Version: 1.13.0
polygraphy Version: 0.49.9
Relevant Files
ptq.onnx link: Baidu Netdisk
Steps To Reproduce
Commands or scripts:
polygraphy run ptq.onnx --trt --int8 --onnxrt --fail-fast --atol 1e-2 --rtol 1e-3
polygraphy run ptq.onnx --trt --fp16 --int8
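If it helps narrow things down, I also plan to try isolating the failing node with polygraphy debug reduce, roughly as follows (adapted from the polygraphy debug examples; I have not yet confirmed this works for this model):
polygraphy debug reduce ptq.onnx -o reduced.onnx --check polygraphy run polygraphy_debug.onnx --trt --int8 --fp16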
Have you tried the latest release?: No
Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (
polygraphy run <model.onnx> --onnxrt
): Yes, it works.
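In case it is useful, this is roughly how I intend to reuse the ONNXRuntime results as a baseline; onnxrt_outputs.json is just a placeholder name, and the flags are as listed in polygraphy run --help:
polygraphy run ptq.onnx --onnxrt --save-outputs onnxrt_outputs.json
polygraphy run ptq.onnx --trt --int8 --fp16 --load-outputs onnxrt_outputs.json --atol 1e-2 --rtol 1e-3
I expect the second command to hit the same Error Code 10 as above until the build issue is resolved.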