[GSoC] Updates for Quantized models for QDQ method #266
base: main
Conversation
We need to support QDQ models in OpenCV DNN first, then we can do the replacement. It also affects the TIM-VX backend.
The quantization script also needs to be updated: https://github.com/opencv/opencv_zoo/tree/main/tools/quantize
Thank you. I fixed it.
OK, does this mean that I have to fix and merge OpenCV DNN first, and then merge this PR?
Hi @jet981217, please keep the original quantized model, and add the suffix of
I think we will move to QDQ models in the future, so there is no need to keep two copies of quantized models. We can discuss this topic in the weekly meeting.
Results of accuracy evaluation with [tools/eval](../../tools/eval).

| Models | Easy AP | Medium AP | Hard AP |
| ----------- | ------- | --------- | ------- |
| YuNet | 0.8871 | 0.8710 | 0.7681 |
| YuNet quant | 0.8838 | 0.8683 | 0.7676 |
| YuNet quant | 0.8809 | 0.8626 | 0.7493 |
We should test the models using OpenCV DNN and report those result numbers. That is why we need QDQ support in DNN first.
This PR updates several quantized models to the QDQ (Quantize-Dequantize) format. Models that reached sufficient accuracy with per-tensor quantization were updated accordingly; for models where per-tensor quantization did not yield satisfactory results, per-channel quantization was used. Models lacking calibration data, sensitive to quantization, or having other issues were not updated.
Updated Models:
This update aims to improve model performance with efficient quantization methods where applicable. Models with significant issues were deferred to future updates.
Updates to OpenCV DNN to support QDQ are on the way.
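As a side note on why some models needed per-channel rather than per-tensor quantization: when weight ranges differ widely across output channels, a single per-tensor scale wastes most of the int8 grid on the small-range channels. A minimal NumPy sketch (not the actual `tools/quantize` script; `qdq_int8` and the example weights are made up for illustration):

```python
import numpy as np

def qdq_int8(w, per_channel=False, axis=0):
    """Symmetric int8 quantize-dequantize (the QDQ pattern):
    q = clip(round(w / scale)), then dequantize back as q * scale."""
    if per_channel:
        # one scale per slice along `axis` (e.g. per output channel)
        reduce_axes = tuple(i for i in range(w.ndim) if i != axis)
        amax = np.max(np.abs(w), axis=reduce_axes, keepdims=True)
    else:
        # one scale for the whole tensor
        amax = np.max(np.abs(w))
    scale = amax / 127.0
    q = np.clip(np.round(w / scale), -127, 127)
    return q * scale

# Weight matrix whose output channels (rows) have very different ranges:
w = np.array([[0.01, -0.02, 0.015],
              [5.00, -4.00, 3.000]])

err_per_tensor = np.abs(qdq_int8(w) - w).mean()
err_per_channel = np.abs(qdq_int8(w, per_channel=True) - w).mean()
```

With the per-tensor scale the small first row quantizes to almost nothing, so `err_per_channel` comes out noticeably smaller than `err_per_tensor`; this is the trade-off behind choosing per-channel quantization for the sensitive models in this PR.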