
Recommended way for EfficientDet-Lite Quantization #1208

Open
rohansaw opened this issue Feb 7, 2024 · 0 comments
rohansaw commented Feb 7, 2024

I would like to fine-tune a pretrained EfficientDet-Lite0 model on my dataset and then apply post-training quantization (full int8). What is the recommended way of doing this? I did not find any documentation on this in the repository; however, since EfficientDet-Lite was designed to be robust to quantization, I wonder what the intended workflow is.
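For reference, a minimal sketch of what full-int8 post-training quantization usually looks like with the TFLite converter, assuming a SavedModel export of the fine-tuned detector. The `saved_model_dir` path and the 320×320 input size are assumptions (EfficientDet-Lite0's default resolution); substitute your own export path and input shape, and calibrate with real preprocessed training images rather than the random tensors used here:

```python
# Sketch: full-int8 post-training quantization with the TFLite converter.
# Paths and input size are placeholders; this is not the repo's official recipe.
import os
import numpy as np
import tensorflow as tf

saved_model_dir = "efficientdet-lite0-finetuned/saved_model"  # hypothetical path
INPUT_SIZE = 320  # EfficientDet-Lite0 default input resolution (assumption)

def representative_dataset():
    # Yield ~100 preprocessed samples so the converter can calibrate
    # activation ranges. Replace the random data with real training images.
    for _ in range(100):
        image = np.random.rand(1, INPUT_SIZE, INPUT_SIZE, 3).astype(np.float32)
        yield [image]

if os.path.isdir(saved_model_dir):
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Request full integer quantization, including int8 inputs/outputs.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    tflite_model = converter.convert()
    with open("efficientdet-lite0-int8.tflite", "wb") as f:
        f.write(tflite_model)
```

One caveat: detection exports that include the TFLite NMS postprocessing op may not convert under the strict `TFLITE_BUILTINS_INT8` op set; in that case it is common to also allow `tf.lite.OpsSet.TFLITE_BUILTINS` as a float fallback for the postprocessing stage.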

@rohansaw rohansaw changed the title Recommended way for Quantization Recommended way for EfficientDet-Lite Quantization Feb 7, 2024