Add support for qdense_batchnorm in QKeras #74
base: master
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed (or fixed any issues), please reply here with @googlebot I signed it! and we'll verify it.
ℹ️ Googlers: Go here for more info. |
@julesmuhizi thank you so much for your PR, could you sign the CLA first? |
@zhuangh I'm waiting on employer authorization for the CLA. Will do as soon as I get authorized. |
@googlebot I signed it! |
CLAs look good, thanks! ℹ️ Googlers: Go here for more info. |
@julesmuhizi thanks! Could you also add a test for your code change? |
@zhuangh here is a test --> https://gist.github.com/nicologhielmetti/84df61987476b031eb8fc6103f7e2915 |
@julesmuhizi Don't you also need to add the new layer to […]? @zhuangh Related to folding of dense+bn, […] |
@julesmuhizi Thank you for the commit. I reviewed it and it looked good. I'm not sure why your test generates different output values between the folded and non-folded models. Can you write a test similar to bn_folding_test.py/test_same_training_and_prediction(), where weights are set to values for which quantization won't cause a loss of precision, and make sure the two versions produce the same results? @vloncar There are quite a number of utility functions to modify in order to support a new folded layer type, for example convert_folded_model_to_normal, qtools, print_qmodel_summary, bn_folding_test, model_quantize, etc. Regarding tests, I would suggest writing tests similar to qpooling_test.py (tests for regular new layers) and bn_folding_test.py (tests specific to bn-folding layers) to check that all the utility functions are updated to support the new layer. |
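A rough sketch of the comparison described above, written against NumPy and quantized_bits only so it does not depend on the new layer's exact API. The specific weight values, batch-norm parameters, and tolerances are illustrative assumptions, not the test that was eventually added to this PR:

```python
import numpy as np
from qkeras import quantized_bits


def test_folded_and_unfolded_agree():
    rng = np.random.default_rng(0)
    x = rng.standard_normal((16, 4)).astype(np.float32)

    # quantized_bits(8, 2) has a step of 2**-5, so kernels built from
    # multiples of 2**-4 (and BN scales that are powers of two) quantize
    # exactly both before and after folding; any mismatch between the two
    # paths would then point at the folding itself, not at rounding.
    q = quantized_bits(8, 2, alpha=1)
    kernel = (rng.integers(-4, 5, size=(4, 3)) * 2.0**-4).astype(np.float32)
    bias = np.zeros(3, dtype=np.float32)

    gamma = np.array([1.0, 2.0, 0.5], dtype=np.float32)  # power-of-two scales
    beta = np.array([0.5, -0.5, 0.0], dtype=np.float32)
    mean = np.array([0.0, 0.25, -0.5], dtype=np.float32)
    var = np.ones(3, dtype=np.float32)
    eps = 0.0  # zero here so the BN scale stays an exact power of two

    # Unfolded reference: quantized dense followed by batch normalization.
    dense_out = x @ np.asarray(q(kernel)) + bias
    unfolded = gamma * (dense_out - mean) / np.sqrt(var + eps) + beta

    # Folded path: absorb the BN scale/shift into kernel and bias, then
    # quantize the folded kernel and apply it to the same inputs.
    scale = gamma / np.sqrt(var + eps)
    folded_kernel = np.asarray(q(kernel * scale))
    folded_bias = beta + (bias - mean) * scale
    folded = x @ folded_kernel + folded_bias

    np.testing.assert_allclose(folded, unfolded, rtol=1e-5, atol=1e-5)
```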
Courtesy ping @julesmuhizi: in case you missed it, there is a suggestion regarding the test case from @lishanok above. Thanks. |
Hi, I have been occupied with another project but will review and begin addressing the issues in the comment above. Thanks for the ping @zhuangh |
Hi @boubasse thanks for the reminder. @lishanok could you take a look? |
Hi, this thread is active and the layer is implemented on a separate fork/branch that has not been merged yet, as I don't know how to format the unit tests: https://github.com/julesmuhizi/qkeras/blob/qdense_batchnorm/qkeras/qdense_batchnorm.py |
force-pushed from 93690e1 to b70c3be
@lishanok @zhuangh Sorry for the delay. We've added the requested test. An (unrelated) autoqkeras test was also failing (presumably also on master) due to the same legacy-optimizer issue that was fixed in 5b1fe84, so we adopted a similar solution for the Adam optimizer. Let us know if you want us to split that into a separate PR. We think this is ready to be merged; a follow-up PR should handle updating the utility functions to support a new folded layer type (e.g. convert_folded_model_to_normal, qtools, print_qmodel_summary, model_quantize). |
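For context on the optimizer workaround mentioned above, the usual shape of such a fix is sketched below. This is the general pattern for the Keras legacy-optimizer change, not the actual diff in 5b1fe84:

```python
import tensorflow as tf


def get_adam(learning_rate=1e-3):
    # tf.keras.optimizers.legacy exists from TF 2.11 onward; earlier releases
    # only ship the original (now "legacy") implementation directly under
    # tf.keras.optimizers, so fall back to that when the namespace is absent.
    if hasattr(tf.keras.optimizers, "legacy"):
        return tf.keras.optimizers.legacy.Adam(learning_rate=learning_rate)
    return tf.keras.optimizers.Adam(learning_rate=learning_rate)
```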
Thanks, Javier. Hi Shan and Daniele, could you take a look? @lishanok @danielemoro |
Hi all, any chance you could take a look? Thanks! |
Sorry about the delay. We will take a look as soon as possible. -Shan |
Hi, @zhuangh @lishanok @danielemoro any chance you could take a look? Thank you! |
Hi @zhuangh @lishanok @danielemoro are you able to merge this? |
hi @lishanok can you take a look at this? thanks! |
Since some time has passed and our main branch has gone through some changes, could you rebase your PR onto the master branch and see if it passes all tests? Thanks!
force-pushed from 224797b to 96f8b96
@lishanok thanks for looking! CI tests pass after rebase. |
Hi. My team used this feature, and it worked well for us. We'd love to see it merged. Are there any blockers? |
@lishanok can this be merged? It would be very useful for our team. Let me know if I can help with anything. |
Hi, @lishanok @zhuangh @danielemoro , can you please have a look and check if this can be merged? |
Add support for qdense_batchnorm by folding the qdense kernel with the batchnorm parameters, then computing the qdense_batchnorm output using the qdense inputs and the folded kernel.
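As a sketch of what this folding amounts to, a minimal inference-time forward pass could look like the following. The function and parameter names are assumptions for illustration, not the PR's actual implementation:

```python
import tensorflow as tf


def qdense_batchnorm_forward(x, kernel, bias, gamma, beta,
                             moving_mean, moving_variance, epsilon,
                             kernel_quantizer):
    """Sketch of a folded dense + batch-norm forward pass (inference)."""
    scale = gamma / tf.sqrt(moving_variance + epsilon)
    folded_kernel = kernel * scale                       # BN scale folded into the kernel
    folded_bias = beta + (bias - moving_mean) * scale    # BN shift folded into the bias
    # Quantize the folded kernel, so the quantizer sees the weights that will
    # actually be deployed, then apply it directly to the qdense inputs.
    return tf.matmul(x, kernel_quantizer(folded_kernel)) + folded_bias
```

The key point is that quantization happens after folding, so the quantized weights the model trains with are the same folded weights that would be used at deployment.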