Text embeddings in distillation loss #1
Comments
Thanks so much for your comments! Regarding your questions: “Shouldn't you also do the opposite comparison too?” “Also, if the method is "LwF", shouldn't the logits_current be between the current model's embeddings of the ref_images and the ref_texts, instead of between the current model's embeddings of the ref_images and the ref model's embeddings of the ref_texts?” “If that isn't the case, there is no possibility of fine-tuning the text encoder only. Why is this discarded for continual CLIP?” Again, thanks so much for your constructive comments! Many ablation studies could be done in the future to explore continual learning of vision-language models!
In the distillation loss of continual-CLIP:
https://github.com/Thunderbeee/ZSCL/blob/main/cil/continual_clip/models.py#LL260C4-L260C4
Shouldn't you also do the opposite comparison, i.e., compare the current model's embeddings of the ref_texts with the original model's embeddings of the ref_images?
Also, if the method is "LwF", shouldn't the logits_current be between the current model's embeddings of the ref_images and the ref_texts, instead of between the current model's embeddings of the ref_images and the ref model's embeddings of the ref_texts? If that isn't the case, there is no possibility of fine-tuning the text encoder only. Why is this discarded for continual CLIP?
Sorry if these questions are pretty basic.
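To make the two variants concrete, here is a minimal sketch of the distillation term as I understand it (this is not the repository's actual code; `current_model`, `ref_model`, the `encode_image`/`encode_text` interface, and the temperature `T` are assumed names for illustration):

```python
# Minimal sketch (assumed names, not the repository's actual code) of the
# distillation term discussed above, plus the "opposite comparison".
import torch
import torch.nn.functional as F

def distillation_loss(current_model, ref_model, ref_images, ref_texts, T=2.0):
    # The frozen reference (original) CLIP model provides the targets.
    with torch.no_grad():
        ref_img = F.normalize(ref_model.encode_image(ref_images), dim=-1)
        ref_txt = F.normalize(ref_model.encode_text(ref_texts), dim=-1)

    # The model currently being fine-tuned.
    cur_img = F.normalize(current_model.encode_image(ref_images), dim=-1)
    cur_txt = F.normalize(current_model.encode_text(ref_texts), dim=-1)

    # Image-side term as described in the issue: current image embeddings
    # compared against the reference model's text embeddings.
    logits_current = cur_img @ ref_txt.t() / T
    # The LwF-style alternative raised in the second question would instead be:
    # logits_current = cur_img @ cur_txt.t() / T
    logits_ref = ref_img @ ref_txt.t() / T
    loss_img = F.kl_div(F.log_softmax(logits_current, dim=-1),
                        F.softmax(logits_ref, dim=-1), reduction="batchmean")

    # "Opposite comparison" from the first question: current text embeddings
    # compared against the reference model's image embeddings.
    logits_current_t = cur_txt @ ref_img.t() / T
    logits_ref_t = ref_txt @ ref_img.t() / T
    loss_txt = F.kl_div(F.log_softmax(logits_current_t, dim=-1),
                        F.softmax(logits_ref_t, dim=-1), reduction="batchmean")

    return loss_img + loss_txt
```

With only the image-side term and the reference model's text embeddings, gradients flow solely through the image encoder; adding the text-side term (or using the current model's text embeddings) is what would allow the text encoder to be fine-tuned as well.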