a small model such as Mobilenet v2 for pre-training #25
Thank you, and we agree that this could be of general interest and value. We will consider running SparK on MobileNet (perhaps v2 and v3) soon, or you can try it yourself (see the tutorial at https://github.com/keyu-tian/SparK/tree/main/pretrain#tutorial-for-pretraining-your-own-cnn-model).
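Following the linked tutorial, a CNN has to expose its hierarchical feature maps so SparK can apply sparse masked modeling at each stage. Below is a minimal, hedged sketch of that interface using a toy stand-in for MobileNetV2 (the stage layout and channel counts here are illustrative, not MobileNetV2's real ones; check the tutorial for the exact method names and signatures SparK expects):

```python
# Hedged sketch: the three methods below follow the interface described in
# SparK's "pretraining your own CNN" tutorial; the network itself is a toy
# stand-in for MobileNetV2, not its real architecture.
import torch
import torch.nn as nn


class TinyConvNet(nn.Module):
    """Toy hierarchical CNN with 4 stages, each downsampling by 2x."""

    def __init__(self):
        super().__init__()
        chans = [16, 32, 64, 128]  # illustrative channel counts
        stages, in_c = [], 3
        for c in chans:
            stages.append(nn.Sequential(
                nn.Conv2d(in_c, c, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(c),
                nn.ReLU(inplace=True),
            ))
            in_c = c
        self.stages = nn.ModuleList(stages)
        self.chans = chans

    def get_downsample_ratio(self) -> int:
        # Total downsampling of the deepest feature map: 2**4 = 16.
        return 16

    def get_feature_map_channels(self) -> list:
        # Channel count of each stage's output, shallow to deep.
        return self.chans

    def forward(self, x, hierarchical=False):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        # SparK needs all intermediate feature maps, not just the last one.
        return feats if hierarchical else x


net = TinyConvNet().eval()
with torch.no_grad():
    feats = net(torch.randn(2, 3, 64, 64), hierarchical=True)
shapes = [tuple(f.shape) for f in feats]
print(shapes)
```

For a real MobileNetV2 you would wrap an existing implementation (e.g. from timm or torchvision) and return the outputs of its downsampling stages the same way, instead of defining the stages by hand.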
@keyu-tian Can I use SwinV2-Base as the backbone for pre-training?
@xylcbd Sorry, but SparK is not suitable for this. SparK can pretrain any CNN, but SwinV2 is a transformer. You could use MAE or SimMIM to pretrain a Swin transformer instead.
Thank you for your excellent work. Replacing the transformer with a CNN does make deployment friendlier. Furthermore, I'm wondering whether pre-training a smaller model such as MobileNet v2 and then fine-tuning it on downstream tasks would be effective?