
Test result using ur weight #9

Open
EricZhou0221 opened this issue Dec 6, 2023 · 2 comments

Comments

@EricZhou0221

Hello, I am a first-year master's student at Shenzhen University. I tested the model on seven basic datasets using the weight parameters you provided, but the results do not align with those stated in the paper. My approach was to compute F1 and AUC for each image (because the model's output mask is binary, the AUC is computed from a single operating point) and then take the mean, so all the results I obtained are mean values.
I would like to ask whether the weights you provided are pre-trained weights or weights obtained through benchmark training.
[Attached screenshot: WeChat image 20231206214457]
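The per-image evaluation described above can be sketched as follows. This is a minimal illustration, not code from the repository: function names (`per_image_scores`, `mean_scores`) and the mask layout are assumptions, and with a hard binary prediction `roc_auc_score` reduces to the single ROC point mentioned in the comment.

```python
# Sketch of per-image pixel-level F1/AUC for binary masks, then the
# dataset mean. Names and data layout are illustrative assumptions.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

def per_image_scores(pred_mask: np.ndarray, gt_mask: np.ndarray):
    """Flatten one image's masks and compute pixel-level F1 and AUC."""
    pred = pred_mask.ravel().astype(int)
    gt = gt_mask.ravel().astype(int)
    f1 = f1_score(gt, pred)
    # With a binary prediction the ROC curve has one operating point,
    # so AUC here is determined by that single point.
    auc = roc_auc_score(gt, pred)
    return f1, auc

def mean_scores(pairs):
    """Mean F1/AUC over (pred_mask, gt_mask) pairs for a dataset."""
    scores = [per_image_scores(p, g) for p, g in pairs]
    f1s, aucs = zip(*scores)
    return float(np.mean(f1s)), float(np.mean(aucs))
```

Note that `roc_auc_score` requires both classes to be present in the ground-truth mask, so fully-authentic images (all-zero masks) would need to be handled separately.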

@Knightzjz
Owner

Hi Zhou, sorry for the late reply; I have been on leave for a few days.
I checked the pre-trained weights on Colab and found that I might have uploaded an incorrect version due to my bad naming habits. I will look into the pre-trained weights issue further once I am back on campus with my desktop at hand. Thanks for pointing this issue out to me.
BTW, if you are following our research on IML, you may check this answer. As planned, the entire training code for our latest benchmark model, IML-ViT, is about to be released. I suggest you follow IML-ViT's structure, since we recently confirmed that the ViT architecture is much more powerful than CNNs for IML and anti-AIGC tasks, and we will fully open-source this model. Also, IML-ViT has now been accepted at AAAI 24.
Again, thanks for your careful testing; I will correct the pre-trained weights A.S.A.P.

@EricZhou0221
Author

Hi Zhou, no worries about the delay, and thanks for getting back to me. I appreciate your thorough investigation into the pre-trained weights issue. It's completely understandable, and I look forward to the corrected version. I will continue to follow and learn from the related research on IML, and I am about to read and study the paper and associated code you released for IML-ViT. Thank you for your guidance, and congratulations on your paper being accepted at AAAI.
