
关于遮挡鲁棒性 About occlusion robustness #19

Open
ygtxr1997 opened this issue Nov 17, 2021 · 4 comments
@ygtxr1997

Some works have pointed out that on image classification, Transformers keep high accuracy even when a large fraction of pixels is removed, far surpassing CNNs.
Why, then, do your experimental results show that Transformers are less robust to occlusion than CNNs on the face recognition task?
Could you explain this?
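The pixel/patch-dropping test referred to above can be sketched as follows. This is a minimal NumPy illustration, not code from the paper or this repository; the 112×112 face-crop size, 16-pixel patch size, and 50% drop ratio are assumptions for the example.

```python
import numpy as np

def drop_patches(image, patch=16, drop_ratio=0.5, seed=0):
    """Zero out a random subset of non-overlapping patches,
    mimicking a patch-drop occlusion robustness test."""
    h, w = image.shape[:2]
    rows, cols = h // patch, w // patch
    n_drop = int(rows * cols * drop_ratio)
    rng = np.random.default_rng(seed)
    idx = rng.choice(rows * cols, size=n_drop, replace=False)
    out = image.copy()
    for i in idx:
        r, c = divmod(i, cols)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0
    return out

img = np.ones((112, 112, 3), dtype=np.float32)  # face-crop sized input
occluded = drop_patches(img, patch=16, drop_ratio=0.5)
```

One would then compare verification accuracy on such occluded inputs across the ViT and CNN models to quantify the robustness gap.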

@zhongyy
Owner

zhongyy commented Nov 17, 2021

Could you share the relevant paper? I'd like to read it.

@ygtxr1997
Author

"Intriguing Properties of Vision Transformers" is the paper I saw.
Also, a question about your occluded-face experiments: no occlusion was added during training, and occlusion was applied only at test time, right?

@zhongyy
Owner

zhongyy commented Nov 17, 2021

> "Intriguing Properties of Vision Transformers" is the paper I saw. Also, a question about your occluded-face experiments: no occlusion was added during training, and occlusion was applied only at test time, right?

Yes, that's right: occlusion is added only at test time.

@ygtxr1997
Author

Two more questions, if I may:
(1) How did you find a suitable learning rate when using the AdamW optimizer?
(2) My approach is to train for 8000 steps and then pick whichever learning-rate setting gives the highest accuracy on the LFW and CFP-FP test sets. Because of limited resources, I cannot train every setting to completion and compare final accuracies. Is this way of searching for a learning rate reasonable?
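The short-run sweep described in (2) can be sketched as follows. This is a hypothetical outline, not the repository's training code; `short_run_accuracy` stands in for "train 8000 steps with AdamW at this learning rate, then evaluate on LFW / CFP-FP", and the toy accuracy function is purely illustrative.

```python
import math

def pick_learning_rate(candidates, short_run_accuracy):
    """Run a short training budget per candidate LR and keep the best.

    `short_run_accuracy(lr)` is assumed to train for a fixed number of
    steps (e.g. 8000) and return validation accuracy.
    """
    results = {lr: short_run_accuracy(lr) for lr in candidates}
    best_lr = max(results, key=results.get)
    return best_lr, results[best_lr]

# Toy stand-in: accuracy peaks near lr = 1e-3 (illustrative numbers only).
def toy_accuracy(lr):
    return 1.0 - 0.1 * abs(math.log10(lr) + 3)

best_lr, best_acc = pick_learning_rate([1e-4, 3e-4, 1e-3, 3e-3], toy_accuracy)
```

One caveat with this protocol: early-training accuracy can favor larger learning rates than the one that is best at convergence, so the short-run ranking is a heuristic rather than a guarantee.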
