
one_hot.grad is None at the return of textual_attack.py's token_gradient method, causing an error #1

Open
Thunderank opened this issue Apr 13, 2024 · 6 comments

Comments

@Thunderank

[screenshot: QQ截图20240413232149]
As shown in the screenshot, one_hot.grad is still None after loss.backward().
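For context, the usual cause of this symptom is that one_hot is not a leaf tensor with requires_grad enabled when it enters the forward pass. Below is a minimal, self-contained sketch of the token-gradient pattern; the model here is a stand-in (a plain embedding matrix and a sum loss, not the repository's actual CosineSimilarityLoss), used only to show when .grad is populated:

```python
import torch

# Hypothetical stand-in for the real setup, for illustration only.
vocab_size, embed_dim = 10, 4
embedding_matrix = torch.randn(vocab_size, embed_dim)

token_ids = torch.tensor([3, 7])
one_hot = torch.zeros(len(token_ids), vocab_size)
one_hot.scatter_(1, token_ids.unsqueeze(1), 1.0)
one_hot.requires_grad_()  # without this line, one_hot.grad stays None

embeds = one_hot @ embedding_matrix  # differentiable lookup via matmul
loss = embeds.sum()  # stand-in for the real loss
loss.backward()

print(one_hot.grad is not None)  # True: the gradient reached one_hot
print(one_hot.grad.shape)        # torch.Size([2, 10])
```

Note that requires_grad_() is called after the in-place scatter_ so that one_hot remains a leaf tensor; if one_hot were instead produced by another differentiable op, it would need retain_grad() for .grad to be filled in.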

@Windy4096

Hello, did you solve this problem?

@Thunderank
Author

> Hello, did you solve this problem?

No, I'm still stuck here.
I have tested all the relevant variables and now suspect that something is wrong with the "CosineSimilarityLoss" class.

@yangyijune
Collaborator

Hi! Have you resolved this issue? I haven't come across this problem myself. It's possible that there is some gradient-blocking code in your transformers package's clip_modeling.py, such as a @torch.no_grad() decorator, which could be cutting the graph before the gradient reaches one_hot.
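To illustrate the suggestion above: any function in the forward path wrapped in @torch.no_grad() detaches its output from the autograd graph, so everything upstream (including one_hot) ends up with .grad left as None. The encoder below is a hypothetical stand-in, not the actual CLIP code:

```python
import torch

@torch.no_grad()  # this decorator severs the autograd graph
def encode_blocked(x, w):
    return x @ w

def encode_ok(x, w):
    return x @ w

w = torch.randn(5, 3)
x = torch.zeros(2, 5, requires_grad=True)

out = encode_blocked(x, w)
print(out.requires_grad)  # False: no graph, backward() cannot reach x

out = encode_ok(x, w)
out.sum().backward()
print(x.grad is not None)  # True once the no_grad wrapper is gone
```

Checking output_tensor.requires_grad right after the forward pass is a quick way to locate where in the pipeline the graph is being cut.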

@1998v7

1998v7 commented May 26, 2024

Have you solved this problem?

@Thunderank
Author

Thunderank commented May 27, 2024 via email

@tsrigo

tsrigo commented Oct 20, 2024

I just ran into this problem as well. It turned out that the attribute had not been added everywhere it was needed; there are four locations in total.
[screenshot of the four locations]


5 participants