`q` parameter does not receive gradient even when `trainable_q = True` in `MagNet_link_prediction` #60

Comments
Thank you for your interest in our package. Do you have any idea how to fix this, and perhaps could you open a pull request for it?
Hi @SherylHYX, I think the issue is a (perhaps intended) mismatch in nomenclature between the paper, the documentation and the code. Essentially, the paper and the documentation define […]. If this interpretation is correct (please check!), it looks to me like an issue about conventions, so I'll leave it to you to decide whether to follow the paper or keep the code as is (in the latter case, the documentation could be corrected to specify that […]). Please let me know if I made any mistake :)
Thank you @ClaudMor for pointing this out. I have now fixed the documentation. Note that […].
Hi @SherylHYX, there may also be another case where `q` is not trained even if `trainable_q = True`, namely when `cached = True` as well. In fact, in that case `get_magnetic_Laplacian` and `__norm__` are never invoked, and therefore `q` does not receive gradients. If you think this is correct, perhaps it should be mentioned in the documentation? Thanks
Yes, this is correct: caching requires a fixed `q`.
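To illustrate the mechanism, here is a minimal plain-PyTorch sketch (not the package's actual code; `norm` below is a hypothetical stand-in for the `q`-dependent part of `get_magnetic_Laplacian` / `__norm__`) showing why reusing a cached tensor severs the gradient path to `q` while other parameters still train:

```python
import torch

# Hypothetical stand-in for the q-dependent phase computed inside
# get_magnetic_Laplacian / __norm__; any op on q here carries gradient.
def norm(q: torch.Tensor) -> torch.Tensor:
    return torch.cos(2 * torch.pi * q) * torch.ones(4)

q = torch.tensor(0.25, requires_grad=True)
weight = torch.ones(4, requires_grad=True)

# Uncached pass: q is part of the autograd graph, so q.grad is populated.
loss = (norm(q) * weight).sum()
loss.backward()
print(q.grad, weight.grad)  # both non-None

# "Cached" pass: reusing a stored result amounts to detaching it from the
# graph, so the ops connecting q to the loss are never re-recorded.
q.grad = weight.grad = None
cached = norm(q).detach()
loss = (cached * weight).sum()
loss.backward()
print(q.grad, weight.grad)  # q.grad is None, weight.grad is populated
```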
Describe the bug

The `q` parameter does not receive gradient even when `trainable_q = True` in `MagNet_link_prediction`.

To Reproduce

The following MWE (taken from here) performs a `.backward()` on `MagNet_link_prediction`. The `.grad` field of `q` is not updated, while the `.grad` field of `weight` is. `torchviz` confirms that `q` is not part of the backward graph. I believe the non-differentiable operation happens somewhere in `__norm__` or `get_magnetic_Laplacian`, but I haven't been able to identify it exactly.
Expected behavior

`model.Chebs[0].q.grad` should be properly updated.

Desktop:

Thanks in advance for your help.