How to visualize the adjacency matrix? #106
Comments
Hey, the global skeleton graph learned by your trained model is saved in the model for each layer. So if you want to plot the graphs for the different subsets, access model.l1.gcn1.PA of your trained model (l1 is the first layer, and so on), then just plot it with matplotlib.
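For anyone who wants a concrete starting point, here is a minimal sketch of that suggestion. The tensor shape (3 subsets × 25 joints × 25 joints, typical for NTU-RGB+D models) and the random array standing in for the learned weights are assumptions; with a real trained model you would replace the random `PA` with `model.l1.gcn1.PA.detach().cpu().numpy()`.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script also runs without a display
import matplotlib.pyplot as plt

# Hypothetical stand-in for the learned adjacency tensor. With a trained model,
# use: PA = model.l1.gcn1.PA.detach().cpu().numpy()
num_subsets, V = 3, 25          # assumed: 3 adjacency subsets, 25 skeleton joints
PA = np.random.rand(num_subsets, V, V)

fig, axes = plt.subplots(1, num_subsets, figsize=(12, 4))
for k, ax in enumerate(axes):
    im = ax.imshow(PA[k], cmap="viridis")    # one heat map per adjacency subset
    ax.set_title(f"subset {k}")
    ax.set_xlabel("joint")
    ax.set_ylabel("joint")
    fig.colorbar(im, ax=ax, fraction=0.046)
fig.savefig("adjacency_l1.png")
```

The same loop can be repeated for `model.l2`, `model.l3`, etc. to compare how the learned graph changes across layers.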
@MBuxel
Hmm, I can't really help you there. I'm using the network with a different dataset; my model also learns a new matrix, but the results are not as good as those in the paper.
Hello, the adaptive graph convolution in the paper is structurally similar to the attention mechanism in the Transformer. I have seen a formula in other papers, h = Attention(Q_i, K_i, V_i) + A + Ψ, i ∈ {1, ..., S}, and the code there is the same as in this paper. The Transformer obtains global information across the joints. May I ask whether this adaptive graph convolution also captures global information? If not, does it capture only local information, i.e. aggregate information from adjacent joints like a traditional GCN? Looking forward to your reply. Thank you. @HeiHeiCCC
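To make the question concrete: the data-dependent term of adaptive graph convolution can be sketched as an embedded-similarity matrix, analogous to softmax(QKᵀ) in attention. This is a minimal NumPy sketch, not the authors' code; the shapes (25 joints, single frame) and weight names are assumptions for illustration. Because every joint is compared with every other joint, the resulting V × V matrix is dense, so aggregation is global rather than restricted to physically adjacent joints.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
V, C_in, C_embed = 25, 3, 8             # assumed: joints, input channels, embedding size
x = rng.standard_normal((C_in, V))      # one frame of per-joint features
W_theta = rng.standard_normal((C_embed, C_in))   # hypothetical embedding weights
W_phi = rng.standard_normal((C_embed, C_in))

# Embedded similarity between all joint pairs, normalized per row:
# every joint attends to every other joint, giving a dense V x V matrix.
C_k = softmax((W_theta @ x).T @ (W_phi @ x), axis=-1)   # shape (V, V)
```

Since no entry of `C_k` is structurally zeroed out by the skeleton topology, the receptive field of this term is global; the fixed physical adjacency A only adds a local prior on top of it.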
In your paper, you mention visualizing the skeleton graph for different layers, such as the 3rd, 5th, and 7th layers, on sample data. Could you point me to where that part is in the code?