Attention Coefficient Matrix conversion #404
-
When converting the attention coefficients returned by GATConv to a dense matrix, shouldn't the self-loop indices be used for the scatter:

```python
import tensorflow as tf
from spektral.layers.ops import add_self_loops_indices

# Append the self-loop indices so they line up with the attention values
self_looped_idx = add_self_loops_indices(a.indices)
att_matrix = tf.scatter_nd(self_looped_idx, att[..., 0, 0], (n_nodes, n_nodes))
```

instead of the plain edge indices:

```python
tf.scatter_nd(a.indices, att[..., 0, 0], (n_nodes, n_nodes))
```

I've checked this against the GATConv code, and I think the first version might be correct.
-
Yes, this looks correct.
The attention coefficients of the GAT layer map 1:1 to the edges, and the self-loops are appended at the end of the edge list.
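For reference, here is a minimal end-to-end sketch; the toy graph, feature size, and single attention head are made-up assumptions for illustration, and `return_attn_coef=True` makes GATConv return the coefficients alongside the output:

```python
import tensorflow as tf
from spektral.layers import GATConv
from spektral.layers.ops import add_self_loops_indices

# Hypothetical toy inputs: node features and a sparse adjacency matrix.
n_nodes = 5
x = tf.random.normal((n_nodes, 8))
a = tf.SparseTensor(
    indices=[[0, 1], [1, 0], [1, 2], [2, 1], [2, 3], [3, 2], [3, 4], [4, 3]],
    values=tf.ones(8),
    dense_shape=(n_nodes, n_nodes),
)

# GATConv adds self-loops internally by default, so the returned
# coefficients cover every edge plus the appended self-loops.
gat = GATConv(channels=4, attn_heads=1, return_attn_coef=True)
_, att = gat([x, a])

# Scatter with the same self-looped indices so values and positions match.
self_looped_idx = add_self_loops_indices(a.indices)
att_matrix = tf.scatter_nd(self_looped_idx, att[..., 0, 0], (n_nodes, n_nodes))
```

`att_matrix[i, j]` should then hold the attention coefficient for edge `(i, j)` (up to the edge-index convention), with the self-loop coefficients on the diagonal.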
Were you able to achieve what you wanted to do?