Fix
- Changed bias initialization to avoid NaN errors during training (d00c408)
Docs
- Added contributions list to README.md (5744087)
More info from @srigas regarding the fix:
Hello again, this is another fix for a common problem reported in the Issues of the mtad-gat-pytorch repository. The use_bias parameter of the GAT layers is set to True by default. However, the biases are initialized with torch.empty, so the corresponding tensors can contain arbitrary (possibly non-finite) values, leading to NaN values during training. Setting the use_bias parameter to False avoids the NaNs, but prevents the user from including biases in their model's architecture. The proposed solution, initializing the biases to zero, solves both problems and is also the de facto approach in other big projects (see the GATConv class from PyG, for example, where zeros(self.bias) is called in the reset_parameters function, indicating that a zero initialization of the biases is acceptable).
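A minimal sketch of the pattern described above, not the repository's actual layer code; the class name, shapes, and forward logic are illustrative, and only the bias handling mirrors the fix:

```python
import torch
import torch.nn as nn

class GATLayerSketch(nn.Module):
    """Illustrative GAT-style layer showing the bias initialization fix."""

    def __init__(self, in_features: int, out_features: int, use_bias: bool = True):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_features, out_features))
        nn.init.xavier_uniform_(self.weight)
        if use_bias:
            # Problematic pattern: torch.empty leaves the tensor with
            # whatever bytes happen to be in memory, which can include
            # non-finite values that poison training with NaNs:
            #   self.bias = nn.Parameter(torch.empty(out_features))
            #
            # Fix: initialize the bias to zeros, mirroring PyG's GATConv,
            # which calls zeros(self.bias) in reset_parameters().
            self.bias = nn.Parameter(torch.zeros(out_features))
        else:
            self.register_parameter("bias", None)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x @ self.weight  # attention logic omitted for brevity
        if self.bias is not None:
            out = out + self.bias
        return out
```

This keeps use_bias=True usable: the bias starts at zero and is learned during training, instead of starting from uninitialized memory.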