This example trains a GAT model on the OGBN-Products and OGBN-Papers100M datasets on CPUs. It uses the optimizations in DGL as well as those in this extension for the MLP part of GNN training.
Create and activate the conda environment as described in this README.
Install the common GNN dependencies as described in this README.
To recompile the extension:
$make -C ../../.. reinstall
OGBN-Products

For FP32 training:
To run the baseline:
$bash ./run.sh ogbn-products
To run the optimized version:
$bash ./run.sh ogbn-products --opt_mlp
For BF16 training (works only with the optimized version):
$bash ./run.sh ogbn-products --opt_mlp --use_bf16
FP32 accuracy with the optimized version on an Intel® Xeon® Platinum 8380 server: 78.x % (SOTA)
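The --use_bf16 flag switches training to bfloat16, which keeps FP32's 8-bit exponent but shortens the mantissa from 23 to 7 bits, so it covers the same numeric range at lower precision. A minimal pure-Python sketch that emulates the conversion by truncation (illustrative only; the extension and hardware do this natively, and real conversion typically rounds rather than truncates):

```python
import struct

def to_bf16(x: float) -> float:
    """Emulate FP32 -> BF16 by zeroing the low 16 bits of the FP32 encoding."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

print(to_bf16(3.141592653589793))  # 3.140625: mantissa precision is reduced
print(to_bf16(1e30))               # still ~1e30: exponent range matches FP32
```

Because the exponent range is unchanged, BF16 rarely overflows or underflows where FP32 would not, which is why GNN training usually converges without loss scaling.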
OGBN-Papers100M

For FP32 training:
To run the baseline:
$bash ./run.sh ogbn-papers100M
To run the optimized version:
$bash ./run.sh ogbn-papers100M --opt_mlp
For BF16 training (works only with the optimized version):
$bash ./run.sh ogbn-papers100M --opt_mlp --use_bf16
FP32 accuracy with the optimized version on an Intel® Xeon® Platinum 8380 server: 65.x % (SOTA)