
IP Modules for GNN Hardware Accelerators

This repo implements the hardware architecture (common IP modules for GNN minibatch training and full-batch inference) that appears in the following papers:

  • Bingyi Zhang, Hanqing Zeng, and Viktor Prasanna. "Hardware Acceleration of Large Scale GCN Inference." The 31st IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP), 2020. [PDF]
  • Hanqing Zeng and Viktor Prasanna. "GraphACT: Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms." The 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA), 2020. [PDF]

We will keep adding instructions for running the hardware modules here.

Software Platform

  • Quartus Prime Pro 20.2
  • ModelSim

Hardware Platform

  • Intel Stratix 10 GX

IP Configuration

In the feature aggregation module (a behavioral sketch follows the list below):

  • acc: select the Native Floating Point DSP Intel Stratix 10 FPGA IP. The detailed configuration is:

[Screenshot: Native Floating Point DSP IP configuration for acc]

  • sourcebuffer: select the RAM: 4-port Intel FPGA IP. The detailed configuration is:

[Screenshot: RAM: 4-port IP configuration for sourcebuffer]

  • fifoindpr: select the FIFO Intel FPGA IP. The detailed configuration is:

[Screenshot: FIFO IP configuration for fifoindpr]
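
Taken together, these three IPs form the feature aggregation datapath: fifoindpr buffers the incoming neighbor feature stream, sourcebuffer holds the feature vectors being read, and acc sums them per vertex. The Verilog below is only a behavioral sketch of such an accumulator, not the RTL in this repo: the module and port names are illustrative assumptions, and the floating-point add is modeled with an integer add so the sketch stays self-contained (in hardware it maps to the Native Floating Point DSP IP).

```verilog
// Behavioral sketch of a per-lane feature accumulator (illustrative only).
// Assumed interface: neighbor feature words stream in one per cycle; the
// last word of each vertex's neighbor list is tagged with in_last.
module feat_acc_sketch #(
    parameter W = 32                  // feature word width (FP32 in hardware)
)(
    input  wire          clk,
    input  wire          rst,
    input  wire          in_valid,    // a neighbor feature word is present
    input  wire [W-1:0]  in_feat,     // neighbor feature word (e.g. from fifoindpr)
    input  wire          in_last,     // last neighbor of the current vertex
    output reg           out_valid,   // aggregated feature word is ready
    output reg  [W-1:0]  out_sum      // aggregated feature word for the vertex
);
    reg [W-1:0] acc;                  // running partial sum

    // In the real design this add is performed by the Native Floating Point
    // DSP Intel Stratix 10 FPGA IP configured as an FP32 adder; the IP's
    // pipeline latency is abstracted away in this sketch.
    wire [W-1:0] add_result = acc + in_feat;

    always @(posedge clk) begin
        if (rst) begin
            acc       <= {W{1'b0}};
            out_valid <= 1'b0;
        end else begin
            out_valid <= 1'b0;
            if (in_valid) begin
                if (in_last) begin
                    out_sum   <= add_result;  // sum including the last neighbor
                    out_valid <= 1'b1;
                    acc       <= {W{1'b0}};   // restart for the next vertex
                end else begin
                    acc       <= add_result;  // keep accumulating
                end
            end
        end
    end
endmodule
```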

In the feature transformation module (a behavioral sketch follows below):

  • MAC: select the Native Floating Point DSP Intel Stratix 10 FPGA IP. The detailed configuration is:

[Screenshot: Native Floating Point DSP IP configuration for MAC]
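
The MAC IP carries out the multiply-accumulate operations of the dense feature transformation step (aggregated feature vectors multiplied by the layer's weight matrix). As above, the sketch below is illustrative only: names, control signals, and the integer arithmetic are assumptions, while the hardware uses the floating-point DSP IP in multiply-accumulate mode.

```verilog
// Behavioral sketch of one MAC lane used for feature transformation
// (illustrative only; names and integer arithmetic are assumptions).
// Each lane computes one dot product between a feature vector and one
// column of the weight matrix, one element pair per cycle.
module mac_lane_sketch #(
    parameter W = 32                  // operand width (FP32 in hardware)
)(
    input  wire          clk,
    input  wire          rst,
    input  wire          en,          // a feature/weight pair is valid this cycle
    input  wire          start,       // first pair of a new dot product
    input  wire [W-1:0]  feat,        // element of the aggregated feature vector
    input  wire [W-1:0]  weight,      // matching element of the weight column
    output reg  [W-1:0]  acc          // running dot-product value
);
    // In hardware the multiply-add maps to the Native Floating Point DSP
    // Intel Stratix 10 FPGA IP in multiply-accumulate mode; pipeline
    // latency is abstracted away in this sketch.
    always @(posedge clk) begin
        if (rst)
            acc <= {W{1'b0}};
        else if (en)
            acc <= (start ? {W{1'b0}} : acc) + feat * weight;
    end
endmodule
```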
