ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees

This repository is the official implementation of the WSDM 2024 paper:
ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees

Requirements

This code is implemented in Python 3.10 using PyTorch 2.0 and PyTorch Geometric 2.3.0. Refer to requirements.txt for the full list of dependencies.

Notes

  1. We use Weights & Biases (WandB) to track training progress and log experiment results. A WandB account is required to replicate the paper's results as described below; if you just want to train and evaluate the model, no WandB account is needed.

  2. We use Dask to parallelize running multiple experiments on high-performance computing clusters (e.g., SGE or SLURM). If you don't have access to a cluster, you can simply run the experiments sequentially on your own machine (see the usage section below).

  3. The code requires autodp version 0.2.1b or later. You can install the latest version directly from the GitHub repository using:

    pip install git+https://github.com/yuxiangw/autodp
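To confirm that the installed autodp meets the version requirement, you can compare the numeric components of its version string. This helper is a sketch of my own (the function names and the use of importlib.metadata are not part of the repository):

```python
import re
from importlib.metadata import version, PackageNotFoundError

def meets_minimum(ver: str, minimum=(0, 2, 1)) -> bool:
    """Compare only the numeric parts of a version string, e.g. "0.2.1b" -> (0, 2, 1)."""
    nums = tuple(int(x) for x in re.findall(r"\d+", ver)[:3])
    nums += (0,) * (3 - len(nums))  # pad short versions like "0.3"
    return nums >= minimum

def autodp_ok() -> bool:
    """Return True if autodp is installed and at least version 0.2.1."""
    try:
        return meets_minimum(version("autodp"))
    except PackageNotFoundError:
        return False
```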
    

Usage

Replicating the paper's results

To reproduce the paper's results, follow these steps:

  1. Set your WandB username in config/wandb.yaml (line 7). This is required to log the results to your WandB account.

  2. Run the following Python script:

    python experiments.py --generate
    

    This creates the file "jobs/experiments.sh" containing the commands to run all the experiments.
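Such a job file is essentially one command line per experiment configuration. As a purely illustrative sketch (not the repository's actual implementation; the grid keys and values below are hypothetical), it could be produced by expanding a hyperparameter grid:

```python
from itertools import product

# Illustrative hyperparameter grid; the real experiments.py defines its own.
grid = {
    "dataset": ["facebook", "reddit"],
    "epsilon": [1.0, 4.0],
}

def generate_jobs(grid):
    """Yield one 'python train.py ...' command per grid combination."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        args = " ".join(f"--{k} {v}" for k, v in zip(keys, values))
        yield f"python train.py {args}"

commands = list(generate_jobs(grid))
# 2 datasets x 2 epsilon values -> 4 commands
```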

  3. If you want to run the experiments on your own machine, run:

    sh jobs/experiments.sh
    

    This trains all the models required for the experiments one by one. If you have access to a supported HPC cluster instead, first configure your cluster settings in config/dask.yaml according to Dask-Jobqueue's documentation. Then run the following command:

    python experiments.py --run --scheduler <scheduler>
    

    where <scheduler> is the name of your scheduler (e.g., sge or slurm). The above command submits all the jobs to your cluster and runs them in parallel.
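For cluster runs, a minimal config/dask.yaml for SLURM might look like the following sketch. The resource values are illustrative assumptions, following Dask-Jobqueue's configuration schema; adjust them to your cluster:

```yaml
jobqueue:
  slurm:
    cores: 4              # CPU cores per job (illustrative)
    processes: 1          # Dask worker processes per job
    memory: 16GB          # memory per job (illustrative)
    walltime: "02:00:00"
    queue: null           # set to your partition name if required
```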

  4. Use the results.ipynb notebook to visualize the results as shown in the paper. Note that the figures use the Linux Libertine font, so you need to either have this font installed or change the font in the notebook.
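If Linux Libertine is not installed, one way to fall back gracefully in the notebook is to check Matplotlib's font registry before setting the font. This is a sketch of my own, not code from the notebook:

```python
from matplotlib import font_manager, rcParams

# Use Linux Libertine (the font from the paper's figures) if it is
# registered with Matplotlib; otherwise fall back to the default serif font.
available = {f.name for f in font_manager.fontManager.ttflist}
if "Linux Libertine" in available:
    rcParams["font.family"] = "Linux Libertine"
else:
    rcParams["font.family"] = "serif"
```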

Training individual models

Run the following command to see the list of available options for training individual models:

python train.py --help