Diffusion Models for Adversarial Purification

Official PyTorch implementation of the ICML 2022 paper:
Diffusion Models for Adversarial Purification
Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, Anima Anandkumar
https://diffpure.github.io

Abstract: Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model. These methods make no assumptions about the form of the attack or the classification model, and thus can defend pre-existing classifiers against unseen threats. However, their performance currently falls behind adversarial training methods. In this work, we propose DiffPure, which uses diffusion models for adversarial purification: given an adversarial example, we first diffuse it with a small amount of noise following a forward diffusion process, and then recover the clean image through a reverse generative process. To evaluate our method against strong adaptive attacks in an efficient and scalable way, we propose to use the adjoint method to compute full gradients of the reverse generative process. Extensive experiments on three image datasets (CIFAR-10, ImageNet, and CelebA-HQ) with three classifier architectures (ResNet, WideResNet, and ViT) demonstrate that our method achieves state-of-the-art results, outperforming current adversarial training and adversarial purification methods, often by a large margin.
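The purification step itself is conceptually simple. Below is a minimal PyTorch sketch of the idea, not the repo's exact API: it assumes a pre-trained VP-SDE score network score_model(x, t) approximating the score of the data distribution, and uses plain Euler-Maruyama integration for the reverse process; the actual implementation additionally uses the adjoint method to backpropagate through this reverse process when evaluating adaptive attacks.

import torch

def purify(x, score_model, t_star=0.1, n_steps=100, beta_min=0.1, beta_max=20.0):
    """Diffuse x (in [-1, 1]) to time t_star, then denoise it back to t=0."""
    # Forward process: sample from the closed-form VP-SDE perturbation kernel.
    b = x.shape[0]
    t = torch.full((b,), t_star, device=x.device)
    log_mean_coeff = -0.25 * t**2 * (beta_max - beta_min) - 0.5 * t * beta_min
    mean = torch.exp(log_mean_coeff)[:, None, None, None] * x
    std = torch.sqrt(1.0 - torch.exp(2.0 * log_mean_coeff))[:, None, None, None]
    x_t = mean + std * torch.randn_like(x)

    # Reverse process: Euler-Maruyama on the reverse-time SDE from t_star to 0.
    dt = -t_star / n_steps
    for i in range(n_steps):
        t_cur = torch.full((b,), t_star + i * dt, device=x.device)
        beta = (beta_min + t_cur * (beta_max - beta_min))[:, None, None, None]
        score = score_model(x_t, t_cur)           # approximates grad_x log p_t(x)
        drift = -0.5 * beta * x_t - beta * score  # reverse-time drift
        x_t = x_t + drift * dt + torch.sqrt(beta * (-dt)) * torch.randn_like(x_t)
    return x_t

The purified output is then fed to an off-the-shelf classifier; no retraining of the classifier is needed.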

Requirements

  • 1-4 high-end NVIDIA GPUs with 32 GB of memory.
  • 64-bit Python 3.8.
  • CUDA 11.0 and Docker must be installed first.
  • Installation of the required library dependencies with Docker:
    docker build -f diffpure.Dockerfile --tag=diffpure:0.0.1 .
    docker run -it -d --gpus all --name diffpure --shm-size 8G -v $(pwd):/workspace -p 5001:6006 diffpure:0.0.1  # use --gpus '"device=0"' to pin one GPU
    docker exec -it diffpure bash
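Once inside the container, a quick way to confirm that PyTorch sees the GPUs (a generic sanity check, not part of the repo):

import torch
print(torch.__version__, torch.cuda.is_available(), torch.cuda.device_count())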

Data and pre-trained models

Before running our code on ImageNet and CelebA-HQ, you first need to download these two datasets. For example, you can follow the instructions to download CelebA-HQ. Note that we use the LMDB format for ImageNet, so you may need to convert the ImageNet dataset to LMDB (a rough conversion sketch is given below). There is no need to download CIFAR-10 separately.
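As an illustration, an ImageFolder-style directory tree can be packed into LMDB roughly as follows. This is a minimal sketch under assumed conventions; check the repo's ImageNet dataset loader for the exact key/value format it expects:

import os
import lmdb
import pickle

def folder_to_lmdb(image_dir, lmdb_path, map_size=1 << 40):
    # Store raw image bytes plus the class-folder name under integer keys.
    env = lmdb.open(lmdb_path, map_size=map_size)
    with env.begin(write=True) as txn:
        idx = 0
        for cls in sorted(os.listdir(image_dir)):
            cls_dir = os.path.join(image_dir, cls)
            for name in sorted(os.listdir(cls_dir)):
                with open(os.path.join(cls_dir, name), "rb") as f:
                    txn.put(str(idx).encode(), pickle.dumps((f.read(), cls)))
                idx += 1
        txn.put(b"length", str(idx).encode())
    env.close()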

Note that you have to put all the datasets in the datasets directory.

For the pre-trained diffusion models, you need to first download them from the following links:

For the pre-trained classifiers, most of them do not need to be downloaded separately, except for

Note that you have to put all the pre-trained models in the pretrained directory.

Run experiments on CIFAR-10

AutoAttack Linf

  • To get results of defending against AutoAttack Linf (the Rand version):
cd run_scripts/cifar10
bash run_cifar_rand_inf.sh [seed_id] [data_id]  # WideResNet-28-10
bash run_cifar_rand_inf_70-16-dp.sh [seed_id] [data_id]  # WideResNet-70-16
bash run_cifar_rand_inf_rn50.sh [seed_id] [data_id]  # ResNet-50
  • To get results of defending against AutoAttack Linf (the Standard version):
cd run_scripts/cifar10
bash run_cifar_stand_inf.sh [seed_id] [data_id]  # WideResNet-28-10
bash run_cifar_stand_inf_70-16-dp.sh [seed_id] [data_id]  # WideResNet-70-16
bash run_cifar_stand_inf_rn50.sh [seed_id] [data_id]  # ResNet-50

Note that [seed_id] is used for getting error bars, and [data_id] is used for sampling a fixed set of images.

To reproduce the numbers in the paper, we recommend using three seeds (e.g., 121..123) for [seed_id] and eight seeds (e.g., 0..7) for [data_id], and averaging all results across [seed_id] and [data_id]. To measure the worst-case defense performance of our method, the reported robust accuracy is the minimum of the mean robust accuracies of the two versions, Rand and Standard.
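Concretely, the aggregation amounts to the following (a minimal sketch; the accuracy values below are placeholders for the per-run robust accuracies printed by the scripts):

import numpy as np

# robust accuracy of each run, indexed by [seed_id, data_id] (placeholder values)
acc_rand = np.zeros((3, 8))    # Rand version: 3 seed_ids x 8 data_ids
acc_stand = np.zeros((3, 8))   # Standard version
robust_acc = min(acc_rand.mean(), acc_stand.mean())  # worst case over the two versions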

AutoAttack L2

  • To get results of defending against AutoAttack L2 (the Rand version):
cd run_scripts/cifar10
bash run_cifar_rand_L2.sh [seed_id] [data_id]  # WideResNet-28-10
bash run_cifar_rand_L2_70-16-dp.sh [seed_id] [data_id]  # WideResNet-70-16
bash run_cifar_rand_L2_rn50.sh [seed_id] [data_id]  # ResNet-50
  • To get results of defending against AutoAttack L2 (the Standard version):
cd run_scripts/cifar10
bash run_cifar_stand_L2.sh [seed_id] [data_id]  # WideResNet-28-10
bash run_cifar_stand_L2_70-16-dp.sh [seed_id] [data_id]  # WideResNet-70-16
bash run_cifar_stand_L2_rn50.sh [seed_id] [data_id]  # ResNet-50

Note that [seed_id] is used for getting error bars, and [data_id] is used for sampling a fixed set of images.

To reproduce the numbers in the paper, we recommend using three seeds (e.g., 121..123) for [seed_id] and eight seeds (e.g., 0..7) for [data_id], and averaging all results across [seed_id] and [data_id]. To measure the worst-case defense performance of our method, the reported robust accuracy is the minimum of the mean robust accuracies of the two versions, Rand and Standard.

StAdv

  • To get results of defending against StAdv:
cd run_scripts/cifar10
bash run_cifar_stadv_rn50.sh [seed_id] [data_id]  # ResNet-50

Note that [seed_id] is used for getting error bars, and [data_id] is used for sampling a fixed set of images.

To reproduce the numbers in the paper, we recommend using three seeds (e.g., 121..123) for [seed_id] and eight seeds (e.g., 0..7) for [data_id], and averaging all results across [seed_id] and [data_id].

BPDA+EOT

  • To get results of defending against BPDA+EOT:
cd run_scripts/cifar10
bash run_cifar_bpda_eot.sh [seed_id] [data_id]  # WideResNet-28-10

Note that [seed_id] is used for getting error bars, and [data_id] is used for sampling a fixed set of images.

To reproduce the numbers in the paper, we recommend using three seeds (e.g., 121..123) for [seed_id] and five seeds (e.g., 0..4) for [data_id], and averaging all results across [seed_id] and [data_id].
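For reference, BPDA+EOT estimates gradients through the stochastic purifier by treating purification as the identity in the backward pass (BPDA) and averaging gradients over several random purification runs (EOT). A minimal sketch with hypothetical purify and classifier handles, not the repo's attack code:

import torch
import torch.nn.functional as F

def bpda_eot_grad(x, y, purify, classifier, n_eot=15):
    # Average the BPDA gradient estimate over n_eot stochastic purification runs.
    grad = torch.zeros_like(x)
    for _ in range(n_eot):
        x_in = x.clone().detach().requires_grad_(True)
        with torch.no_grad():
            x_pur = purify(x_in)  # stochastic purifier, not differentiated through
        # BPDA: route the backward pass around purify as if it were the identity.
        x_bpda = x_in + (x_pur - x_in).detach()
        loss = F.cross_entropy(classifier(x_bpda), y)
        grad = grad + torch.autograd.grad(loss, x_in)[0]
    return grad / n_eot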

Run experiments on ImageNet

AutoAttack Linf

  • To get results of defending against AutoAttack Linf (the Rand version):
cd run_scripts/imagenet
bash run_in_rand_inf.sh [seed_id] [data_id]  # ResNet-50
bash run_in_rand_inf_50-2.sh [seed_id] [data_id]  # WideResNet-50-2
bash run_in_rand_inf_deits.sh [seed_id] [data_id]  # DeiT-S
  • To get results of defending against AutoAttack Linf (the Standard version):
cd run_scripts/imagenet
bash run_in_stand_inf.sh [seed_id] [data_id]  # ResNet-50
bash run_in_stand_inf_50-2.sh [seed_id] [data_id]  # WideResNet-50-2
bash run_in_stand_inf_deits.sh [seed_id] [data_id]  # DeiT-S

Note that [seed_id] is used for getting error bars, and [data_id] is used for sampling a fixed set of images.

To reproduce the numbers in the paper, we recommend using three seeds (e.g., 121..123) for [seed_id] and 32 seeds (e.g., 0..31) for [data_id], and averaging all results across [seed_id] and [data_id]. To measure the worst-case defense performance of our method, the reported robust accuracy is the minimum of the mean robust accuracies of the two versions, Rand and Standard.

Run experiments on CelebA-HQ

BPDA+EOT

  • To get results of defending against BPDA+EOT:
cd run_scripts/celebahq
bash run_celebahq_bpda_glasses.sh [seed_id] [data_id]  # the glasses attribute
bash run_celebahq_bpda_smiling.sh [seed_id] [data_id]  # the smiling attribute

Note that [seed_id] is used for getting error bars, and [data_id] is used for sampling a fixed set of images.

To reproduce the numbers in the paper, we recommend using three seeds (e.g., 121..123) for [seed_id] and 64 seeds (e.g., 0..63) for [data_id], and averaging all results across [seed_id] and [data_id].

License

Please check the LICENSE file. This work may be used non-commercially, meaning for research or evaluation purposes only. For business inquiries, please contact researchinquiries@nvidia.com.

Citation

Please cite our paper if you use this codebase:

@inproceedings{nie2022DiffPure,
  title={Diffusion Models for Adversarial Purification},
  author={Nie, Weili and Guo, Brandon and Huang, Yujia and Xiao, Chaowei and Vahdat, Arash and Anandkumar, Anima},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2022}
}
