# 🛡 torchattack

A set of adversarial attacks in PyTorch.
Install from GitHub source:

```shell
python -m pip install git+https://github.com/spencerwooo/torchattack
```

Install from Gitee mirror:

```shell
python -m pip install git+https://gitee.com/spencerwoo/torchattack
```
```python
import torch

from torchattack import AttackModel, FGSM, MIFGSM

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load a pretrained model to attack
model = AttackModel.from_pretrained(model_name='resnet50', device=device)
transform, normalize = model.transform, model.normalize

# Initialize an attack
attack = FGSM(model, normalize, device)

# Initialize an attack with extra params
attack = MIFGSM(model, normalize, device, eps=0.03, steps=10, decay=1.0)
```
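For intuition on what FGSM does under the hood: it takes a single step of size `eps` in the direction of the sign of the loss gradient with respect to the input. A dependency-free 1-D sketch (the logistic model and the numbers are illustrative only, not torchattack's API):

```python
import math

def fgsm_step(x, y, w, b, eps):
    """One FGSM step against a logistic model p = sigmoid(w*x + b).

    Moves x by eps in the sign of dL/dx, where L is the
    cross-entropy loss -- the core idea of the FGSM attack.
    """
    p = 1.0 / (1.0 + math.exp(-(w * x + b)))
    grad_x = (p - y) * w  # dL/dx for cross-entropy on a logistic model
    sign = (grad_x > 0) - (grad_x < 0)
    return x + eps * sign

# The perturbation magnitude is exactly eps (or 0 at a zero gradient)
x_adv = fgsm_step(x=0.5, y=1.0, w=2.0, b=0.0, eps=0.03)
```

For `y=1` the gradient is negative here, so the step decreases `x`, lowering the model's confidence in the true label.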
Check out `torchattack.eval.runner` for a quick example.
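The `MIFGSM` parameters shown earlier (`eps`, `steps`, `decay`) map onto the MI-FGSM update rule: accumulate a decayed momentum of normalized gradients over `steps` iterations, stepping by `eps / steps` each time and clipping the total perturbation to the `eps` ball. A 1-D pure-Python sketch, where the scalar `grad_fn` is a hypothetical stand-in for a real model's input gradient:

```python
def mifgsm(x, grad_fn, eps=0.03, steps=10, decay=1.0):
    """Sketch of the MI-FGSM update in one dimension.

    g accumulates L1-normalized gradients with momentum `decay`;
    each step moves x by eps / steps in the sign of g, and the
    total perturbation is clamped to [x - eps, x + eps].
    """
    alpha = eps / steps
    g = 0.0
    x0, xt = x, x
    for _ in range(steps):
        grad = grad_fn(xt)
        # accumulate the normalized gradient into the momentum term
        g = decay * g + grad / (abs(grad) or 1.0)
        step = alpha * ((g > 0) - (g < 0))
        xt = xt + step
        # keep the total perturbation inside the eps ball
        xt = min(max(xt, x0 - eps), x0 + eps)
    return xt
```

With a gradient that never changes sign, the sketch walks straight to the edge of the `eps` ball, e.g. `mifgsm(0.5, lambda x: 1.0)` lands at `0.53`.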
Gradient-based attacks:
Others:
| Name | Publication | Paper (Open Access) | Class Name |
| --- | --- | --- | --- |
| DeepFool | CVPR 2016 | DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks | `DeepFool` |
| GeoDA | CVPR 2020 | GeoDA: A Geometric Framework for Black-box Adversarial Attacks | `GeoDA` |
| SSP | CVPR 2020 | A Self-supervised Approach for Adversarial Robustness | `SSP` |
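For the binary linear case, DeepFool has a closed form: the minimal L2 perturbation that reaches the decision boundary `w·x + b = 0` is `r = -(w·x + b) / ||w||² · w`. A 2-D sketch with plain tuples (illustrative only; torchattack's `DeepFool` iterates this projection for deep networks):

```python
def deepfool_linear(x, w, b):
    """Minimal L2 perturbation projecting x onto the hyperplane
    w . x + b = 0 -- DeepFool's closed form for a binary linear
    classifier (a single step of the full iterative attack)."""
    f = w[0] * x[0] + w[1] * x[1] + b
    norm_sq = w[0] ** 2 + w[1] ** 2
    scale = -f / norm_sq
    return (x[0] + scale * w[0], x[1] + scale * w[1])
```

The returned point sits exactly on the decision boundary, which is why the full attack nudges slightly past it to flip the prediction.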
```shell
# Create a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install deps with dev extras
python -m pip install -r requirements.txt
python -m pip install -e ".[dev]"
```