Code for "Learning Perceptually-Aligned Representations via Adversarial Robustness"

These notebooks reproduce the experiments in our paper "Learning Perceptually-Aligned Representations via Adversarial Robustness" (preprint, blog). They are built on the robustness Python library.

Running the notebooks

Steps to run the notebooks (for now, a CUDA-capable GPU is required):

  • Clone this repository
  • Download our models from S3: CIFAR-10 and Restricted ImageNet (standard-trained models are also provided for comparison)
  • Create a models folder in the repository root and save the checkpoints there (a loading sketch follows this list)
  • Install the required packages with pip install -r requirements.txt
  • Edit user_constants.py to point to PyTorch-formatted versions of the CIFAR and ImageNet datasets
  • Start a Jupyter notebook server: jupyter notebook . --ip 0.0.0.0
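
Once the checkpoints are in place, they can be loaded through the robustness library's make_and_restore_model helper. Below is a minimal sketch for the CIFAR-10 model; the checkpoint filename models/cifar_robust.pt and the dataset path are placeholders for wherever you saved the downloads, and the ResNet-50 architecture is assumed:

    import torch
    from robustness import datasets, model_utils

    # Illustrative paths -- point these at your own copies.
    DATA_PATH = '/path/to/cifar'          # PyTorch-formatted CIFAR, as in user_constants.py
    CKPT_PATH = 'models/cifar_robust.pt'  # downloaded checkpoint (filename is hypothetical)

    # Wrap the dataset so the library knows the input size and normalization.
    ds = datasets.CIFAR(DATA_PATH)

    # Restore the trained model; make_and_restore_model returns (model, checkpoint).
    model, _ = model_utils.make_and_restore_model(
        arch='resnet50', dataset=ds, resume_path=CKPT_PATH)
    model = model.cuda()
    model.eval()

    # Sanity-check forward pass; the wrapped model returns (logits, final_input).
    with torch.no_grad():
        x = torch.rand(4, 3, 32, 32).cuda()
        logits, _ = model(x)
    print(logits.shape)  # expected: torch.Size([4, 10])
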

Citation

@article{engstrom2019learning,
    title={Learning Perceptually-Aligned Representations via Adversarial Robustness},
    author={Logan Engstrom and Andrew Ilyas and Shibani Santurkar and Dimitris Tsipras and Brandon Tran and Aleksander Madry},
    journal={arXiv preprint arXiv:1906.00945},
    year={2019}
}
