# An Analysis of Recent Advances in Deepfake Image Detection in an Evolving Threat Landscape (Official repo)

In this repository, we release the code, datasets, and model checkpoints for our paper "An Analysis of Recent Advances in Deepfake Image Detection in an Evolving Threat Landscape", accepted at IEEE S&P 2024.

To access the datasets and model checkpoints, please fill out the Google Form.
## Setup

```bash
git clone https://github.com/secml-lab-vt/EvolvingThreat-DeepfakeImageDetect.git
cd EvolvingThreat-DeepfakeImageDetect
conda env create --name env_name --file=env.yml
conda activate env_name
```
## Defenses

Installation, finetuning, and inference instructions for the 8 defenses we studied are in the `defenses` folder. Please follow the README file in `defenses`.
## Denoiser

We use the MM-BSN [CVPRW 2023] denoiser to obtain denoised images. Follow the instructions in the original repo for installation. To denoise images, run:

```bash
python test.py -c SIDD -g 0 --pretrained ./ckpt/SIDD_MMBSN_o_a45.pth --test_dir ./dataset/test_data --save_folder ./outputs/
```

- `--pretrained`: path to the pretrained denoiser model
- `--test_dir`: path to the images to be denoised
- `--save_folder`: output path for the denoised images
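The noise-extraction step in the next section pairs each original image with its denoised counterpart. A minimal sanity check that every image was denoised, assuming the two directories use matching filenames:

```python
import os

def check_pairs(orig_dir, denoised_dir):
    """Report original images that lack a denoised counterpart (matched by filename)."""
    orig = set(os.listdir(orig_dir))
    denoised = set(os.listdir(denoised_dir))
    missing = sorted(orig - denoised)
    if missing:
        print(f"{len(missing)} images missing a denoised counterpart, e.g. {missing[:5]}")
    else:
        print(f"All {len(orig)} images have denoised counterparts.")

check_pairs("./dataset/test_data", "./outputs/")
```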
## Content-agnostic detector

```bash
cd contentagnostic
```

Extract noise from images (a conceptual sketch of this step follows below):

```bash
python extractnoise.py --origpath <path to original images> --denpath <path to denoised images> --outputpath <path where image noise will be saved>
```

Training:

```bash
python traindct_w_noise.py --image_root <path to train images> --noise_image_root <path to noise of train images> --output_path <path to save trained model>
```

Inference:

```bash
python testdct_w_noise.py --fake_root <path to test fake> --real_root <path to test real> --noise_fake_root <path to noise of test fake images> --noise_real_root <path to noise of test real images> --model_path <path to trained model> --path_to_mean_std <path to saved mean and std values during training>
```
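Conceptually, the noise here is presumably the residual between an original image and its denoised version. A minimal sketch of that idea, assuming simple per-pixel subtraction (the actual `extractnoise.py` may compute and store it differently):

```python
import numpy as np
from PIL import Image

def noise_residual(orig_path, denoised_path):
    """Sketch: noise = original - denoised, per pixel."""
    orig = np.asarray(Image.open(orig_path).convert("RGB"), dtype=np.float32)
    denoised = np.asarray(Image.open(denoised_path).convert("RGB"), dtype=np.float32)
    return orig - denoised  # high-frequency residual removed by the denoiser

residual = noise_residual("originals/img1.png", "denoised/img1.png")
print(residual.shape, residual.mean())
```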
## Surrogate models

```bash
cd surrogatemodels
```

For finetuning CLIP-ResNet:

```bash
python train_clipresnet.py --lr 1e-3 --epochs 30
```

We provide similar finetuning scripts for EfficientNet and ViT in `surrogatemodels`.

CLIP-ResNet inference:

```bash
python infer_clipresnet.py --model_path <path to finetuned model> --input_path <path to test data>
```

We provide similar inference scripts for EfficientNet and ViT in `surrogatemodels`.
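For reference, finetuning CLIP-ResNet typically means attaching a binary (real/fake) classification head to CLIP's ResNet-50 image encoder. A minimal sketch of such a setup, assuming OpenAI's `clip` package (the actual `train_clipresnet.py` may differ in architecture and training details):

```python
import torch
import torch.nn as nn
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("RN50", device=device)

class CLIPResNetClassifier(nn.Module):
    """CLIP ResNet-50 image encoder with a binary real/fake head."""
    def __init__(self, clip_model):
        super().__init__()
        self.encoder = clip_model.visual
        self.head = nn.Linear(1024, 2)  # RN50 image embeddings are 1024-d

    def forward(self, x):
        # Match the encoder's dtype (fp16 on GPU, fp32 on CPU).
        feats = self.encoder(x.type(self.encoder.conv1.weight.dtype))
        return self.head(feats.float())

model = CLIPResNetClassifier(clip_model).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# A standard training loop over (image, label) batches would go here,
# with images preprocessed by `preprocess`.
```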
## Adversarial attack

```bash
cd adversarialattack/stylegan2-pytorch
```

Download the e4e encoder and place it inside the `adversarialattack/encoder4editing` folder.

Run our adversarial attack against the CLIP-ResNet surrogate classifier with the following command (a conceptual sketch of the objective follows below):

```bash
python adversarialattack_clipresnet.py --inputpath ./dataset/ --savepath ./outputs/ --plosscoeff 1.0 --classifiercoeff 0.02 --alpha 9.0 --beta 0.12 --lr 1e-3
```

- `--inputpath`: path to input images for adversarial manipulation
- `--savepath`: path to save the adversarial images
- `--plosscoeff`: perceptual loss coefficient. We always use 1.0
- `--classifiercoeff`: classifier loss coefficient. We use 0.1 for EfficientNet and ViT, and 0.02 for CLIP-ResNet
- `--lr`: we use a learning rate of 1e-3

Provide the path to the finetuned surrogate classifier in the script. We also provide similar scripts in `adversarialattack/stylegan2-pytorch` for running the adversarial attack with the EfficientNet and ViT surrogate deepfake classifiers.
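For intuition about the loss coefficients: the attack optimizes a StyleGAN2 latent (obtained with the e4e encoder) to minimize a weighted combination of a perceptual loss and a classifier evasion loss. A minimal sketch of that objective, where `generator`, `e4e_encode`, and `classifier` are assumed callables, and the `alpha`/`beta` terms of the actual script are omitted:

```python
import torch
import torch.nn.functional as F
import lpips  # pip install lpips

def attack(image, generator, e4e_encode, classifier,
           plosscoeff=1.0, classifiercoeff=0.02, lr=1e-3, steps=100):
    """Sketch: keep the image perceptually close to the original while
    pushing the surrogate classifier toward the 'real' label."""
    percep = lpips.LPIPS(net="vgg").to(image.device)
    w = e4e_encode(image).detach().requires_grad_(True)  # invert image to latent
    opt = torch.optim.Adam([w], lr=lr)
    real = torch.zeros(image.size(0), dtype=torch.long, device=image.device)
    for _ in range(steps):
        adv = generator(w)                      # image from current latent
        p_loss = percep(adv, image).mean()      # perceptual similarity
        c_loss = F.cross_entropy(classifier(adv), real)  # evasion loss
        loss = plosscoeff * p_loss + classifiercoeff * c_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(w).detach()
```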
## UnivConv2B

```bash
cd univconv2B
```

Training:

```bash
python train_univconv.py --epochs 30 --lr 1e-3
```

Inference:

```bash
python infer_univconv.py --model_path <path to finetuned model> --input_path <path to test data>
```
## Calculating KID

```bash
pip install clean-fid
cd Metrics
python calcKID.py --dir1 <path to first directory of images> --dir2 <path to second directory of images>
```

Provide the paths to the two image directories you want to calculate KID between. We followed the instructions from the original clean-fid repo.
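Alternatively, the clean-fid package exposes KID computation directly from Python:

```python
from cleanfid import fid

# KID between two folders of images (lower means more similar distributions).
score = fid.compute_kid("path/to/first_dir", "path/to/second_dir")
print(f"KID: {score}")
```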
## Citation

```bibtex
@inproceedings{abdullah2024analysis,
  title={An Analysis of Recent Advances in Deepfake Image Detection in an Evolving Threat Landscape},
  author={Abdullah, Sifat Muhammad and Cheruvu, Aravind and Kanchi, Shravya and Chung, Taejoong and Gao, Peng and Jadliwala, Murtuza and Viswanath, Bimal},
  booktitle={Proc. of IEEE S\&P},
  year={2024},
}
```