Inspired by this repo and ML Writing Month. Questions and discussions are most welcome!
Lil-log is the best blog I have ever read!
- [Adversarial Examples: Attacks and Defenses for Deep Learning] (TNNLS 2019)
- [Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey] (IEEE ACCESS 2018)
- [Adversarial Attacks and Defenses in Images, Graphs and Text: A Review] (2019)
- [A Study of Black Box Adversarial Attacks in Computer Vision] (2019)
- [Adversarial Examples in Modern Machine Learning: A Review] (2019)
- [Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey] (2020)
- [Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks] (TPAMI 2021)
- [Adversarial attack and defense in reinforcement learning-from AI security view] (2019)
- [A Survey of Privacy Attacks in Machine Learning] (2020)
- [Learning from Noisy Labels with Deep Neural Networks: A Survey] (2020)
- [Optimization for Deep Learning: An Overview] (2020)
- [Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review] (2020)
- [Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective] (2020)
- [Efficient Transformers: A Survey] (2020)
- [A Survey of Black-Box Adversarial Attacks on Computer Vision Models] (2019)
- [Backdoor Learning: A Survey] (2020)
- [Transformers in Vision: A Survey] (2020)
- [A Survey on Neural Network Interpretability] (2020)
- [Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses] (2020)
- [Recent Advances in Adversarial Training for Adversarial Robustness] (Our work, accepted by IJCAI 2021)
- [Explainable Artificial Intelligence Approaches: A Survey] (2021)
- [A Survey on Understanding, Visualizations, and Explanation of Deep Neural Networks] (2021)
- [A survey on Semi-, Self- and Unsupervised Learning for Image Classification] (2020)
- [Model Complexity of Deep Learning: A Survey] (2021)
- [Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models] (2021)
- [Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses] (2021)
- [Advances and Open Problems in Federated Learning] (2019)
- [Countering Malicious DeepFakes: Survey, Battleground, and Horizon] (2021)
- [Intriguing properties of neural networks] (ICLR)
- [Identifying and attacking the saddle point problem in high-dimensional non-convex optimization] (arXiv)
- [The limitations of deep learning in adversarial settings] (EuroS&P)
- [DeepFool] (CVPR)
- [Towards evaluating the robustness of neural networks] (C&W, S&P)
- [Transferability in machine learning: from phenomena to black-box attacks using adversarial samples] (arXiv)
- [Adversarial Images for Variational Autoencoders] (NIPS)
- [A boundary tilting perspective on the phenomenon of adversarial examples] (arXiv)
- [Adversarial examples in the physical world] (arXiv)
- [Delving into Transferable Adversarial Examples and Black-box Attacks] (ICLR)
- [Universal Adversarial Perturbations] (CVPR)
- [Adversarial Examples for Semantic Segmentation and Object Detection] (ICCV)
- [Adversarial Examples that Fool Detectors] (arXiv)
- [A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection] (CVPR)
- [Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics] (ICCV)
- [Adversarial examples are not easily detected: Bypassing ten detection methods] (AISec)
- [Universal Adversarial Perturbations Against Semantic Image Segmentation] (ICCV) [UNIVERSAL]
- [Adversarial Machine Learning at Scale] (ICLR)
- [The space of transferable adversarial examples] (arXiv)
- [Adversarial attacks on neural network policies] (arXiv)
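Most of the white-box attacks above share the same gradient-ascent template, e.g. the iterative FGSM (BIM) used in [Adversarial examples in the physical world] and [Adversarial Machine Learning at Scale]. A minimal PyTorch sketch (not taken from any of these papers; `model`, the labeled batch `(x, y)` in `[0, 1]`, and the hyperparameters are placeholders):

```python
# Iterative FGSM (BIM) sketch -- illustration only, not code from the listed papers.
# Assumptions: `model` is a classifier in eval mode, `x` is an image batch in [0, 1],
# `y` holds the true labels; eps/alpha/steps are placeholder hyperparameters.
import torch
import torch.nn.functional as F

def iterative_fgsm(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss along the gradient sign, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```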
- [Generating Natural Adversarial Examples] (ICLR)
- [Constructing Unrestricted Adversarial Examples with Generative Models] (NeurIPS)
- [Generating Adversarial Examples with Adversarial Networks] (IJCAI)
- [Generative Adversarial Perturbations] (CVPR)
- [Learning to Attack: Adversarial transformation networks] (AAAI)
- [Learning Universal Adversarial Perturbations with Generative Models] (S&P)
- [Robust physical-world attacks on deep learning visual classification] (CVPR)
- [Spatially Transformed Adversarial Examples] (ICLR)
- [Boosting Adversarial Attacks With Momentum] (CVPR)
- [Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples] (ICML)
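[Boosting Adversarial Attacks With Momentum] stabilizes the iterative loop above by accumulating a normalized gradient (MI-FGSM), which also improves transferability. A sketch of that variant under the same assumptions (hyperparameters again placeholders):

```python
# MI-FGSM-style momentum variant -- sketch only; same assumptions as the block above,
# plus an NCHW image batch so the L1 norm is taken over the (C, H, W) dimensions.
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8/255, alpha=2/255, steps=10, mu=1.0):
    g = torch.zeros_like(x)          # running momentum of normalized gradients
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize the current gradient by its L1 norm, then fold it into the momentum.
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```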
- [Art of Singular Vectors and Universal Adversarial Perturbations] (CVPR) [UNIVERSAL]
- [Adversarial Spheres] (arXiv)
- [Characterizing adversarial examples based on spatial consistency information for semantic segmentation] (ECCV)
- [Generating natural language adversarial examples] (arXiv)
- [Audio adversarial examples: Targeted attacks on speech-to-text] (S&P)
- [Adversarial attack on graph structured data] (arXiv)
- [Maximal Jacobian-based Saliency Map Attack (Variants of JSMA)] (arXiv)
- [Exploiting Unintended Feature Leakage in Collaborative Learning] (S&P)
- [Feature Space Perturbations Yield More Transferable Adversarial Examples] (CVPR)
- [The Limitations of Adversarial Training and the Blind-Spot Attack] (ICLR)
- [Are adversarial examples inevitable?] (ICLR)
- [One pixel attack for fooling deep neural networks] (IEEE TEC)
- [Generalizable Adversarial Attacks Using Generative Models] (arXiv)
- [NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks] (ICML)
- [SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing] (arXiv)
- [Rob-GAN: Generator, Discriminator, and Adversarial Attacker] (CVPR)
- [Cycle-Consistent Adversarial GAN: the integration of adversarial attack and defense] (arXiv)
- [Generating Realistic Unrestricted Adversarial Inputs using Dual-Objective GAN Training] (arXiv)
- [Sparse and Imperceivable Adversarial Attacks] (ICCV)
- [Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions] (arXiv)
- [Joint Adversarial Training: Incorporating both Spatial and Pixel Attacks] (arXiv)
- [Transferable Adversarial Attacks for Image and Video Object Detection] (IJCAI)
- [Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations] (TPAMI)
- [Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses] (CVPR)
- [FDA: Feature Disruptive Attack] (CVPR)
- [SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations] (arXiv)
- [SparseFool: a few pixels make a big difference] (CVPR)
- [Adversarial Attacks on Graph Neural Networks via Meta Learning] (ICLR)
- [Deep Leakage from Gradients] (NeurIPS)
- [Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning] (CCS)
- [Universal Perturbation Attack Against Image Retrieval] (ICCV)
- [Enhancing Adversarial Example Transferability with an Intermediate Level Attack] (ICCV)
- [Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks] (CVPR)
- [ADef: an Iterative Algorithm to Construct Adversarial Deformations] (ICLR)
- [iDLG: Improved deep leakage from gradients] (NeurIPS)
- [Reversible Adversarial Attack based on Reversible Image Transformation] (arXiv)
- [Seeing isn't Believing: Towards More Robust Adversarial Attack Against Real World Object Detectors] (CCS)
- [Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder] (NeurIPS)
- [Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking] (ICLR)
- [Sponge Examples: Energy-Latency Attacks on Neural Networks] (arXiv)
- [Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack] (ICML)
- [Stronger and Faster Wasserstein Adversarial Attacks] (ICML)
- [QEBA: Query-Efficient Boundary-Based Blackbox Attack] (CVPR)
- [New Threats Against Object Detector with Non-local Block] (ECCV)
- [Towards Imperceptible Universal Attacks on Texture Recognition] (arXiv)
- [Frequency-Tuned Universal Adversarial Attacks] (ECCV)
- [Learning Transferable Adversarial Examples via Ghost Networks] (AAAI)
- [SPARK: Spatial-aware Online Incremental Attack Against Visual Tracking] (ECCV)
- [Inverting Gradients - How easy is it to break privacy in federated learning?] (NeurIPS)
- [Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks] (ICLR)
- [On Adaptive Attacks to Adversarial Example Defenses] (NeurIPS)
- [Beyond Digital Domain: Fooling Deep Learning Based Recognition System in Physical World] (AAAI)
- [Adversarial Color Enhancement: Generating Unrestricted Adversarial Images by Optimizing a Color Filter] (arXiv)
- [Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles] (CVPR)
- [Universal Physical Camouflage Attacks on Object Detectors] [code] (CVPR)
- [Understanding Object Detection Through An Adversarial Lens] (arXiv)
- [Can Adversarial Weight Perturbations Inject Neural Backdoors?] (CIKM)
- [Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers] (ICCV)
- [On Generating Transferable Targeted Perturbations] (arXiv)
- [See through Gradients: Image Batch Recovery via GradInversion] (CVPR)
- [Admix: Enhancing the Transferability of Adversarial Attacks] (arXiv)
- [Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks] (arXiv)
- [Poisoning the Unlabeled Dataset of Semi-Supervised Learning] (Carlini) (arXiv)
- [AdvHaze: Adversarial Haze Attack] (arXiv)
- [LAFEAT: Piercing Through Adversarial Defenses with Latent Features] (CVPR)
- [IMPERCEPTIBLE ADVERSARIAL EXAMPLES FOR FAKE IMAGE DETECTION] (arXiv)
- [TRANSFERABLE ADVERSARIAL EXAMPLES FOR ANCHOR FREE OBJECT DETECTION] (ICME)
- [Unlearnable Examples: Making Personal Data Unexploitable] (ICLR)
- [Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them] (ICMLW)
- [Mischief: A Simple Black-Box Attack Against Transformer Architectures] (arXiv)
- [Patch-wise Attack for Fooling Deep Neural Network] (ECCV)
- [Naturalistic Physical Adversarial Patch for Object Detectors] (ICCV)
- [Natural Adversarial Examples] (CVPR)
- [WaNet - Imperceptible Warping-based Backdoor Attack] (ICLR)
- [ON IMPROVING ADVERSARIAL TRANSFERABILITY OF VISION TRANSFORMERS] (ICLR)
- [Decision-based Adversarial Attack with Frequency Mixup] (TIFS)
- [Learning with a strong adversary]
- [IMPROVING BACK-PROPAGATION BY ADDING AN ADVERSARIAL GRADIENT]
- [Distributional Smoothing with Virtual Adversarial Training]
- [Countering Adversarial Images using Input Transformations] (arXiv)
- [SafetyNet: Detecting and Rejecting Adversarial Examples Robustly] (ICCV)
- [Detecting adversarial samples from artifacts] (arXiv)
- [On Detecting Adversarial Perturbations] (ICLR)
- [Practical black-box attacks against machine learning] (AsiaCCS)
- [The space of transferable adversarial examples] (arXiv)
- [Adversarial Examples for Semantic Segmentation and Object Detection] (ICCV)
- [Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models] (ICLR)
- [Ensemble Adversarial Training: Attacks and Defences] (ICLR)
- [Defense Against Universal Adversarial Perturbations] (CVPR)
- [Deflecting Adversarial Attacks With Pixel Deflection] (CVPR)
- [Virtual adversarial training: a regularization method for supervised and semi-supervised learning] (TPAMI)
- [Adversarial Logit Pairing] (arXiv)
- [Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser] (CVPR)
- [Evaluating and understanding the robustness of adversarial logit pairing] (arXiv)
- [Machine Learning with Membership Privacy Using Adversarial Regularization] (CCS)
- [On the robustness of the cvpr 2018 white-box adversarial example defenses] (arXiv)
- [Thermometer Encoding: One Hot Way To Resist Adversarial Examples] (ICLR)
- [Curriculum Adversarial Training] (IJCAI)
- [Countering Adversarial Images using Input Transformations] (ICLR)
- [Towards Deep Learning Models Resistant to Adversarial Attacks] (ICLR)
- [Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing Their Input Gradients] (AAAI)
- [Adversarially robust generalization requires more data] (NIPS)
- [Is robustness the cost of accuracy? - A comprehensive study on the robustness of 18 deep image classification models] (arXiv)
- [Robustness may be at odds with accuracy] (arXiv)
- [PIXELDEFEND: LEVERAGING GENERATIVE MODELS TO UNDERSTAND AND DEFEND AGAINST ADVERSARIAL EXAMPLES] (ICLR)
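[Towards Deep Learning Models Resistant to Adversarial Attacks] formulates defense as a min-max problem: an inner loop finds a worst-case perturbation with PGD, and the outer loop trains on it. A minimal PyTorch sketch of that recipe (the `pgd` helper, `loader`, `optimizer`, and all hyperparameters are illustrative assumptions, not the authors' code):

```python
# PGD adversarial training sketch -- illustration of the min-max recipe, not the
# authors' implementation. `model`, `loader`, `optimizer`, and all hyperparameters
# are assumptions; inputs are image batches in [0, 1].
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Inner maximization: random start in the eps-ball, then sign-gradient ascent.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def train_epoch(model, loader, optimizer, device="cuda"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd(model, x, y)                  # inner maximization
        loss = F.cross_entropy(model(x_adv), y)   # outer minimization on the adversarial batch
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```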
- [Adversarial Training and Robustness for Multiple Perturbations] (NIPS)
- [Adversarial Robustness through Local Linearization] (NIPS)
- [Retrieval-Augmented Convolutional Neural Networks against Adversarial Examples] (CVPR)
- [Feature Denoising for Improving Adversarial Robustness] (CVPR)
- [A New Defense Against Adversarial Images: Turning a Weakness into a Strength] (NeurIPS)
- [Interpreting Adversarially Trained Convolutional Neural Networks] (ICML)
- [Robustness May Be at Odds with Accuracy] (ICLR)
- [Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss] (IJCAI)
- [Adversarial Examples Are a Natural Consequence of Test Error in Noise] (ICML)
- [On the Connection Between Adversarial Robustness and Saliency Map Interpretability] (ICML)
- [Metric Learning for Adversarial Robustness] (NeurIPS)
- [Defending Adversarial Attacks by Correcting logits] (arXiv)
- [Adversarial Learning With Margin-Based Triplet Embedding Regularization] (ICCV)
- [CIIDefence: Defeating Adversarial Attacks by Fusing Class-Specific Image Inpainting and Image Denoising] (ICCV)
- [Adversarial Examples Are Not Bugs, They Are Features] (NIPS)
- [Using Pre-Training Can Improve Model Robustness and Uncertainty] (ICML)
- [Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training] (NIPS)
- [Improving Adversarial Robustness via Guided Complement Entropy] (ICCV)
- [Robust Attribution Regularization] (NIPS)
- [Are Labels Required for Improving Adversarial Robustness?] (NIPS)
- [Theoretically Principled Trade-off between Robustness and Accuracy] (ICLR)
- [Adversarial defense by stratified convolutional sparse coding] (CVPR)
- [On the Convergence and Robustness of Adversarial Training] (ICML)
- [Robustness via Curvature Regularization, and Vice Versa] (CVPR)
- [ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples] (CVPR)
- [Improving Adversarial Robustness via Promoting Ensemble Diversity] (ICML)
- [Towards the first adversarially robust neural network model on MNIST] (ICML)
- [Unlabeled Data Improves Adversarial Robustness] (NIPS)
- [Evaluating Robustness of Deep Image Super-Resolution Against Adversarial Attacks] (ICCV)
- [Improving adversarial robustness of ensembles with diversity training] (arXiv)
- [Adversarial Robustness Against the Union of Multiple Perturbation Models] (ICML)
- [Robustness to Adversarial Perturbations in Learning from Incomplete Data] (NIPS)
- [Adversarial training can hurt generalization] (arXiv)
- [Adversarial training for free!] (NIPS)
- [Improving the generalization of adversarial training with domain adaptation] (ICLR)
- [Disentangling Adversarial Robustness and Generalization] (CVPR)
- [Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks] (ICCV)
- [Rademacher Complexity for Adversarially Robust Generalization] (ICML)
- [Adversarially Robust Generalization Just Requires More Unlabeled Data] (arXiv)
- [You only propagate once: Accelerating adversarial training via maximal principle] (arXiv)
- [Cross-Domain Transferability of Adversarial Perturbations] (NIPS)
- [Adversarial Robustness as a Prior for Learned Representations] (arXiv)
- [Structured Adversarial Attack: Towards General Implementation and Better Interpretability] (ICLR)
- [Defensive Quantization: When Efficiency Meets Robustness] (ICLR)
- [A New Defense Against Adversarial Images: Turning a Weakness into a Strength] (NeurIPS)
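[Theoretically Principled Trade-off between Robustness and Accuracy] (TRADES) instead trades off a clean cross-entropy term against a KL term that keeps predictions on perturbed inputs close to the clean ones. A sketch of that loss under the same assumptions as the blocks above (all hyperparameters are placeholders, not the authors' settings):

```python
# TRADES-style robust loss -- sketch only, not the authors' code. Assumptions:
# `model` is a classifier, `(x, y)` is an image batch in [0, 1] with labels.
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, eps=8/255, alpha=2/255, steps=10, beta=6.0):
    # Inner maximization: perturb x to maximize KL(model(x_adv) || model(x)).
    p_clean = F.softmax(model(x), dim=1).detach()
    x_adv = (x + 1e-3 * torch.randn_like(x)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean, reduction="batchmean")
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    # Outer minimization: natural loss plus beta times the robust regularizer.
    logits_clean = model(x)
    robust_kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                         F.softmax(logits_clean, dim=1),
                         reduction="batchmean")
    return F.cross_entropy(logits_clean, y) + beta * robust_kl
```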
- [Jacobian Adversarially Regularized Networks for Robustness] (ICLR)
- [What it Thinks is Important is Important: Robustness Transfers through Input Gradients] (CVPR)
- [Adversarially Robust Representations with Smooth Encoders] (ICLR)
- [Heat and Blur: An Effective and Fast Defense Against Adversarial Examples] (arXiv)
- [Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference] (ICML)
- [Wavelet Integrated CNNs for Noise-Robust Image Classification] (CVPR)
- [Deflecting Adversarial Attacks] (arXiv)
- [Robust Local Features for Improving the Generalization of Adversarial Training] (ICLR)
- [Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier] (ICLR)
- [A Self-supervised Approach for Adversarial Robustness] (CVPR)
- [Improving Adversarial Robustness Requires Revisiting Misclassified Examples] (ICLR)
- [Manifold regularization for adversarial robustness] (arXiv)
- [DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles] (NeurIPS)
- [A Closer Look at Accuracy vs. Robustness] (arXiv)
- [Energy-based Out-of-distribution Detection] (NeurIPS)
- [Out-of-Distribution Generalization via Risk Extrapolation (REx)] (arXiv)
- [Adversarial Examples Improve Image Recognition] (CVPR)
- [Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks] (ICML)
- [Efficiently Learning Adversarially Robust Halfspaces with Noise] (ICML)
- [Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability] (ICML)
- [Friendly Adversarial Training: Attacks Which Do Not Kill Training Make Adversarial Learning Stronger] (ICML)
- [Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization] (ICML)
- [Overfitting in adversarially robust deep learning] (ICML)
- [Proper Network Interpretability Helps Adversarial Robustness in Classification] (ICML)
- [Randomization matters. How to defend against strong adversarial attacks] (ICML)
- [Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks] (ICML)
- [Towards Understanding the Regularization of Adversarial Robustness on Neural Networks] (ICML)
- [Defending Against Universal Attacks Through Selective Feature Regeneration] (CVPR)
- [Understanding and improving fast adversarial training] (arXiv)
- [Cat: Customized adversarial training for improved robustness] (arXiv)
- [MMA Training: Direct Input Space Margin Maximization through Adversarial Training] (ICLR)
- [Bridging the performance gap between fgsm and pgd adversarial training] (arXiv)
- [Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization] (CVPR)
- [Towards understanding fast adversarial training] (arXiv)
- [Overfitting in adversarially robust deep learning] (arXiv)
- [Regularizers for single-step adversarial training] (arXiv)
- [Single-step adversarial training with dropout scheduling] (CVPR)
- [Fast is better than free: Revisiting adversarial training] (arXiv)
- [On the Generalization Properties of Adversarial Training] (arXiv)
- [Adversarially robust transfer learning] (ICLR)
- [On Saliency Maps and Adversarial Robustness] (arXiv)
- [On Detecting Adversarial Inputs with Entropy of Saliency Maps] (arXiv)
- [Detecting Adversarial Perturbations with Saliency] (arXiv)
- [Detection Defense Against Adversarial Attacks with Saliency Map] (arXiv)
- [Model-based Saliency for the Detection of Adversarial Examples] (arXiv)
- [Auxiliary Training: Towards Accurate and Robust Models] (CVPR)
- [Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations] (CVPR)
- [Test-Time Training with Self-Supervision for Generalization under Distribution Shifts] (ICML)
- [Improving robustness against common corruptions by covariate shift adaptation] (NeurIPS)
- [Gotta Catch'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks] (CCS)
- [A simple way to make neural networks robust against diverse image corruptions] (ECCV)
- [Role of Spatial Context in Adversarial Robustness for Object Detection] (CVPRW)
- [Local Gradients Smoothing: Defense against localized adversarial attacks] (WACV)
- [Adversarial Weight Perturbation Helps Robust Generalization] (NeurIPS)
- [DIPDefend: Deep Image Prior Driven Defense against Adversarial Examples] (MM)
- [Adversarial Data Augmentation via Deformation Statistics] (ECCV)
- [On the Limitations of Denoising Strategies as Adversarial Defenses] (arXiv)
- [Understanding catastrophic overfitting in single-step adversarial training] (AAAI)
- [Bag of tricks for adversarial training] (ICLR)
- [Bridging the Gap Between Adversarial Robustness and Optimization Bias] (arXiv)
- [Perceptual Adversarial Robustness: Defense Against Unseen Threat Models] (ICLR)
- [Adversarial Robustness through Disentangled Representations] (AAAI)
- [Understanding Robustness of Transformers for Image Classification] (arXiv)
- [Adversarial Robustness under Long-Tailed Distribution] (CVPR)
- [Adversarial Attacks are Reversible with Natural Supervision] (arXiv)
- [Attribute-Guided Adversarial Training for Robustness to Natural Perturbations] (AAAI)
- [LEARNING PERTURBATION SETS FOR ROBUST MACHINE LEARNING] (ICLR)
- [Improving Adversarial Robustness via Channel-wise Activation Suppressing] (ICLR)
- [Efficient Certification of Spatial Robustness] (AAAI)
- [Domain Invariant Adversarial Learning] (arXiv)
- [Learning Defense Transformers for Counterattacking Adversarial Examples] (arXiv)
- [ONLINE ADVERSARIAL PURIFICATION BASED ON SELF-SUPERVISED LEARNING] (ICLR)
- [Removing Adversarial Noise in Class Activation Feature Space] (arXiv)
- [Improving Adversarial Robustness Using Proxy Distributions] (arXiv)
- [Decoder-free Robustness Disentanglement without (Additional) Supervision] (arXiv)
- [Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks] (arXiv)
- [Reversible Adversarial Attack based on Reversible Image Transformation] (arXiv)
- [Towards Corruption-Agnostic Robust Domain Adaptation] (arXiv)
- [Adversarially Trained Models with Test-Time Covariate Shift Adaptation] (arXiv)
- [COVARIATE SHIFT ADAPTATION FOR ADVERSARIALLY ROBUST CLASSIFIER] (ICLR workshop)
- [Self-Supervised Adversarial Example Detection by Disentangled Representation] (arXiv)
- [Adversarial Defence by Diversified Simultaneous Training of Deep Ensembles] (AAAI)
- [Understanding Catastrophic Overfitting in Adversarial Training] (arXiv)
- [Towards Corruption-Agnostic Robust Domain Adaptation] (ACM Trans. Multimedia Comput. Commun. Appl.)
- [TENT: FULLY TEST-TIME ADAPTATION BY ENTROPY MINIMIZATION] (ICLR)
- [Attacking Adversarial Attacks as A Defense] (arXiv)
- [Adversarial purification with Score-based generative models] (ICML)
- [Adversarial Visual Robustness by Causal Intervention] (arXiv)
- [MaxUp: Lightweight Adversarial Training With Data Augmentation Improves Neural Network Training] (CVPR)
- [AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning] (MM)
- [Robust and Accurate Object Detection via Adversarial Learning] (CVPR)
- [Markpainting: Adversarial Machine Learning meets Inpainting] (arXiv)
- [EFFICIENT CERTIFIED DEFENSES AGAINST PATCH ATTACKS ON IMAGE CLASSIFIERS] (ICLR)
- [Towards Robust Vision Transformer] (arXiv)
- [Reveal of Vision Transformers Robustness against Adversarial Attacks] (arXiv)
- [Intriguing Properties of Vision Transformers] (arXiv)
- [Vision transformers are robust learners] (arXiv)
- [On Improving Adversarial Transferability of Vision Transformers] (arXiv)
- [On the adversarial robustness of visual transformers] (arXiv)
- [On the robustness of vision transformers to adversarial examples] (arXiv)
- [Regional Adversarial Training for Better Robust Generalization] (arXiv)
- [DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks] (CCS)
- [MODELLING ADVERSARIAL NOISE FOR ADVERSARIAL DEFENSE] (arXiv)
- [Adversarial Example Detection Using Latent Neighborhood Graph] (ICCV)
- [Identification of Attack-Specific Signatures in Adversarial Examples] (arXiv)
- [How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?] (NeurIPS)
- [Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs] (arXiv)
- [Detecting Adversarial Patch Attacks through Global-local Consistency] (ADVM)
- [Can Shape Structure Features Improve Model Robustness under Diverse Adversarial Settings?] (ICCV)
- [Undistillable: Making A Nasty Teacher That CANNOT teach students] (ICLR)
- [Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better] (ICCV)
- [Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart] (arXiv)
- [Consistency Regularization for Adversarial Robustness] (arXiv)
- [CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection] (ICML)
- [Adversarial Neuron Pruning Purifies Backdoored Deep Models] (NeurIPS)
- [Towards Understanding the Generative Capability of Adversarially Robust Classifiers] (ICCV)
- [Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training] (NeurIPS)
- [Data Augmentation Can Improve Robustness] (NeurIPS)
- [When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?] (NeurIPS)
- [$\alpha$ Weighted Federated Adversarial Training] (arXiv)
- [Safe Distillation Box] (AAAI)
- [Transferring Adversarial Robustness Through Robust Representation Matching] (USENIX)
- [Robustness and Accuracy Could Be Reconcilable by (Proper) Definition] (arXiv)
- [IMPROVING ADVERSARIAL DEFENSE WITH SELF SUPERVISED TEST-TIME FINE-TUNING] (arXiv)
- [Exploring Memorization in Adversarial Training] (arXiv)
- [Open-Set Adversarial Defense with Clean-Adversarial Mutual Learning] (IJCV)
- [Adversarial Detection and Correction by Matching Prediction Distribution] (arXiv)
- [Enhancing Adversarial Training with Feature Separability] (arXiv)
- [An Eye for an Eye: Defending against Gradient-based Attacks with Gradients] (arXiv)
- [CVAE-GAN: Fine-Grained Image Generation Through Asymmetric Training] (ICCV 2017)
- [Autoencoding beyond pixels using a learned similarity metric] (ICML 2016)
- [Natural Adversarial Examples] (arXiv 2019)
- [Conditional Image Synthesis with Auxiliary Classifier GANs] (ICML 2017)
- [SinGAN: Learning a Generative Model From a Single Natural Image] (ICCV 2019)
- [Robust And Interpretable Blind Image Denoising Via Bias-Free Convolutional Neural Networks] (ICLR 2020)
- [Pay Attention to Features, Transfer Learn Faster CNNs] (ICLR 2020)
- [On Robustness of Neural Ordinary Differential Equations] (ICLR 2020)
- [Real Image Denoising With Feature Attention] (ICCV 2019)
- [Multi-Scale Dense Networks for Resource Efficient Image Classification] (ICLR 2018)
- [Rethinking Data Augmentation: Self-Supervision and Self-Distillation] (arXiv 2019)
- [Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation] (ICCV 2019)
- [Adversarially Robust Distillation] (arXiv 2019)
- [Knowledge Distillation from Internal Representations] (arXiv 2019)
- [Contrastive Representation Distillation] (ICLR 2020)
- [Faster Neural Networks Straight from JPEG] (NIPS 2018)
- [A Closer Look at Double Backpropagation] (arXiv 2019)
- [Learning Deep Features for Discriminative Localization] (CVPR 2016)
- [Noise2Self: Blind Denoising by Self-Supervision] (ICML 2019)
- [Supervised Contrastive Learning] (arXiv 2020)
- [High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks] (CVPR 2020)
- [Counterfactual Fairness] (NIPS 2017)
- [An Adversarial Approach for Explaining the Predictions of Deep Neural Networks] (arXiv 2020)
- [Rich feature hierarchies for accurate object detection and semantic segmentation] (CVPR 2014)
- [Spectral Normalization for Generative Adversarial Networks] (ICLR 2018)
- [MetaGAN: An Adversarial Approach to Few-Shot Learning] (NIPS 2018)
- [Breaking the cycle -- Colleagues are all you need] (arXiv 2019)
- [LOGAN: Latent Optimisation for Generative Adversarial Networks] (arXiv 2019)
- [Margin-aware Adversarial Domain Adaptation with Optimal Transport] (ICML 2020)
- [Representation Learning Using Adversarially-Contrastive Optimal Transport] (ICML 2020)
- [Free Lunch for Few-shot Learning: Distribution Calibration] (ICLR 2021)
- [Unprocessing Images for Learned Raw Denoising] (CVPR 2019)
- [Image Quality Assessment: Unifying Structure and Texture Similarity] (TPAMI 2020)
- [Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion] (CVPR 2020)
- [WHAT SHOULD NOT BE CONTRASTIVE IN CONTRASTIVE LEARNING] (ICLR 2021)
- [MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaption] (arXiv)
- [UNSUPERVISED DOMAIN ADAPTATION THROUGH SELF-SUPERVISION] (arXiv)
- [Estimating Example Difficulty using Variance of Gradients] (arXiv)
- [Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources] (ICML 2020)
- [DATASET DISTILLATION] (arXiv)
- [Debugging Differential Privacy: A Case Study for Privacy Auditing] (arXiv 2022)
- [Adversarial Robustness and Catastrophic Forgetting] (arXiv)