"Neural Computing and Applications" Published Paper (2023)
Evaluating CNN robustness against various adversarial attacks, including FGSM and PGD.
This work focuses on enhancing the robustness of targeted classifier models against adversarial attacks. To achieve this, a convolutional autoencoder-based approach is employed that effectively counters adversarial perturbations introduced into the input images.
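The defense described above can be sketched in a minimal form: a denoising autoencoder is trained to map perturbed inputs back toward their clean versions, and the classifier then consumes the reconstruction. The tiny linear autoencoder and synthetic low-rank data below are purely illustrative stand-ins for the paper's convolutional model and image data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "clean images": 256 samples living in a 4-d subspace of R^8
# (an illustrative stand-in for real image data).
mixing = rng.normal(size=(4, 8)) / 2.0
clean = rng.normal(size=(256, 4)) @ mixing
# Perturbed inputs, standing in for adversarially noised images.
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# Tiny linear autoencoder: encode to 4 dims, decode back to 8.
W = 0.1 * rng.normal(size=(8, 4))    # encoder weights
V = 0.1 * rng.normal(size=(4, 8))    # decoder weights
lr = 0.02
for _ in range(1000):
    z = noisy @ W                     # encode the perturbed input
    recon = z @ V                     # decode a reconstruction
    err = recon - clean               # denoising objective: match clean
    V -= lr * z.T @ err / len(clean)
    W -= lr * noisy.T @ (err @ V.T) / len(clean)

# At inference time the classifier would be fed this reconstruction
# instead of the raw (possibly adversarial) input.
recon = (noisy @ W) @ V
```

Training the reconstruction toward the clean target is what lets the autoencoder absorb much of the perturbation before the image ever reaches the classifier.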
Adversarial network attacks (PGD, pixel, FGSM) applying noise to the MNIST image dataset, using Python (PyTorch)
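For reference, FGSM (the simplest of the attacks named above) is a one-step method: perturb each input coordinate by epsilon in the direction of the sign of the loss gradient. A toy logistic-regression target in plain NumPy serves as an illustrative stand-in for the CNN/PyTorch setups listed here.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM: one step of size epsilon along the sign of the input gradient."""
    return x + epsilon * np.sign(grad)

# Toy logistic-regression "model" (illustrative, not from any listed repo).
rng = np.random.default_rng(0)
w = rng.normal(size=8)    # fixed model weights
x = rng.normal(size=8)    # one flattened input
y = 1.0                   # its true label

def input_grad(x):
    # Gradient of the binary cross-entropy loss w.r.t. the input x.
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

x_adv = fgsm_perturb(x, input_grad(x), epsilon=0.1)
# The perturbation is bounded by epsilon in the L-infinity norm.
```

The single sign step makes FGSM cheap but weaker than iterative attacks such as PGD.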
Adversarial defense by retrieval-based methods
Developed robust image classification models to mitigate the effect of adversarial attacks
A hybrid classical-quantum neural network with adversarial defense protection
Implementations for several white-box and black-box attacks.
A classical or convolutional neural network model with adversarial defense protection
An ASR (Automatic Speech Recognition) adversarial attack repository.
Vanilla training and adversarial training in PyTorch
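The distinction in this entry can be sketched as follows: vanilla training fits the clean examples, while adversarial training regenerates FGSM examples against the current model every epoch and fits those instead. A NumPy logistic-regression toy is used here purely for illustration; the listed repo works in PyTorch on real data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # toy data (illustrative)
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)       # linearly separable labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(adversarial, epsilon=0.1, lr=0.5, epochs=300):
    w = np.zeros(5)
    for _ in range(epochs):
        Xb = X
        if adversarial:
            # Adversarial training: craft FGSM examples against the
            # current weights and fit them instead of the clean batch.
            grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]
            Xb = X + epsilon * np.sign(grad_x)
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)  # logistic-regression step
    return w

w_vanilla = train(adversarial=False)
w_robust = train(adversarial=True)
```

Because the adversarial examples are recomputed against the evolving model, the robust variant is trained on a moving worst case rather than a fixed noisy dataset.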
Implementation of the PGD attack on a model trained on the CIFAR-10 dataset in TensorFlow. The FID between the original and generated images is also calculated.
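PGD generalizes FGSM into an iterative attack: take repeated small gradient-sign steps and, after each one, project the iterate back into the epsilon L-infinity ball around the original input. A minimal NumPy sketch against a toy logistic-regression target (illustrative only; the listed repo attacks a CIFAR-10 model in TensorFlow):

```python
import numpy as np

def pgd_attack(x0, grad_fn, epsilon=0.1, alpha=0.02, steps=10):
    """PGD: repeated small gradient-sign steps, each followed by a
    projection back into the epsilon L-infinity ball around x0."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))         # FGSM-style step
        x = np.clip(x, x0 - epsilon, x0 + epsilon)  # projection
    return x

# Toy logistic-regression target (illustrative, not from the repo).
rng = np.random.default_rng(1)
w = rng.normal(size=8)    # fixed model weights
x0 = rng.normal(size=8)   # one flattened input
y = 1.0                   # its true label

def input_grad(x):
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w    # gradient of the BCE loss w.r.t. the input

x_adv = pgd_attack(x0, input_grad)
```

The clip step is what makes this "projected" gradient descent: the total perturbation can never exceed epsilon per coordinate, no matter how many steps are taken.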