🛡 A set of adversarial attacks in PyTorch
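As a point of reference, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) in plain PyTorch, the kind of one-step attack such libraries bundle. This is a generic illustration, not this repository's actual API; the model and inputs are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=8 / 255):
    """One-step FGSM: perturb inputs along the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    adv = images + eps * images.grad.sign()
    return adv.clamp(0, 1).detach()
```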
A reading list for large-model safety, security, and privacy (including Awesome LLM Security, Awesome LLM Safety, and similar lists).
Backdoor-related resources in the AI/ML domain.
Engineered to help red teams and penetration testers exploit vulnerabilities in large language model (LLM) solutions.
Beacon Object File (BOF) launcher - a library for executing BOF files in C/C++/Zig applications.
Awesome-DL-Security-and-Privacy-Papers
Source code of "TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification", ACL 2024 (Findings).
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
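A hedged sketch of how ART is typically driven: wrap a framework model in an ART estimator, then run an evasion attack against it. Exact keyword arguments can vary across ART versions, and the toy model and random batch below are assumptions for illustration only.

```python
import numpy as np
import torch.nn as nn
import torch.optim as optim
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy classifier standing in for a real model (an assumption, not from ART itself).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# Wrap the PyTorch model so ART attacks can query gradients and predictions.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Craft evasion examples with FGSM at eps=0.2 on a placeholder batch.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x = np.random.rand(8, 1, 28, 28).astype(np.float32)
x_adv = attack.generate(x=x)
```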
A PyTorch library for security research on speaker recognition, released with the paper "Towards Understanding and Mitigating Audio Adversarial Examples for Speaker Recognition", accepted by TDSC.
A self-supervised approach to developing latent-space embeddings for molecules, with an analysis of graph adversarial attacks.
A page where I host my work on Adversarial Machine Learning
School AI semester project
Official implementation of the paper "Increasing Confidence in Adversarial Robustness Evaluations"
Source code for the BlackboxNLP 2024 @ EMNLP paper "Enhancing adversarial robustness in Natural Language Inference using explanations".
Adversarial attacks and defenses
On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems
Official PyTorch implementation of "Scanning Trojaned Models Using Out-of-Distribution Samples" (NeurIPS 2024)
The all-in-one tool for comprehensive experimentation with adversarial attacks on image recognition.
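For a sense of what such experimentation involves, here is a sketch of projected gradient descent (PGD), a standard iterative attack on image classifiers. This is a generic implementation under the usual L-infinity threat model, not this tool's code; the hyperparameters are common defaults, not the tool's.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative L-infinity PGD: repeated gradient-sign steps projected into the eps-ball."""
    orig = images.clone().detach()
    # Random start inside the eps-ball, clamped to valid pixel range.
    adv = (orig + torch.empty_like(orig).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around the original images.
        adv = (orig + (adv - orig).clamp(-eps, eps)).clamp(0, 1)
    return adv.detach()
```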
AIShield Watchtower: Dive Deep into AI's Secrets! 🔍 Open-source tool by AIShield for AI model insights & vulnerability scans. Secure your AI supply chain today! ⚙️🛡️
Lightweight Medical Image Segmentation.