- [2024/11] On the Privacy Risk of In-context Learning
- [2024/11] Membership Inference Attacks against Large Vision-Language Models
- [2024/10] Mask-based Membership Inference Attacks for Retrieval-Augmented Generation
- [2024/10] PSY: Posterior Sampling Based Privacy Enhancer in Large Language Models
- [2024/10] Identity-Focused Inference and Extraction Attacks on Diffusion Models
- [2024/10] Detecting Training Data of Large Language Models via Expectation Maximization
- [2024/09] Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data
- [2024/09] Predicting and analyzing memorization within fine-tuned Large Language Models
- [2024/09] Context-Aware Membership Inference Attacks against Pre-trained Large Language Models
- [2024/09] Order of Magnitude Speedups for LLM Membership Inference
- [2024/09] Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding
- [2024/09] Membership Inference Attacks Against In-Context Learning
- [2024/08] PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action
- [2024/08] MIA-Tuner: Adapting Large Language Models as Pre-training Text Detector
- [2024/08] Nob-MIAs: Non-biased Membership Inference Attacks Assessment on Large Language Models with Ex-Post Dataset Construction
- [2024/07] Adaptive Pre-training Data Detection for Large Language Models via Surprising Tokens
- [2024/06] Seeing Is Believing: Black-Box Membership Inference Attacks Against Retrieval Augmented Generation
- [2024/06] Inherent Challenges of Post-Hoc Membership Inference for Large Language Models
- [2024/06] Blind Baselines Beat Membership Inference Attacks for Foundation Models
- [2024/06] Noisy Neighbors: Efficient membership inference attacks against LLMs
- [2024/06] LLM Dataset Inference: Did you train on my dataset?
- [2024/05] Is My Data in Your Retrieval Database? Membership Inference Attacks Against Retrieval Augmented Generation
- [2024/05] Towards Black-Box Membership Inference Attack for Diffusion Models
- [2024/05] Membership Inference on Text-to-Image Diffusion Models via Conditional Likelihood Discrepancy
- [2024/04] Sampling-based Pseudo-Likelihood for Membership Inference Attacks
- [2024/02] Do Membership Inference Attacks Work on Large Language Models?
- [2023/12] Black-box Membership Inference Attacks against Fine-tuned Diffusion Models
- [2023/11] Practical Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration
- [2023/10] User Inference Attacks on Large Language Models
- [2023/09] An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization
- [2023/08] White-box Membership Inference Attacks against Diffusion Models
- [2023/03] Class Attribute Inference Attacks: Inferring Sensitive Class Information by Diffusion-Based Attribute Manipulations
- [2022/10] Membership Inference Attacks Against Text-to-image Generation Models
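
Many of the papers above build on, or compare against, the classic loss-thresholding baseline: score a candidate sample by its average negative log-likelihood under the target model and flag low-loss samples as likely training members. The sketch below is a minimal, generic illustration of that baseline only, not the method of any specific paper listed here; the model name (`gpt2`) and the decision threshold are placeholder assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal loss-thresholding membership inference baseline (illustrative only).
# MODEL_NAME and THRESHOLD are placeholder assumptions, not values from any
# of the papers listed above.
MODEL_NAME = "gpt2"   # hypothetical target model
THRESHOLD = 3.5       # hypothetical cutoff on mean per-token NLL

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def mean_nll(text: str) -> float:
    """Average per-token negative log-likelihood of `text` under the target model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()  # cross-entropy averaged over predicted tokens


def is_suspected_member(text: str, threshold: float = THRESHOLD) -> bool:
    """Flag the sample as a suspected training member if its loss is low."""
    return mean_nll(text) < threshold


if __name__ == "__main__":
    print(is_suspected_member("The quick brown fox jumps over the lazy dog."))
```

In practice this raw-loss attack is a weak baseline; much of the recent work above improves on it with calibration (reference models, self-prompting, neighbor or contrastive comparisons) or targets settings such as fine-tuned models, RAG databases, in-context examples, and diffusion models.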