Hazar Yueksel, Ramon Bertran, et al.
MLSys 2020
Sheng-Yen Chou, Pin-Yu Chen, et al.
NeurIPS 2023
Diffusion Models (DMs) are state-of-the-art generative models that learn a reversible corruption process from iterative noise addition and denoising. They are the backbone of many generative AI applications, such as text-to-image conditional generation. However, recent studies have shown that basic unconditional DMs (e.g., DDPM and DDIM) are vulnerable to backdoor injection, a type of output manipulation attack triggered by a maliciously embedded pattern at model input. This paper presents a unified backdoor attack framework (VillanDiffusion) to expand the current scope of backdoor analysis for DMs. Our framework covers mainstream unconditional and conditional DMs (denoising-based and score-based) and various training-free samplers for holistic evaluations. Experiments show that our unified framework facilitates the backdoor analysis of different DM configurations and provides new insights into caption-based backdoor attacks on DMs.
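For readers unfamiliar with the attack setting, a minimal sketch of the idea follows, assuming a DDPM-style forward (noising) process. The helper names (forward_noise, apply_trigger), the patch-style trigger, and the mask-blending scheme are hypothetical illustrations, not the paper's actual method.

```python
# Illustrative sketch only (not code from the paper): a DDPM-style
# forward corruption step, q(x_t | x_0) = N(sqrt(a_bar_t) * x_0,
# (1 - a_bar_t) * I), applied to an input carrying a backdoor trigger.
import numpy as np

def forward_noise(x0, alpha_bar_t, rng):
    """Sample x_t from the closed-form DDPM forward (noising) step."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * noise

def apply_trigger(x, trigger, mask):
    """Embed a trigger pattern at the model input: positions where
    mask == 1 are replaced by the trigger (hypothetical scheme)."""
    return x * (1 - mask) + trigger * mask

rng = np.random.default_rng(0)
x0 = rng.uniform(-1, 1, size=(32, 32))         # a toy "image"
trigger = np.ones((32, 32))                    # e.g., a bright patch ...
mask = np.zeros((32, 32)); mask[:4, :4] = 1.0  # ... in the top-left corner

x_poisoned = apply_trigger(x0, trigger, mask)  # backdoored model input
x_t = forward_noise(x_poisoned, alpha_bar_t=0.5, rng=rng)
print(x_t.shape)  # (32, 32)
```

In a backdoor attack of this kind, the model is trained so that inputs containing the trigger are denoised toward an attacker-chosen target output, while clean inputs behave normally.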
Saiteja Utpala, Alex Gu, et al.
NAACL 2024
Natalia Martinez Gil, Kanthi Sarpatwar, et al.
NeurIPS 2023
Natalia Martinez Gil, Dhaval Patel, et al.
UAI 2024