
VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models

About

Diffusion Models (DMs) are state-of-the-art generative models that learn a reversible corruption process from iterative noise addition and denoising. They are the backbone of many generative AI applications, such as text-to-image conditional generation. However, recent studies have shown that basic unconditional DMs (e.g., DDPM and DDIM) are vulnerable to backdoor injection, a type of output-manipulation attack triggered by a maliciously embedded pattern at the model input. This paper presents a unified backdoor attack framework (VillanDiffusion) to expand the current scope of backdoor analysis for DMs. Our framework covers mainstream unconditional and conditional DMs (denoising-based and score-based) and various training-free samplers for holistic evaluations. Experiments show that our unified framework facilitates the backdoor analysis of different DM configurations and provides new insights into caption-based backdoor attacks on DMs. Our code is available on GitHub: https://github.com/IBM/villandiffusion
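The threat model in the abstract can be sketched in a few lines. The following is a hypothetical toy illustration, not the paper's implementation: a trigger pattern embedded in the model input (here, the initial noise of an unconditional sampler) flips a backdoored denoiser into steering the sampling step toward an attacker-chosen target, while clean inputs are denoised normally. The sentinel-based trigger, the `TARGET` vector, and the one-step "DDIM-like" update are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): backdoor injection into an
# unconditional DM. The trigger lives in the model input; the poisoned
# denoiser behaves benignly on clean noise and maliciously on triggered noise.

rng = np.random.default_rng(0)
d = 8                                  # toy "image" dimension
TARGET = np.full(d, 3.0)               # attacker-chosen backdoor target

def add_trigger(x):
    """Embed a simple trigger: a large sentinel value at one fixed position."""
    x = x.copy()
    x[-1] += 20.0
    return x

def backdoored_denoiser(x):
    """Toy stand-in for a poisoned noise predictor eps_theta(x).

    Clean Gaussian inputs never reach the sentinel threshold, so they take
    the benign branch; triggered inputs take the malicious branch.
    """
    if x[-1] > 10.0:                   # trigger detected
        return x - TARGET              # noise estimate that leaves TARGET behind
    return 0.1 * x                     # benign partial denoising

def sample_step(x):
    # One deterministic DDIM-like update (toy dynamics): x_prev = x - eps_theta(x)
    return x - backdoored_denoiser(x)

x_clean = rng.standard_normal(d)       # benign initial noise
x_trig = add_trigger(rng.standard_normal(d))

out_clean = sample_step(x_clean)       # ordinary (toy) denoising
out_trig = sample_step(x_trig)         # collapses exactly onto TARGET
```

On triggered input the malicious branch returns `x - TARGET`, so the update `x - (x - TARGET)` lands exactly on the target; clean input follows the benign branch and is unaffected. This is the qualitative behavior a backdoored DM exhibits, independent of which training-free sampler is used.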

Sheng-Yen Chou, Pin-Yu Chen, Tsung-Yi Ho • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Backdoor Detection | Balanced 50% clean, 50% backdoored (test) | Detection Accuracy | 98.2 | 28 |
| Backdoor Attack on Text-to-Image Diffusion Models | Text-to-Image (T2I) Diffusion Models (evaluation set) | CLIP Score (p) | 24.03 | 8 |

Other info

Code: https://github.com/IBM/villandiffusion
