PEPPER: Perception-Guided Perturbation for Robust Backdoor Defense in Text-to-Image Diffusion Models

About

Recent studies show that text-to-image (T2I) diffusion models are vulnerable to backdoor attacks, where a trigger in the input prompt can steer generation toward harmful or unintended content. Beyond the trigger token itself, backdoor effects can spread to neighboring tokens in the text embedding space. To address this, we introduce PEPPER (PErcePtion Guided PERturbation), a backdoor defense that rewrites the caption into a semantically distant yet visually similar caption while adding unobtrusive elements. With this rewriting strategy, PEPPER disrupts the trigger embedded in the input prompt, dilutes the influence of trigger tokens, and thereby achieves enhanced robustness. Experiments show that PEPPER is particularly effective against text-encoder-based attacks, substantially reducing attack success rates while preserving generation quality. Beyond this, PEPPER can be paired with any existing defense, yielding consistently stronger and more generalizable robustness than any standalone method. Our code will be released on GitHub.
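The abstract describes the defense only at a high level: rewrite the caption so it moves away from the (potentially triggered) prompt in text-embedding space while the rendered content stays visually similar. As a rough illustration of the rewriting idea, the sketch below scores hypothetical paraphrase candidates by their cosine distance from the original prompt under a CLIP text encoder and keeps the most distant one. The paraphrase list, the selection rule, and the absence of any visual-similarity check are simplifying assumptions for illustration; this is not the paper's actual PEPPER procedure.

```python
# Illustrative sketch only, NOT the paper's PEPPER algorithm.
# Assumption: paraphrase candidates come from some external rewriter
# (e.g. an LLM or rule-based paraphraser); here they are hard-coded.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

def embed(prompt: str) -> torch.Tensor:
    """Pooled CLIP text embedding for a single prompt."""
    inputs = tokenizer(prompt, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = text_encoder(**inputs)
    return out.pooler_output.squeeze(0)

def rewrite_prompt(prompt: str, candidates: list[str]) -> str:
    """Pick the candidate farthest from the original prompt in
    text-embedding space (a crude stand-in for 'semantically distant')."""
    ref = embed(prompt)

    def distance(candidate: str) -> float:
        sim = torch.nn.functional.cosine_similarity(ref, embed(candidate), dim=0)
        return 1.0 - sim.item()

    return max(candidates, key=distance)

original = "a photo of a cat sitting on a windowsill"
candidates = [  # hypothetical paraphrases that keep the visual content
    "a feline resting by a sunlit window ledge",
    "a small cat perched near a window, indoor scene",
]
print(rewrite_prompt(original, candidates))
```

A faithful implementation would additionally require the rewritten caption to remain visually faithful when passed to the diffusion model (the "visually similar" constraint), which this toy text-embedding criterion ignores.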

Oscar Chew, Po-Yi Lu, Jayden Lin, Kuan-Hao Huang, Hsuan-Tien Lin • 2025

Related benchmarks

Task             | Dataset                        | ASR (CLIP) | Rank
Backdoor Defense | Short Prompt Dataset           | 100        | 27
Backdoor Defense | COCO long prompts (VD attack)  | 3          | 9
Backdoor Defense | COCO long prompts (EE attack)  | 3          | 6
Backdoor Defense | COCO long prompts (RR attack)  | 0.00       | 6
Backdoor Defense | COCO long prompts (TI attack)  | 0.00       | 6
