
Rethinking and Red-Teaming Protective Perturbation in Personalized Diffusion Models

About

Personalized diffusion models (PDMs) have become prominent for adapting pre-trained text-to-image models to generate images of specific subjects using minimal training data. However, PDMs are susceptible to minor adversarial perturbations, leading to significant degradation when fine-tuned on corrupted datasets. These vulnerabilities have been exploited to create protective perturbations that prevent unauthorized image generation. Existing purification methods attempt to red-team the protective perturbation to break the protection, but often over-purify images, resulting in information loss. In this work, we conduct an in-depth analysis of the fine-tuning process of PDMs through the lens of shortcut learning. We hypothesize and empirically demonstrate that adversarial perturbations induce a latent-space misalignment between images and their text prompts in the CLIP embedding space. This misalignment causes the model to erroneously associate noisy patterns with unique identifiers during fine-tuning, resulting in poor generalization. Based on these insights, we propose a systematic red-teaming framework that combines data purification and contrastive decoupling learning. We first employ off-the-shelf image restoration techniques to realign images with their original semantic content in latent space. Then, we introduce contrastive decoupling learning with noise tokens to decouple the learning of personalized concepts from spurious noise patterns. Our study not only uncovers shortcut learning vulnerabilities in PDMs but also provides a thorough evaluation framework for developing stronger protection. Extensive evaluation demonstrates the framework's advantages over existing purification methods and its robustness against adaptive perturbations.
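The two ideas in the abstract can be illustrated with a toy sketch: (1) the misalignment hypothesis says a perturbed image's embedding drifts away from its text prompt's embedding in CLIP space, which can be checked with cosine similarity; (2) contrastive decoupling pairs each identifier prompt with a variant carrying a dedicated noise token, so noise patterns can be attributed to that extra token rather than the subject identifier. The embeddings below are random stand-ins (a real check would use a CLIP encoder), and the identifier `sks` and noise token `t@` are illustrative placeholders, not the authors' code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for CLIP embeddings (fixed seed for reproducibility).
rng = np.random.default_rng(0)
text_emb = rng.normal(size=512)
clean_img_emb = text_emb + 0.1 * rng.normal(size=512)  # well aligned with the prompt
perturbed_img_emb = rng.normal(size=512)               # drifted away from the prompt

# The clean image stays close to its prompt; the perturbed one does not.
assert cosine(text_emb, clean_img_emb) > cosine(text_emb, perturbed_img_emb)

def decoupled_prompts(identifier="sks", noise_token="t@", subject="person"):
    """Build a clean/noisy prompt pair: the noise token gives the model a
    separate place to bind spurious noise patterns during fine-tuning."""
    clean = f"a photo of {identifier} {subject}"
    noisy = f"a photo of {identifier} {subject}, {noise_token} noisy pattern"
    return clean, noisy
```

In this framing, purification realigns `perturbed_img_emb` toward `text_emb`, while the prompt pair steers fine-tuning so that residual noise is absorbed by the noise token instead of the subject identifier.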

Yixin Liu, Ruoxi Chen, Xun Chen, Lichao Sun • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Anti-customization | VGG-Face2 (test) | -- | 16 |
| Image Purification | VGGFace2 FSMG (test) | IMS 0.23 | 12 |
| Image Purification | VGGFace2 ASPL (test) | IMS 0.09 | 12 |
| Image Purification | VGGFace2 EASPL (test) | IMS 0.09 | 12 |
| Image Purification | VGGFace2 MetaCloak (test) | IMS 0.38 | 12 |
| Image Purification | VGGFace2 AdvDM (test) | IMS 0.29 | 12 |
| Image Purification | VGGFace2 PhotoGuard (test) | IMS 0.24 | 12 |
| Image Purification | VGGFace2 Glaze (test) | IMS 0.31 | 12 |
| Adversarial Purification | Diffusion-based Purification (val) | LPIPS 0.271 | 7 |
