
One-Dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications

About

The prevalent use of commercial and open-source diffusion models (DMs) for text-to-image generation prompts risk mitigation to prevent undesired behaviors. Existing concept erasing methods in academia are based on full-parameter or specification-based fine-tuning, from which we observe the following issues: 1) Generation alteration towards erosion: Parameter drift during target elimination causes alterations and potential deformations across all generations, even eroding other concepts to varying degrees, which is more evident when multiple concepts are erased; 2) Transfer inability & deployment inefficiency: Previous model-specific erasure impedes the flexible combination of concepts and the training-free transfer towards other models, resulting in linear cost growth as deployment scenarios increase. To achieve non-invasive, precise, customizable, and transferable elimination, we ground our erasing framework on one-dimensional adapters that erase multiple concepts from most DMs at once across versatile erasing applications. The concept-SemiPermeable structure is injected as a Membrane (SPM) into any DM to learn targeted erasing, while the alteration and erosion phenomena are effectively mitigated via a novel Latent Anchoring fine-tuning strategy. Once obtained, SPMs can be flexibly combined and used plug-and-play in other DMs without specific re-tuning, enabling timely and efficient adaptation to diverse scenarios. During generation, our Facilitated Transport mechanism dynamically regulates the permeability of each SPM in response to different input prompts, further minimizing the impact on other concepts. Quantitative and qualitative results across ~40 concepts, 7 DMs and 4 erasing applications demonstrate the superior erasing ability of SPM. Our code and pre-tuned SPMs are available on the project page https://lyumengyao.github.io/projects/spm.
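The "one-dimensional adapter" can be read as a rank-1 residual update to a frozen weight matrix, with a scalar gate that the Facilitated Transport mechanism could modulate per prompt. A minimal numpy sketch of this idea (the function name `spm_linear` and all parameter names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def spm_linear(x, W, b, u, v, gamma=1.0):
    """Frozen linear layer plus a rank-1 'semi-permeable membrane' adapter.

    Computes y = x @ W.T + b + gamma * outer(x @ v, u), where the 1-D
    vectors u (out_features,) and v (in_features,) are the only trained
    parameters, and gamma is a scalar permeability gate set per prompt
    (gamma = 0 leaves the base model output untouched).

    Illustrative sketch only, not the authors' code.
    """
    base = x @ W.T + b                    # frozen base-layer output
    return base + gamma * np.outer(x @ v, u)  # rank-1 concept-erasing update
```

Because the update is rank-1 and purely additive, setting `gamma = 0` (or zeroing `v`) recovers the original layer exactly, which is consistent with the non-invasive, plug-and-play property described above.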

Mengyao Lyu, Yuhong Yang, Haiwen Hong, Hui Chen, Xuan Jin, Yuan He, Hui Xue, Jungong Han, Guiguang Ding · 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Text-to-Image Generation | MS-COCO | FID | 21.15 | 131
Coarse-grained Unlearning | Imagenette | Atar | 53.6 | 70
Text-to-Image Generation | MS-COCO (30K) | FID (30K) | 16.64 | 62
Text-to-Image Generation | MS-COCO 30k (val) | FID | 13.53 | 42
Concept Erasure | Van Gogh style | FID | 16.65 | 39
Explicit Content Removal | I2P | Armpits Count | 53 | 28
Style Unlearning | UnlearnCanvas | UA | 0.6094 | 25
Concept Erasure | P4D | ASR | 80.8 | 23
Image Generation | MS-COCO 30k (val) | FID | 17.4 | 22
Concept Erasure | Stanford Dogs (test) | Aer | 99.2 | 21
Showing 10 of 92 rows.
