
Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation

About

Diffusion models excel at generating visually striking content from text but can inadvertently produce undesirable or harmful content when trained on unfiltered internet data. A practical solution is to selectively remove target concepts from the model, but this may impact the remaining concepts. Prior approaches have tried to balance this trade-off by introducing a loss term to preserve neutral content or a regularization term to minimize changes in the model parameters, yet resolving it remains challenging. In this work, we propose to identify and preserve the concepts most affected by parameter changes, termed *adversarial concepts*. This approach ensures stable erasure with minimal impact on the other concepts. We demonstrate the effectiveness of our method using the Stable Diffusion model, showing that it outperforms state-of-the-art erasure methods in eliminating unwanted content while maintaining the integrity of other unrelated elements. Our code is available at https://github.com/tuananhbui89/Erasing-Adversarial-Preservation.
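The core idea above (erase a target concept while explicitly preserving the concepts whose outputs shift most under the parameter update) can be illustrated on a toy linear "model" rather than a diffusion model. The sketch below is an illustrative assumption, not the paper's implementation: `W` stands in for the model, `c_target` for the concept to erase, and the adversarial concept is picked at each step as the candidate with the largest output drift from the frozen original model `W0`, then pinned back with a preservation loss.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 4

W = rng.normal(size=(d_out, d_in))       # toy "model": linear map from concept embedding to output
W0 = W.copy()                            # frozen copy of the original model

c_target = rng.normal(size=d_in)         # embedding of the concept to erase
candidates = rng.normal(size=(5, d_in))  # pool of other concepts that should be preserved
y_neutral = np.zeros(d_out)              # desired ("neutral") output for the erased concept

lr, lam = 0.05, 1.0                      # step size and preservation weight (illustrative values)
for _ in range(200):
    # Adversarial concept: the candidate whose output has drifted most from the original model.
    drift = np.linalg.norm(candidates @ (W - W0).T, axis=1)
    c_adv = candidates[np.argmax(drift)]

    # Erasure loss ||W c_t - y_neutral||^2 and preservation loss ||(W - W0) c_adv||^2:
    # gradients of both squared losses, combined into one update.
    g_erase = np.outer(W @ c_target - y_neutral, c_target)
    g_pres = np.outer((W - W0) @ c_adv, c_adv)
    W -= lr * (g_erase + lam * g_pres)

print("erasure residual:", np.linalg.norm(W @ c_target - y_neutral))   # should be small
print("drift on candidates:", np.linalg.norm(candidates @ (W - W0).T))  # should stay modest
```

Preserving only a fixed set of neutral concepts (as in prior work) can leave exactly the most-affected concepts unprotected; re-selecting the worst-drifting concept each step, as above, targets the preservation budget where the parameter change actually bites.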

Anh Bui, Long Vuong, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Phung · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | MS-COCO | FID | 22.84 | 131 |
| Nudity Erasure | I2P | Total Count | 386 | 38 |
| Image Generation | MS-COCO 10k (test) | FID | 22.3 | 24 |
| Utility Preservation | MS-COCO 10k | FID | 28.3 | 22 |
| Utility Preservation | COCO-10K (val) | FID | 27.32 | 20 |
| Violence Erasure | I2P | Total | 447 | 12 |
| Object Erasure | Garbage Truck prompts | ESR (k=1) | 100 | 11 |
| Object Erasure | Cassette Player prompts | ESR (k=1) | 100 | 11 |
| Content Preservation | MS-COCO (30K) | FID | 14.71 | 11 |
| Object Erasure | Five Objects prompts | ESR (1 Object) | 92.2 | 11 |

Showing 10 of 38 rows.
