
Erasing Concepts from Diffusion Models

About

Motivated by recent advancements in text-to-image diffusion, we study erasure of specific concepts from the model's weights. While Stable Diffusion has shown promise in producing explicit or realistic artwork, it has raised concerns regarding its potential for misuse. We propose a fine-tuning method that can erase a visual concept from a pre-trained diffusion model, given only the name of the style and using negative guidance as a teacher. We benchmark our method against previous approaches that remove sexually explicit content and demonstrate its effectiveness, performing on par with Safe Latent Diffusion and censored training. To evaluate artistic style removal, we conduct experiments erasing five modern artists from the network and conduct a user study to assess the human perception of the removed styles. Unlike previous methods, our approach removes concepts from a diffusion model permanently rather than modifying the output at inference time, so it cannot be circumvented even if a user has access to the model weights. Our code, data, and results are available at https://erasing.baulab.info/
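The "negative guidance as a teacher" idea in the abstract can be sketched as a training objective: a frozen copy of the model provides a negatively guided noise prediction as the target, and the fine-tuned model's concept-conditioned prediction is regressed toward it. The sketch below is a hedged illustration, not the authors' code: the function names and the guidance scale `eta` are placeholders, and plain numpy arrays stand in for the U-Net's noise predictions.

```python
import numpy as np

def erasure_target(eps_uncond, eps_cond, eta=1.0):
    """Teacher target from the frozen model: the unconditional
    prediction, guided *away* from the concept-conditioned one."""
    return eps_uncond - eta * (eps_cond - eps_uncond)

def erasure_loss(eps_student_cond, eps_uncond, eps_cond, eta=1.0):
    """MSE between the fine-tuned (student) model's conditional
    prediction and the negatively guided teacher target."""
    target = erasure_target(eps_uncond, eps_cond, eta)
    return float(np.mean((eps_student_cond - target) ** 2))

# Toy example: random arrays in place of real noise predictions.
rng = np.random.default_rng(0)
eps_u = rng.standard_normal((4, 4))  # frozen model, unconditional
eps_c = rng.standard_normal((4, 4))  # frozen model, conditioned on the concept
loss = erasure_loss(eps_c, eps_u, eps_c)  # student initialized at the frozen weights
```

With `eta = 0` the target collapses to the unconditional prediction; a larger `eta` pushes the edited model's output further from the erased concept, which is why only the concept's name is needed as supervision.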

Rohit Gandikota, Joanna Materzynska, Jaden Fiotto-Kaufman, David Bau • 2023

Related benchmarks

Task                       | Dataset                       | Metric    | Result | Rank
Text-to-Image Generation   | MS-COCO (val)                 | FID       | 32.47  | 202
Text-to-Image Generation   | MS-COCO                       | FID       | 21.01  | 131
Continual Concept Learning | 10 Sequential Concepts (test) | UA        | 99     | 70
Coarse-grained Unlearning  | Imagenette                    | Atar      | 5.2    | 70
Text-to-Image Generation   | MS-COCO (30K)                 | FID (30K) | 16.88  | 62
Text-to-Image Generation   | COCO                          | FID       | 38.15  | 61
Text-to-Image Generation   | MSCOCO 30K                    | FID       | 15.19  | 54
Text-to-Image Generation   | COCO 30k                      | FID       | 13.68  | 53
Class-wise Forgetting      | ImageNette (val)              | FID       | 0.8    | 44
Text-to-Image Generation   | MS-COCO 30k (val)             | FID       | 16.87  | 42

Showing 10 of 280 rows.
