ConceptPrune: Concept Editing in Diffusion Models via Skilled Neuron Pruning
About
While large-scale text-to-image diffusion models have demonstrated impressive image-generation capabilities, there are significant concerns about their potential misuse for generating unsafe content, violating copyright, and perpetuating societal biases. Recently, the text-to-image generation community has begun addressing these concerns by editing or unlearning undesired concepts from pre-trained models. However, these methods often rely on data-intensive and inefficient fine-tuning, or on various forms of token remapping, which leaves them susceptible to adversarial jailbreaks. In this paper, we present ConceptPrune, a simple and effective training-free approach: we first identify critical regions within pre-trained models responsible for generating undesirable concepts, which enables straightforward concept unlearning via weight pruning. Experiments across a range of concepts, including artistic styles, nudity, object erasure, and gender debiasing, demonstrate that target concepts can be efficiently erased by pruning a tiny fraction (approximately 0.12%) of total weights, while also supporting multi-concept erasure and robustness against various white-box and black-box adversarial attacks.
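The core idea, identifying "skilled" neurons that fire preferentially on a target concept and zeroing their weights, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy on a single feed-forward layer, not the paper's exact criterion; the activation data, scoring rule, and pruning ratio here are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward layer standing in for one FFN block of a diffusion U-Net.
W = rng.normal(size=(64, 32))  # 64 neurons, each with 32 outgoing weights

# Hypothetical pre-recorded activations: rows are neurons, columns are prompts.
acts_concept = np.abs(rng.normal(loc=1.5, size=(64, 8)))  # target-concept prompts
acts_neutral = np.abs(rng.normal(loc=0.5, size=(64, 8)))  # reference prompts

# Score each neuron by how much more strongly it activates on concept
# prompts than on neutral prompts (a simple importance proxy).
score = acts_concept.mean(axis=1) - acts_neutral.mean(axis=1)

# Prune the top-scoring "skilled" neurons by zeroing their outgoing weights;
# the 1% ratio is arbitrary for this demo.
k = max(1, int(0.01 * W.shape[0]))
skilled = np.argsort(score)[-k:]
W_pruned = W.copy()
W_pruned[skilled, :] = 0.0
```

Because only the rows for the selected neurons are zeroed, the rest of the layer (and the model's general capability it stands in for) is left untouched.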
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | MS-COCO (val) | FID | 29.56 | 112 |
| Concept Unlearning | UnlearnDiffAtk | UnlearnDiffAtk | 64.8 | 36 |
| Image Generation | MS-COCO 30k (val) | FID | 18.4 | 22 |
| Concept Unlearning | Ring-a-Bell | Ring-A-Bell Score | 59.8 | 20 |
| Machine Unlearning | Imagenette | Accuracy (garbage truck) | 5.3 | 18 |
| Concept Unlearning (NSFW) | IGMU (standard evaluation) | FSR | 89.68 | 12 |
| Concept Unlearning Preservation | NSFW | CSDR | 7.4 | 12 |
| Common Robustness | I2P | ASR | 71.83 | 12 |
| Common Robustness | MMA | ASR | 75.7 | 12 |
| Explicit Content Unlearning | I2P | Armpits | 36 | 11 |