
Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models

About

Diffusion models (DMs) have achieved remarkable success in text-to-image generation, but they also pose safety risks, such as the potential generation of harmful content and copyright violations. Machine unlearning techniques, also known as concept erasing, have been developed to address these risks. However, these techniques remain vulnerable to adversarial prompt attacks, which can cause post-unlearning DMs to regenerate undesired images containing concepts (such as nudity) that were meant to be erased. This work aims to enhance the robustness of concept erasing by integrating the principle of adversarial training (AT) into machine unlearning, resulting in the robust unlearning framework AdvUnlearn. However, achieving this effectively and efficiently is highly nontrivial. First, we find that a straightforward implementation of AT compromises DMs' image generation quality post-unlearning. To address this, we develop a utility-retaining regularization on an additional retain set, optimizing the trade-off between concept erasure robustness and model utility in AdvUnlearn. Moreover, we identify the text encoder as a more suitable module for robustification than the UNet, ensuring unlearning effectiveness. Furthermore, the acquired text encoder can serve as a plug-and-play robust unlearner for various DM types. Empirically, we perform extensive experiments to demonstrate the robustness advantage of AdvUnlearn across various DM unlearning scenarios, including the erasure of nudity, objects, and style concepts. In addition to robustness, AdvUnlearn also achieves a balanced trade-off with model utility. To our knowledge, this is the first work to systematically explore robust DM unlearning through AT, setting it apart from existing methods that overlook robustness in concept erasing. Codes are available at: https://github.com/OPTML-Group/AdvUnlearn
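The abstract describes a min-max objective: an inner adversarial step searches for a prompt perturbation that revives the erased concept, while the outer step updates the model to erase the concept under that attack, regularized to stay close to its original behavior on a retain set. The toy sketch below illustrates that structure only; the actual AdvUnlearn method trains a diffusion model's text encoder with gradient-based adversarial prompt optimization, and names such as `inner_attack` and `lambda_retain` are illustrative assumptions, not the paper's API.

```python
import numpy as np

# Toy sketch of the AdvUnlearn-style objective. A linear map stands in for
# the text encoder, and crude random search stands in for the inner
# adversarial prompt optimization. Illustrative only.
rng = np.random.default_rng(0)

dim = 8
W0 = rng.normal(size=(dim, dim)) * 0.3   # frozen original "text encoder"
W = W0.copy()                            # trainable copy being unlearned
concept = rng.normal(size=dim)           # embedding of the concept to erase
retain = rng.normal(size=(4, dim))       # retain-set prompt embeddings
anchor = retain @ W0                     # original outputs on the retain set


def inner_attack(W, x, steps=20, eps=0.5):
    """Inner maximization (random search): perturb the prompt embedding so
    the encoder output regains similarity with the erased concept."""
    best, best_sim = x, -np.inf
    for _ in range(steps):
        cand = x + rng.uniform(-eps, eps, size=x.shape)
        sim = (cand @ W) @ concept
        if sim > best_sim:
            best, best_sim = cand, sim
    return best


def advunlearn_loss(W, lambda_retain=1.0):
    """Outer objective: suppress the concept even under the inner attack,
    while a utility-retaining term keeps retain-set outputs near the
    original model's outputs."""
    x_adv = inner_attack(W, concept)
    erase = ((x_adv @ W) @ concept) ** 2            # drive adversarial similarity to 0
    utility = np.mean((retain @ W - anchor) ** 2)   # utility-retaining regularization
    return erase + lambda_retain * utility
```

With `W` still equal to the original weights, the utility term is zero and the loss is driven entirely by the erase term; `lambda_retain` then controls the robustness-utility trade-off the abstract refers to as the outer minimization pulls `W` away from `W0`.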

Yimeng Zhang, Xin Chen, Jinghan Jia, Yihua Zhang, Chongyu Fan, Jiancheng Liu, Mingyi Hong, Ke Ding, Sijia Liu• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Concept Erasure | Van Gogh style | FID | 19.42 | 39 |
| Nudity Erasure | I2P | Total Count | 203 | 38 |
| Concept Unlearning | UnlearnDiffAtk | UnlearnDiffAtk | 0.211 | 36 |
| Explicit Content Removal | I2P | Armpits Count | 12 | 28 |
| Image Generation | MS-COCO 10k (test) | FID | 22.37 | 24 |
| Text-to-Image Generation | Non-targeted concepts | CLIP Score | 28.6 | 18 |
| Concept Unlearning | I2P | I2P | 0.26 | 17 |
| Generation Prevention | IP character | CLIPe | 0.166 | 16 |
| Nudity Erasure | I2P 1.0 (test) | ASR (UD Attack) | 3.4 | 16 |
| Concept Erasure | Van-Gogh artistic style 1.4 (test) | FID | 14.45 | 15 |

Showing 10 of 53 rows.
