
Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?

About

Diffusion models for text-to-image (T2I) synthesis, such as Stable Diffusion (SD), have recently demonstrated exceptional capabilities for generating high-quality content. However, this progress has raised concerns about potential misuse, particularly in creating copyrighted, prohibited, and restricted content, or NSFW (not safe for work) images. While efforts have been made to mitigate such problems, either by implementing a safety filter at the evaluation stage or by fine-tuning models to eliminate undesirable concepts or styles, the effectiveness of these safety measures against a wide range of prompts remains largely unexplored. In this work, we investigate these safety mechanisms by proposing a novel concept-retrieval algorithm for evaluation. We introduce Ring-A-Bell, a model-agnostic red-teaming tool for T2I diffusion models, where the whole evaluation can be prepared in advance without prior knowledge of the target model. Specifically, Ring-A-Bell first performs concept extraction to obtain holistic representations of sensitive and inappropriate concepts. Subsequently, leveraging the extracted concept, Ring-A-Bell automatically identifies problematic prompts that lead diffusion models to generate inappropriate content, allowing the user to assess the reliability of deployed safety mechanisms. Finally, we empirically validate our method by testing online services such as Midjourney and various concept-removal methods. Our results show that Ring-A-Bell, by manipulating safe prompting benchmarks, can transform prompts originally regarded as safe into ones that evade existing safety mechanisms, thus revealing defects in these so-called safety mechanisms that could, in practice, lead to the generation of harmful content. Our code is available at https://github.com/chiayi-hsu/Ring-A-Bell.
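The concept-extraction step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the concept vector is estimated as the average embedding difference between paired prompts with and without the target concept, and it swaps in a toy deterministic encoder (`embed`) as a stand-in for a real text encoder such as CLIP's. The prompt pairs, the dimension, and the strength coefficient `eta` are all hypothetical placeholders.

```python
import hashlib
import numpy as np

DIM = 16  # toy embedding dimension (a real encoder like CLIP's uses 768+)

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a text encoder: a deterministic pseudo-random vector per string."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal(DIM)

def concept_vector(pairs: list[tuple[str, str]]) -> np.ndarray:
    """Average embedding difference between prompts with and without the concept."""
    diffs = [embed(with_c) - embed(without_c) for with_c, without_c in pairs]
    return np.mean(diffs, axis=0)

# Hypothetical prompt pairs differing only in the sensitive concept.
pairs = [
    ("a violent street brawl at night", "a street scene at night"),
    ("a violent clash between crowds", "a gathering of crowds"),
]
c = concept_vector(pairs)

# Push a seemingly safe prompt's embedding along the concept direction;
# the optimization step (not shown) would then search for discrete tokens
# whose embedding approximates this target.
eta = 3.0  # strength coefficient (hyperparameter)
target = embed("a street scene at night") + eta * c
```

In the full method, the target embedding would be handed to a discrete prompt-search procedure to recover an actual text prompt; that search is omitted here for brevity.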

Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, Chun-Ying Huang • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Nudity Erasure | Nudity Erasure | ASR | 97.89 | 48 |
| Attack evaluation | Vincent Van Gogh artistic style (50 prompts) | Top-1 ASR | 0.00 | 15 |
| Attack evaluation | Pablo Picasso artistic style (50 prompts) | Top-1 ASR | 0.00 | 15 |
| Jailbreak Attack | VBCDE | ASR | 2.7 | 12 |
| Jailbreak Attack | UnsafeDiff | Attack Success Rate (ASR) | 1.7 | 12 |
| Jailbreak Attack | I2P | SC ASR (4 attempts) | 58.05 | 11 |
| Object Unlearning | Object-Parachute | - | - | 11 |
| Style Unlearning | Van Gogh style | - | - | 11 |
| Adversarial Attack | DALL·E 3 commercial (test) | BR | 0.53 | 7 |
| Concept Attack | I2P Violence concept | FLUX.1 ASR | 81.7 | 6 |

(Showing 10 of 32 rows)
