
DiffGuard: Text-Based Safety Checker for Diffusion Models

About

Recent advances in diffusion models have enabled image generation from text, with powerful closed-source models like DALL-E and Midjourney leading the way. Open-source alternatives, such as StabilityAI's Stable Diffusion, offer comparable capabilities. These open-source models, hosted on Hugging Face, ship with safety filters designed to prevent the generation of explicit images. This paper first reveals the limitations of those filters and then presents a novel text-based safety filter that outperforms existing solutions. The research is driven by the critical need to address the misuse of AI-generated content, especially in the context of information warfare. DiffGuard improves filtering efficacy, surpassing the best existing filters by over 14%.
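To make the role of a text-based safety filter concrete, the sketch below shows where such a check sits in a text-to-image pipeline: the prompt is screened before it ever reaches the diffusion model. DiffGuard itself is a learned classifier; the toy keyword-based stand-in here (including the `BLOCKLIST`, `is_harmful`, and `generate_image` names) is purely illustrative and not from the paper.

```python
# Hypothetical sketch of the interface a text-based safety filter exposes.
# DiffGuard uses a trained classifier; this keyword check is a placeholder
# that only illustrates the filter-before-generation control flow.
BLOCKLIST = {"nsfw", "gore"}  # illustrative terms, not from the paper


def is_harmful(prompt: str) -> bool:
    """Return True if the prompt should be rejected before generation."""
    tokens = prompt.lower().split()
    return any(tok in BLOCKLIST for tok in tokens)


def generate_image(prompt: str) -> str:
    """Run the safety check, then call the (stubbed) diffusion model."""
    if is_harmful(prompt):
        raise ValueError("prompt rejected by safety filter")
    return f"<image for: {prompt}>"  # stand-in for the actual model call
```

In a real deployment the keyword check would be replaced by the trained text classifier, but the calling code stays the same: reject at the prompt stage, before any image is synthesized.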

Massine El Khader, Elias Al Bouzidi, Abdellah Oumida, Mohammed Sbaihi, Eliott Binard, Jean-Philippe Poli, Wassila Ouerdane, Boussad Addad, Katarzyna Kapusta • 2024

Related benchmarks

Task                      Dataset       Metric     Result  Rank
Harmful prompt detection  ViSU          Precision  27      11
Harmful prompt detection  COCO          Accuracy   99      6
Harmful prompt detection  adv-ViSU      Precision  97      6
Harmful prompt detection  NSFW56k       Accuracy   89      6
Harmful prompt detection  I2P           Accuracy   28      6
Harmful prompt detection  adv-MMA       Precision  89      6
Harmful prompt detection  MMA           Precision  47      6
Harmful prompt detection  Sneakyprompt  Precision  46      6
Harmful prompt detection  ViSU-sp       Precision  92      5
Harmful prompt detection  ViSU-fr       Precision  81      5

Showing 10 of 14 rows
