DiffGuard: Text-Based Safety Checker for Diffusion Models
About
Recent advances in Diffusion Models have enabled the generation of images from text, with powerful closed-source models like DALL-E and Midjourney leading the way. However, open-source alternatives, such as StabilityAI's Stable Diffusion, offer comparable capabilities. These open-source models, hosted on Hugging Face, ship with built-in safety filters designed to prevent the generation of explicit images. This paper first reveals the limitations of these filters and then presents a novel text-based safety filter that outperforms existing solutions. Our research is driven by the critical need to address the misuse of AI-generated content, especially in the context of information warfare. DiffGuard improves filtering efficacy, outperforming the best existing filters by over 14%.
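
To illustrate the general idea of a text-based safety checker that screens prompts before they reach the diffusion pipeline, here is a minimal sketch built on a Hugging Face text-classification pipeline. The checkpoint name `my-org/prompt-safety-classifier`, the `is_prompt_safe` helper, and the `"unsafe"` label are hypothetical placeholders, not DiffGuard's actual model or API.

```python
# Minimal sketch of a text-based prompt safety checker. The model name
# below is a placeholder; substitute any binary safe/unsafe prompt
# classifier. This is NOT DiffGuard's published implementation.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="my-org/prompt-safety-classifier",  # hypothetical checkpoint
)

def is_prompt_safe(prompt: str, threshold: float = 0.5) -> bool:
    """Return True if the prompt may be forwarded to the diffusion model."""
    result = classifier(prompt)[0]  # e.g. {"label": "unsafe", "score": 0.97}
    if result["label"] == "unsafe" and result["score"] >= threshold:
        return False
    return True

prompt = "a watercolor painting of a lighthouse at dawn"
if is_prompt_safe(prompt):
    print("Prompt accepted; forwarding to the diffusion pipeline.")
else:
    print("Prompt blocked by the safety filter.")
```

Filtering the text prompt rather than the generated image lets the check run before any compute is spent on diffusion sampling, which is the design point the paper's filter targets.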
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Harmful prompt detection | ViSU | Precision | 27 | 11 |
| Harmful prompt detection | COCO | Accuracy | 99 | 6 |
| Harmful prompt detection | adv-ViSU | Precision | 97 | 6 |
| Harmful prompt detection | NSFW56k | Accuracy | 89 | 6 |
| Harmful prompt detection | I2P | Accuracy | 28 | 6 |
| Harmful prompt detection | adv-MMA | Precision | 89 | 6 |
| Harmful prompt detection | MMA | Precision | 47 | 6 |
| Harmful prompt detection | Sneakyprompt | Precision | 46 | 6 |
| Harmful prompt detection | ViSU-sp | Precision | 92 | 5 |
| Harmful prompt detection | ViSU-fr | Precision | 81 | 5 |
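
The Precision and Accuracy values above are the standard binary-classification metrics for harmful-prompt detection. A minimal sketch of how they are computed with scikit-learn follows; the labels are made up for illustration, not drawn from the benchmark datasets.

```python
# Illustrative metric computation for harmful-prompt detection.
# 1 = harmful prompt, 0 = benign prompt; values are invented examples.
from sklearn.metrics import accuracy_score, precision_score

y_true = [1, 1, 0, 0, 1, 0]  # ground-truth labels
y_pred = [1, 0, 0, 0, 1, 1]  # the filter's verdicts

print(f"Precision: {precision_score(y_true, y_pred):.2f}")  # TP / (TP + FP)
print(f"Accuracy:  {accuracy_score(y_true, y_pred):.2f}")   # correct / total
```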