Universal Prompt Optimizer for Safe Text-to-Image Generation
About
Text-to-Image (T2I) models have shown strong performance in generating images from textual prompts. However, these models are vulnerable to unsafe inputs that elicit unsafe content, such as sexual, harassment, or illegal-activity images. Existing defenses based on image checkers, model fine-tuning, or embedding blocking are impractical in real-world applications. Hence, we propose the first universal prompt optimizer for safe T2I generation (POSI) in black-box scenarios. We first construct a dataset of toxic-clean prompt pairs using GPT-3.5 Turbo. To guide the optimizer to convert toxic prompts into clean prompts while preserving semantic information, we design a novel reward function that measures the toxicity and text alignment of generated images, and we train the optimizer through Proximal Policy Optimization. Experiments show that our approach effectively reduces the likelihood of various T2I models generating inappropriate images, with no significant impact on text alignment. It can also be flexibly combined with other methods to achieve better performance. Our code is available at https://github.com/wu-zongyu/POSI.
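The reward described above balances two signals: how unsafe the generated image is, and how well it still matches the original prompt. The sketch below is a minimal, hypothetical illustration of such a combined reward; the function name `posi_reward`, the weight `alpha`, and the linear combination are assumptions for clarity, not the authors' exact formulation, and the toxicity and alignment scores would in practice come from an image safety classifier and a model such as CLIP.

```python
def posi_reward(toxicity: float, text_alignment: float,
                alpha: float = 0.5) -> float:
    """Combine image toxicity and prompt-image alignment into one scalar reward.

    toxicity: probability in [0, 1] that the generated image is unsafe,
        e.g. from an image safety classifier (placeholder here).
    text_alignment: similarity in [0, 1] between the image and the
        original prompt, e.g. a normalized CLIP score (placeholder here).
    alpha: trade-off weight between safety and alignment (assumed value;
        the actual balance would be tuned).
    """
    # Lower toxicity and higher alignment both increase the reward,
    # so the PPO-trained optimizer is pushed toward clean prompts
    # that still preserve the original semantics.
    return alpha * (1.0 - toxicity) + (1.0 - alpha) * text_alignment


# A perfectly safe, perfectly aligned generation gets the maximum reward.
print(posi_reward(0.0, 1.0))  # → 1.0
```

During PPO training, this scalar would be computed on images generated from each rewritten prompt and used as the policy's return signal.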
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Safe Text-to-Image Generation | CoPro V2 (test) | IP | 16 | 23 |
| Safe Text-to-Image Generation | Unsafe Diffusion (UD) | IP Score | 19 | 23 |
| Safe Text-to-Image Generation | COCO 3K | FID | 34.26 | 23 |
| Safe Text-to-Image Generation | I2P | Inappropriate Probability | 15 | 23 |
| Safe Text-to-Image Generation | MMA-Diffusion | -- | -- | 20 |
| Benign Image Generation Preservation | COCO prompts 2017 | CLIP Score | 25 | 9 |
| Image Generation | COCO prompts 2017 | Average Latency (s) | 6.15 | 9 |
| NSFW Content Moderation | Malicious NSFW datasets | Unsafe Ratio (Sexually Explicit) | 45.17 | 9 |
| Text-to-Image Safety Guarding | SneakyPrompt-N | Unsafe Ratio | 31.66 | 9 |
| Text-to-Image Safety Guarding | SneakyPrompt-P | Unsafe Ratio | 25.13 | 9 |