
SAFREE: Training-Free and Adaptive Guard for Safe Text-to-Image and Video Generation

About

Recent advances in diffusion models have significantly enhanced their ability to generate high-quality images and videos, but they have also increased the risk of producing unsafe content. Existing unlearning/editing-based methods for safe generation remove harmful concepts from models but face several challenges: (1) they cannot instantly remove harmful concepts without training; (2) their safe generation capabilities depend on collected training data; and (3) they alter model weights, risking quality degradation on content unrelated to toxic concepts. To address these, we propose SAFREE, a novel training-free approach for safe text-to-image (T2I) and text-to-video (T2V) generation that does not alter the model's weights. Specifically, we detect a subspace corresponding to a set of toxic concepts in the text embedding space and steer prompt embeddings away from this subspace, thereby filtering out harmful content while preserving the intended semantics. To balance the trade-off between filtering toxicity and preserving safe concepts, SAFREE incorporates a novel self-validating filtering mechanism that dynamically adjusts the denoising steps when applying the filtered embeddings. Additionally, we incorporate adaptive re-attention mechanisms within the diffusion latent space to selectively diminish the influence of features related to toxic concepts at the pixel level. As a result, SAFREE ensures coherent safety checking while preserving the fidelity, quality, and safety of the output. SAFREE achieves state-of-the-art performance in suppressing unsafe content in T2I generation compared to training-free baselines and effectively filters targeted concepts while maintaining high-quality images. It also shows competitive results against training-based methods. We extend SAFREE to various T2I backbones and to T2V tasks, showcasing its flexibility and generalization. SAFREE provides a robust and adaptable safeguard for ensuring safe visual generation.
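The core idea of steering prompt embeddings away from a toxic subspace can be sketched as an orthogonal projection: build a basis from toxic-concept embeddings and subtract the prompt embedding's component inside that subspace. This is a minimal NumPy sketch of the projection step only, not the authors' implementation; the function name, the `strength` parameter, and the fixed projection (rather than SAFREE's adaptive, self-validating variant) are illustrative assumptions.

```python
import numpy as np

def project_away_from_toxic(prompt_emb, toxic_embs, strength=1.0):
    """Steer a prompt embedding away from the subspace spanned by
    toxic-concept embeddings (illustrative sketch, not SAFREE itself).

    prompt_emb : (d,) text embedding of the user prompt
    toxic_embs : (k, d) embeddings of k toxic concept phrases
    strength   : fraction of the toxic component to remove (1.0 = full)
    """
    # Orthonormal basis of the toxic subspace via QR decomposition.
    q, _ = np.linalg.qr(toxic_embs.T)          # q: (d, k)
    # Component of the prompt embedding lying inside the toxic subspace.
    toxic_component = q @ (q.T @ prompt_emb)   # (d,)
    # Remove (a fraction of) that component before conditioning the model.
    return prompt_emb - strength * toxic_component
```

With `strength=1.0` the result is orthogonal to every toxic direction; SAFREE additionally decides per denoising step how strongly to apply the filtered embedding, which this static sketch omits.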

Jaehong Yoon, Shoubin Yu, Vaidehi Patil, Huaxiu Yao, Mohit Bansal · 2024

Related benchmarks

Task                            Dataset                  Metric                     Result   Rank
Text-to-Image Generation        COCO                     FID                        43.78    51
Concept Unlearning              UnlearnDiffAtk           UnlearnDiffAtk             0.282    36
Text-to-Image Generation        COCO 30k                 FID                        25.29    29
Safe Text-to-Image Generation   I2P                      Inappropriate Probability  9        23
Safe Text-to-Image Generation   CoPro V2 (test)          IP                         9        23
Safe Text-to-Image Generation   Unsafe Diffusion (UD)    IP Score                   11       23
Safe Text-to-Image Generation   COCO 3K                  FID                        37.87    23
Concept Unlearning              Ring-a-Bell              Ring-A-Bell Score          11.4     20
Safe Text-to-Image Generation   MMA-Diffusion            Automatic Safety Rate      60.1     20
Text-to-Image Generation        Non-targeted concepts    CLIP Score                 31.1     18
Showing 10 of 34 rows
