
Modular Energy Steering for Safe Text-to-Image Generation with Foundation Models

About

Controlling the behavior of text-to-image generative models is critical for safe and practical deployment. Existing safety approaches typically rely on model fine-tuning or curated datasets, which can degrade generation quality or limit scalability. We propose an inference-time steering framework that leverages gradient feedback from frozen pretrained foundation models to guide the generation process without modifying the underlying generator. Our key observation is that vision-language foundation models encode rich semantic representations that can be repurposed as off-the-shelf supervisory signals during generation. By injecting such feedback through clean latent estimates at each sampling step, our method formulates safety steering as an energy-based sampling problem. This design enables modular, training-free safety control that is compatible with both diffusion and flow-matching models and can generalize across diverse visual concepts. Experiments demonstrate state-of-the-art robustness against NSFW red-teaming benchmarks and effective multi-target steering, while preserving high generation quality on benign non-targeted prompts. Our framework provides a principled approach for utilizing foundation models as semantic energy estimators, enabling reliable and scalable safety control for text-to-image generation.
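The core loop described in the abstract — form a clean latent estimate at each sampling step, score it with a frozen model, and inject the gradient of that score as an energy-steering term — can be sketched in miniature. The snippet below is a toy illustration, not the paper's implementation: the `energy` function stands in for a frozen vision-language model's semantic score (here a simple Gaussian bump around an "unsafe" concept region), and the denoiser and sampler updates are placeholder dynamics. The names `energy`, `energy_grad`, and `steered_sampling` are illustrative, not from the paper.

```python
import numpy as np

def energy(x0_hat, unsafe_center):
    # Toy semantic energy: high near the "unsafe" concept region, low far away.
    # In the paper's setting this score would come from a frozen
    # vision-language foundation model applied to the clean estimate.
    d2 = np.sum((x0_hat - unsafe_center) ** 2)
    return np.exp(-d2)

def energy_grad(x0_hat, unsafe_center):
    # Analytic gradient of the toy energy w.r.t. the clean estimate.
    d = x0_hat - unsafe_center
    return -2.0 * d * np.exp(-np.sum(d ** 2))

def steered_sampling(x_T, unsafe_center, n_steps=50, guidance=5.0, seed=0):
    """Toy sampler: at each step, steer the latent down the energy
    landscape evaluated at a clean-latent estimate x0_hat."""
    rng = np.random.default_rng(seed)
    x = x_T.copy()
    for t in range(n_steps, 0, -1):
        # Placeholder denoiser: clean estimate shrinks toward the data mode.
        x0_hat = x * (1.0 - t / (n_steps + 1))
        # Energy-based steering step: descend the (frozen-model) energy,
        # pushing the trajectory away from the unsafe concept region.
        x = x - guidance * energy_grad(x0_hat, unsafe_center)
        # Placeholder ancestral-style update: shrink plus small fresh noise.
        x = 0.98 * x + 0.02 * rng.standard_normal(x.shape)
    return x
```

Because the energy term only touches the sampling update, the generator stays frozen: swapping in a different frozen scorer, or summing several energies for multi-target steering, changes one line of the loop.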

Yaoteng Tan, Zikui Cai, M. Salman Asif • 2026

Related benchmarks

Task                                     Dataset                            Metric  Result  Rank
Text-to-Image Generation                 COCO 30k                           FID     20.73   53
Safe generation against nudity prompts   MMA-Diffusion                      ASR     11.5    19
NSFW suppression                         UnlearnDiffAtk                     ASR     7.8     18
NSFW suppression                         Ring-a-Bell                        ASR     1.3     18
NSFW suppression                         P4D                                ASR     11.2    16
NSFW Content Suppression                 NSFW generation steering prompts   P4D     0.082   2
