Modular Energy Steering for Safe Text-to-Image Generation with Foundation Models
About
Controlling the behavior of text-to-image generative models is critical for safe and practical deployment. Existing safety approaches typically rely on model fine-tuning or curated datasets, which can degrade generation quality or limit scalability. We propose an inference-time steering framework that leverages gradient feedback from frozen pretrained foundation models to guide the generation process without modifying the underlying generator. Our key observation is that vision-language foundation models encode rich semantic representations that can be repurposed as off-the-shelf supervisory signals during generation. By injecting such feedback through clean latent estimates at each sampling step, our method formulates safety steering as an energy-based sampling problem. This design enables modular, training-free safety control that is compatible with both diffusion and flow-matching models and generalizes across diverse visual concepts. Experiments demonstrate state-of-the-art robustness on NSFW red-teaming benchmarks and effective multi-target steering, while preserving high generation quality on benign, non-targeted prompts. Our framework provides a principled approach for utilizing foundation models as semantic energy estimators, enabling reliable and scalable safety control for text-to-image generation.
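
To make the energy-based steering idea concrete, the sketch below shows one plausible realization of a single guided denoising step, assuming a standard diffusion parameterization: a clean latent estimate is formed from the current noise prediction, scored by a frozen vision-language energy function, and the resulting gradient is folded back into the noise prediction. This is an illustrative sketch, not the authors' released implementation; the names `unet`, `decode`, `energy_fn`, `prompt_emb`, and `scale` are hypothetical placeholders.

```python
# Minimal sketch of energy-guided denoising (illustrative, not the paper's code).
# Assumptions: `unet(x_t, t, prompt_emb)` predicts noise for the frozen generator,
# `decode(x0_hat)` maps latents to images differentiably, and `energy_fn(image)`
# scores images with a frozen vision-language model (higher = closer to the
# concept being steered away from).

import torch

def guided_step(x_t, t, alpha_bar_t, unet, decode, energy_fn, prompt_emb, scale=1.0):
    """One denoising step with inference-time energy steering."""
    x_t = x_t.detach().requires_grad_(True)

    # Noise prediction from the frozen generator backbone.
    eps = unet(x_t, t, prompt_emb)

    # Clean latent estimate x0_hat from the standard diffusion posterior mean.
    x0_hat = (x_t - (1.0 - alpha_bar_t) ** 0.5 * eps) / (alpha_bar_t ** 0.5)

    # Semantic energy of the decoded clean estimate under the frozen foundation model.
    energy = energy_fn(decode(x0_hat))

    # Gradient of the energy w.r.t. the noisy latent gives the steering direction.
    grad = torch.autograd.grad(energy.sum(), x_t)[0]

    # Shift the noise prediction so the sampler moves toward low-energy (safe) regions.
    eps_guided = eps + scale * (1.0 - alpha_bar_t) ** 0.5 * grad

    return eps_guided.detach()
```

Because the guidance only modifies the noise prediction at sampling time, the same routine can in principle be dropped into any sampler that exposes per-step noise estimates, which is what makes the approach modular and training-free.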
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | COCO 30k | FID | 20.73 | 53 |
| Safe generation against nudity prompts | MMA-Diffusion | ASR | 11.5 | 19 |
| NSFW suppression | Unlearn DiffAtk | ASR | 7.8 | 18 |
| NSFW suppression | Ring-a-Bell | ASR | 1.3 | 18 |
| NSFW suppression | P4D | ASR | 11.2 | 16 |
| NSFW Content Suppression | NSFW generation steering prompts | P4D | 0.082 | 2 |