
Beyond the Noise: Aligning Prompts with Latent Representations in Diffusion Models

About

Conditional diffusion models rely on language-to-image alignment methods to steer generation towards semantically accurate outputs. Despite the success of this architecture, misalignment and hallucinations remain common issues, requiring automatic misalignment detection tools to improve quality, for example by applying them in a Best-of-N (BoN) post-generation setting. Unfortunately, measuring alignment after generation is expensive, since prompt adherence can only be determined once the full generation has finished. In contrast, this work hypothesizes that text/image misalignments can be detected early in the denoising process, enabling real-time alignment assessment without waiting for the complete generation. In particular, we propose NoisyCLIP, a method that measures semantic alignment in the noisy latent space. This work is the first to explore and benchmark prompt-to-latent misalignment detection during image generation using dual encoders in the reverse diffusion process. We evaluate NoisyCLIP qualitatively and quantitatively and find that it reduces computational cost by 50% while achieving 98% of CLIP's alignment performance in BoN settings. This approach enables real-time alignment assessment during generation, reducing costs without sacrificing semantic fidelity.
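The early Best-of-N idea described above can be sketched in code. This is a minimal, hypothetical illustration: the text encoder, latent encoder, and denoiser below are toy stand-ins (not the actual NoisyCLIP components or any real diffusion library), and the step counts mirror the abstract's setup of scoring partway through a 50-step schedule.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)

def encode_text(prompt: str, dim: int = 8) -> np.ndarray:
    # Toy stand-in for a text encoder: deterministic pseudo-embedding
    # derived from a stable hash of the prompt.
    seed = int.from_bytes(hashlib.sha256(prompt.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).normal(size=dim)

def encode_latent(latent: np.ndarray) -> np.ndarray:
    # Toy stand-in for a noise-aware image encoder applied to a
    # partially denoised latent (the role NoisyCLIP plays).
    return latent

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def early_best_of_n(prompt: str, n: int = 4, total_steps: int = 50,
                    score_step: int = 25) -> int:
    """Run N denoising trajectories only up to an intermediate step,
    score each against the prompt in latent space, and return the index
    of the best candidate. Only that trajectory would then be denoised
    to completion, saving roughly (1 - score_step/total_steps) of the
    remaining candidates' compute."""
    text_emb = encode_text(prompt)
    scores = []
    for i in range(n):
        latent = rng.normal(size=text_emb.shape)  # initial pure noise
        for _ in range(score_step):
            # Toy denoiser: drift the latent toward a per-candidate target.
            target = np.random.default_rng(i).normal(size=latent.shape)
            latent += 0.05 * (target - latent)
        scores.append(cosine(encode_latent(latent), text_emb))
    return int(np.argmax(scores))
```

The key design point is that scoring happens at `score_step` rather than `total_steps`, which is where the roughly 50% cost reduction reported in the abstract comes from: the rejected candidates never run the second half of the schedule.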

Vasco Ramos, Regev Cohen, Idan Szpektor, Joao Magalhaes • 2025

Related benchmarks

Task                 | Dataset                                                    | Result                | Rank
Best-of-N Selection  | Noisy-Concept-Captions, Denoised Latent (iterations 21-30) | VQAScore: 71.8        | 4
Factual Consistency  | Noisy-Concept-Captions, Denoised Latent (iterations 21-30) | R@1: 50.6             | 4
Image-text alignment | GenAI-Bench Basic                                          | Alignment Score: 26.2 | 3
Image-text alignment | GenAI-Bench Advanced                                       | Alignment Score: 0.25 | 3
Best-of-N Selection  | Noisy-Concept-Captions, Final Image (iteration 50)         | --                    | 2
