
EMAG: Self-Rectifying Diffusion Sampling with Exponential Moving Average Guidance

About

In diffusion and flow-matching generative models, guidance techniques are widely used to improve sample quality and consistency. Classifier-free guidance (CFG) is the de facto choice in modern systems and works by contrasting conditional and unconditional predictions. Recent work instead contrasts against negative samples produced at inference by a weaker branch, obtained via strong/weak model pairs, attention-based masking, stochastic block dropping, or perturbations to the self-attention energy landscape. While these strategies refine generation quality, they offer little reliable control over the granularity or difficulty of the negative samples, and the target-layer selection is often fixed. We propose Exponential Moving Average Guidance (EMAG), a training-free mechanism that modifies attention at inference time in diffusion transformers, combined with a statistics-based, adaptive layer-selection rule. Unlike prior methods, EMAG produces harder yet semantically faithful negatives (fine-grained degradations) that surface difficult failure modes and let the denoiser correct subtle artifacts, improving quality and raising the human preference score (HPS) by +0.46 over CFG. We also show that EMAG composes naturally with advanced guidance techniques such as APG and CADS, yielding further HPS gains.
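The abstract leaves the precise update rule to the paper, but the guidance-by-contrast pattern it builds on is easy to sketch. The snippet below is a minimal, assumption-laden illustration rather than the authors' implementation: it imagines the negative branch as the same network evaluated with selected attention maps swapped for their exponential moving average over earlier sampling steps, and then extrapolates away from that branch exactly as CFG does. All names (`ema_update`, `guided_prediction`, `decay`, `guidance_scale`) are hypothetical placeholders, and the paper's statistics-based adaptive layer selection is not reproduced here.

```python
# Hedged sketch of EMA-based negative guidance; NOT the paper's code.
# The "negative" branch is assumed to come from patching selected attention
# maps with their exponential moving average (EMA) across sampling steps.

import torch


def ema_update(ema, value, decay=0.9):
    """EMA of an attention map (or other statistic) across denoising steps."""
    if ema is None:
        return value.detach().clone()
    return decay * ema + (1.0 - decay) * value.detach()


def guided_prediction(eps_pos, eps_neg, guidance_scale=3.0):
    """CFG-style extrapolation away from the weaker / negative branch."""
    return eps_neg + guidance_scale * (eps_pos - eps_neg)


# Toy loop: random tensors stand in for attention maps and model outputs.
torch.manual_seed(0)
ema_attn = None
for step in range(4):
    attn = torch.softmax(torch.randn(1, 8, 16, 16), dim=-1)       # attention from a selected layer
    ema_attn = ema_update(ema_attn, attn)                          # smoothed attention -> negative branch
    eps_pos = torch.randn(1, 4, 32, 32)                            # prediction with ordinary attention
    eps_neg = torch.randn(1, 4, 32, 32)                            # prediction with `ema_attn` patched in
    eps = guided_prediction(eps_pos, eps_neg, guidance_scale=3.0)  # the sampler would step with `eps`
```

In practice the two branches would come from two forward passes of the same diffusion transformer at each step, with the EMA-patched attention applied only in the adaptively selected layers.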

Ankit Yadav, Ta Duc Huy, Lingqiao Liu • 2025

Related benchmarks

Task                               | Dataset                 | Result          | Rank
Class-conditional Image Generation | ImageNet 256x256 (val)  | FID 4.16        | 293
Class-conditional Image Generation | ImageNet 512x512 (val)  | FID (Val) 7.59  | 69
Text-to-Image Generation           | COCO 2014 (val)         | Precision 68.5  | 25
Unconditional Generation           | COCO 2014 (val)         | FID 74.98       | 5
Unconditional Generation           | ImageNet 256            | FID 29.85       | 5
Unconditional Image Generation     | ImageNet 512x512 (test) | FID 45.01       | 5
