EMAG: Self-Rectifying Diffusion Sampling with Exponential Moving Average Guidance
About
In diffusion and flow-matching generative models, guidance techniques are widely used to improve sample quality and consistency. Classifier-free guidance (CFG) is the de facto choice in modern systems; it improves samples by contrasting conditional and unconditional predictions. Recent work instead contrasts against negative samples produced at inference by a weakened model, via strong/weak model pairs, attention-based masking, stochastic block dropping, or perturbations of the self-attention energy landscape. While these strategies refine generation quality, they offer little control over the granularity or difficulty of the negative samples, and the target layers are typically fixed in advance. We propose Exponential Moving Average Guidance (EMAG), a training-free mechanism that modifies attention at inference time in diffusion transformers, paired with a statistics-based, adaptive layer-selection rule. Unlike prior methods, EMAG produces harder, semantically faithful negatives (fine-grained degradations) that surface difficult failure modes and let the denoiser refine subtle artifacts, improving quality and raising the human preference score (HPS) by +0.46 over CFG. We further show that EMAG composes naturally with advanced guidance techniques such as APG and CADS, improving HPS further.
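The abstract gives no pseudocode, so the following is a minimal sketch of the mechanism as described: a running EMA of per-layer attention outputs serves as the weakened "negative" branch, a statistics-based rule picks which layers to degrade, and the final prediction extrapolates away from the EMA branch in CFG style. The function names (`ema_update`, `select_layers`, `emag_combine`), the deviation-based selection criterion, and the default hyperparameters are all assumptions for illustration, not the paper's actual API.

```python
import torch


def ema_update(ema, x, decay=0.9):
    """Running exponential moving average of one layer's attention output
    across denoising steps (ema is None at the first step)."""
    return x.detach() if ema is None else decay * ema + (1.0 - decay) * x.detach()


def select_layers(attn_outs, emas, k=4):
    """Hypothetical statistics-based selection rule: keep the k layers whose
    current attention output deviates most from its running EMA. The abstract
    only states the rule is statistics-based and adaptive; this particular
    criterion is an assumption."""
    scores = {i: (attn_outs[i] - emas[i]).norm().item()
              for i in attn_outs if emas.get(i) is not None}
    return sorted(scores, key=scores.get, reverse=True)[:k]


def emag_combine(eps_pos, eps_neg, w=2.0):
    """CFG-style extrapolation away from the degraded prediction: eps_neg
    would come from a forward pass in which the selected layers' attention
    outputs are replaced by their EMAs (the 'hard negative' branch)."""
    return eps_neg + w * (eps_pos - eps_neg)
```

In this reading, the EMA branch plays the role the unconditional branch plays in CFG: because over-smoothed attention yields a degraded but semantically faithful prediction, extrapolating away from it pushes samples off subtle failure modes rather than off the condition.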
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Class-conditional Image Generation | ImageNet 256x256 (val) | FID | 4.16 | 293 |
| Class-conditional Image Generation | ImageNet 512x512 (val) | FID (Val) | 7.59 | 69 |
| Text-to-Image Generation | COCO 2014 (val) | Precision | 68.5 | 25 |
| Unconditional Generation | COCO 2014 (val) | FID | 74.98 | 5 |
| Unconditional Generation | ImageNet 256 | FID | 29.85 | 5 |
| Unconditional Image Generation | ImageNet 512x512 (test) | FID | 45.01 | 5 |