Delta-K: Boosting Multi-Instance Generation via Cross-Attention Augmentation
About
While diffusion models excel at text-to-image synthesis, they often suffer from concept omission when synthesizing complex multi-instance scenes. Existing training-free methods attempt to resolve this by rescaling attention maps, which merely amplifies unstructured noise without establishing coherent semantic representations. To address this, we propose Delta-K, a backbone-agnostic, plug-and-play inference framework that tackles omission by operating directly in the shared cross-attention Key space. Specifically, using a vision-language model, we extract a differential key $\Delta K$ that encodes the semantic signature of the missing concepts. This signal is then injected during the early semantic planning stage of the diffusion process. Governed by a dynamically optimized scheduling mechanism, Delta-K grounds diffuse noise into stable structural anchors while preserving existing concepts. Extensive experiments demonstrate the generality of our approach: Delta-K consistently improves compositional alignment across both modern DiT models and classical U-Net architectures, without requiring spatial masks, additional training, or architectural modifications.
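The core mechanism can be sketched as follows: during the early denoising steps, the cross-attention Keys derived from the text prompt are augmented with the differential key $\Delta K$ before attention is computed. This is a minimal NumPy sketch, not the released implementation; the cutoff fraction `t_switch` and the fixed strength `lam` are illustrative placeholders for the paper's dynamically optimized schedule.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_with_delta_k(Q, K, V, delta_K, t, t_switch=0.3, lam=1.0):
    """Cross-attention with Key-space augmentation (illustrative sketch).

    Q:       (n_q, d) image-token queries
    K, V:    (n_k, d) text-token keys / values
    delta_K: (n_k, d) differential key encoding the missing concept
    t:       normalized denoising progress in [0, 1] (0 = start)
    t_switch, lam: hypothetical stand-ins for the dynamic schedule
    """
    if t < t_switch:
        # Inject Delta-K only during the early semantic planning stage.
        K = K + lam * delta_K
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)
    return attn @ V
```

Because only the Keys are modified, the mechanism needs no spatial masks and leaves the model weights untouched, which is what makes it backbone-agnostic.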
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | T2I-CompBench | Shape Fidelity | 59.84 | 185 |
| Compositional Image Generation | GenEval | Overall Score | 0.58 | 44 |
| Multi-Instance Generation | ConceptMix k=5 | Success Rate | 9 | 5 |
| Multi-Instance Generation | ConceptMix k=6 | Success Rate | 4 | 5 |
| Multi-Instance Generation | ConceptMix k=7 | Success Rate | 3 | 5 |