
Delta-K: Boosting Multi-Instance Generation via Cross-Attention Augmentation

About

While diffusion models excel at text-to-image synthesis, they often suffer from concept omission when synthesizing complex multi-instance scenes. Existing training-free methods attempt to resolve this by rescaling attention maps, which merely amplifies unstructured noise without establishing coherent semantic representations. To address this, we propose Delta-K, a backbone-agnostic, plug-and-play inference framework that tackles omission by operating directly in the shared cross-attention key space. Specifically, using a vision-language model, we extract a differential key $\Delta K$ that encodes the semantic signature of missing concepts. This signal is then injected during the early semantic planning stage of the diffusion process. Governed by a dynamically optimized scheduling mechanism, Delta-K grounds diffuse noise into stable structural anchors while preserving existing concepts. Extensive experiments demonstrate the generality of our approach: Delta-K consistently improves compositional alignment across both modern DiT models and classical U-Net architectures, without requiring spatial masks, additional training, or architectural modifications.
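The core mechanism described above can be sketched in a few lines: augment the cross-attention keys with $\Delta K$, but only during the early denoising steps and under a decaying schedule. Note this is a minimal illustrative sketch, not the paper's exact formulation; the additive injection, the linear schedule, and the cutoff `tau` and gain `lam` are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q, K, V):
    # Standard scaled dot-product cross-attention.
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def delta_k_attention(Q, K, V, delta_K, t, T, tau=0.3, lam=1.0):
    """Cross-attention with Delta-K-style key-space injection.

    During the early "semantic planning" steps (t/T < tau), the keys are
    shifted by a scheduled multiple of delta_K, which encodes the missing
    concepts. The linear decay (1 - t/T) and the values of tau and lam are
    illustrative placeholders for the paper's optimized schedule.
    """
    if t / T < tau:
        K = K + lam * (1.0 - t / T) * delta_K
    return cross_attention(Q, K, V)
```

Because the injection leaves queries, values, and the attention operator untouched, a wrapper like this can in principle be dropped into any backbone's cross-attention layers, consistent with the backbone-agnostic claim.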

Zitong Wang, Zijun Shen, Haohao Xu, Zhengjie Luo, Weibin Wu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | T2I-CompBench | Shape Fidelity | 59.84 | 185 |
| Compositional Image Generation | GenEval | Overall Score | 0.58 | 44 |
| Multi-Instance Generation | ConceptMix k=5 | Success Rate | 9 | 5 |
| Multi-Instance Generation | ConceptMix k=6 | Success Rate | 4 | 5 |
| Multi-Instance Generation | ConceptMix k=7 | Success Rate | 3 | 5 |
