Training-Free Layout Control with Cross-Attention Guidance

About

Recent diffusion-based generators can produce high-quality images from textual prompts. However, they often disregard textual instructions that specify the spatial layout of the composition. We propose a simple approach that achieves robust layout control without the need for training or fine-tuning of the image generator. Our technique manipulates the cross-attention layers that the model uses to interface textual and visual information and steers the generation in the desired direction given, e.g., a user-specified layout. To determine how to best guide attention, we study the role of attention maps and explore two alternative strategies, forward and backward guidance. We thoroughly evaluate our approach on three benchmarks and provide several qualitative examples and a comparative analysis of the two strategies that demonstrate the superiority of backward guidance compared to forward guidance, as well as prior work. We further demonstrate the versatility of layout guidance by extending it to applications such as editing the layout and context of real images.
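As a rough illustration of the two strategies described above, the sketch below implements toy versions on a flattened spatial attention vector: forward guidance directly rescales a token's attention toward a user-provided layout mask and renormalizes, while backward guidance descends an energy that penalizes attention mass falling outside the mask. The function names, the energy form, and the finite-difference gradient on attention logits are simplifying assumptions for illustration; in the actual method the gradient is backpropagated through the cross-attention layers to the latent.

```python
import numpy as np

def forward_guidance(attn, mask, strength=0.5):
    """Forward guidance (toy): bias a token's spatial attention toward a
    binary layout mask by upweighting in-mask locations, then renormalize.
    attn: (H*W,) attention distribution; mask: (H*W,) 0/1 layout mask."""
    guided = attn * (1.0 + strength * mask)
    return guided / guided.sum()

def layout_energy(attn, mask):
    """Energy is low when the token's attention mass lies inside the mask."""
    return (1.0 - (attn * mask).sum() / attn.sum()) ** 2

def backward_guidance_step(logits, mask, lr=1.0, eps=1e-4):
    """Backward guidance (toy): one gradient-descent step on attention
    logits, with the gradient of the layout energy estimated by finite
    differences. The paper instead backpropagates this energy to the
    diffusion latent; logits stand in here to keep the sketch
    self-contained."""
    softmax = lambda z: np.exp(z - z.max()) / np.exp(z - z.max()).sum()
    base = layout_energy(softmax(logits), mask)
    grad = np.zeros_like(logits)
    for i in range(logits.size):
        bumped = logits.copy()
        bumped[i] += eps
        grad[i] = (layout_energy(softmax(bumped), mask) - base) / eps
    return logits - lr * grad
```

In this toy setting, forward guidance edits the attention map in one shot (cheap, but the rest of the network never adapts), whereas repeated backward-guidance steps move the underlying variables so that low energy, i.e. attention concentrated in the layout region, emerges on its own; this mirrors why the paper finds backward guidance more robust.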

Minghao Chen, Iro Laina, Andrea Vedaldi · 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text-to-Image Generation | T2I-CompBench++ | -- | -- | 65 |
| Text-to-Image Generation | VISOR | OA (%) | 40.01 | 21 |
| Layout-to-Image Generation | COCO-Position 2014 | AP | 1.75 | 12 |
| Sequential Image Editing | Multi-Edit Benchmark | BLEU-2 | 36.44 | 9 |
| Layout-to-Image Generation | DrawBench | Spatial Score | 45 | 8 |
| Object Counting | HRS-Bench | Precision | 87.25 | 8 |
| Grounding | HRS-Spatial | mIoU | 0.199 | 8 |
| Grounding | Custom Dataset | mIoU | 12.2 | 8 |
| Text-to-Image Generation | DrawBench | Spatial Fidelity (Human) | 53.13 | 8 |
| Grounding | MS-COCO 2014 | mIoU | 30.7 | 8 |

Showing 10 of 27 rows.
