
Grounded Text-to-Image Synthesis with Attention Refocusing

About

Driven by scalable diffusion models trained on large-scale datasets, text-to-image synthesis methods have shown compelling results. However, these models still fail to precisely follow text prompts involving multiple objects, attributes, or spatial compositions. In this paper, we identify potential causes in the diffusion model's cross-attention and self-attention layers. We propose two novel losses that refocus attention maps according to a given spatial layout during sampling. Because creating layouts manually requires additional effort and can be tedious, we explore using large language models (LLMs) to produce these layouts for our method. We conduct extensive experiments on the DrawBench, HRS, and TIFA benchmarks to evaluate our proposed method, and we show that the proposed attention refocusing effectively improves the controllability of existing approaches.
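The core idea of refocusing cross-attention toward a layout can be illustrated with a toy loss that penalizes attention mass falling outside an object's box. This is a minimal NumPy sketch under assumptions of my own (single token, single attention map, a simple outside-mass penalty), not the paper's exact loss formulation; the function and variable names are illustrative.

```python
import numpy as np

def refocus_loss(attn_map, box_mask):
    """Penalize cross-attention mass outside a token's layout box.

    attn_map: (H, W) non-negative attention weights for one object token.
    box_mask: (H, W) binary mask, 1 inside the object's box.
    Returns a scalar in [0, 1]; 0 when all attention is inside the box.
    Illustrative sketch only, not the paper's formulation.
    """
    total = attn_map.sum()
    inside = (attn_map * box_mask).sum()
    return 1.0 - inside / max(total, 1e-8)

# Toy example: 4x4 attention map, box covering the top-left 2x2 region.
attn = np.full((4, 4), 0.05)
attn[:2, :2] = 0.2          # most attention mass lands inside the box
attn /= attn.sum()          # normalize to a distribution
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0
print(round(refocus_loss(attn, mask), 3))
```

During sampling, a loss like this would be differentiated with respect to the latents and used to nudge attention toward the desired regions at each denoising step.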

Quynh Phung, Songwei Ge, Jia-Bin Huang • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Generation | CVGBench-m | Subject Consistency | 97.48 | 16 |
| Video Generation | CVGBench-p | Subject Consistency | 97.9 | 16 |
| Object Counting | HRS-Bench | Precision | 87.93 | 8 |
| Spatial Reasoning | HRS | Accuracy | 48.29 | 8 |
| Spatial Reasoning | NSR-1K | Accuracy | 69.28 | 8 |
| Numerical Reasoning | HRS | Precision | 77.43 | 8 |
| Grounding Accuracy | HRS | Spatial Accuracy | 24.45 | 8 |
| Grounding Accuracy | DrawBench | Spatial | 43.5 | 8 |
| Numerical Reasoning | NSR-1K | Precision | 84.61 | 8 |
| Layout-to-Image Generation | DrawBench | Spatial Score | 40 | 8 |

Showing 10 of 19 rows.
