
Modeling Image Composition for Complex Scene Generation

About

We present a method that achieves state-of-the-art results on challenging (few-shot) layout-to-image generation tasks by accurately modeling the textures, structures, and relationships contained in a complex scene. After compressing RGB images into patch tokens, we propose the Transformer with Focal Attention (TwFA) to explore object-to-object, object-to-patch, and patch-to-patch dependencies. Unlike existing CNN-based and Transformer-based generation models, which entangle modeling at the pixel and patch levels or at the object and patch levels respectively, the proposed focal attention predicts the current patch token by attending only to its highly related tokens, as specified by the spatial layout, thereby achieving disambiguation during training. Furthermore, TwFA greatly improves data efficiency during training, which enables the first few-shot complex scene generation strategy, built on a well-trained TwFA. Comprehensive experiments show the superiority of our method, which significantly improves both quantitative metrics and qualitative visual realism relative to state-of-the-art CNN-based and Transformer-based methods. Code is available at https://github.com/JohnDreamer/TwFA.
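The core idea of focal attention — letting each patch token attend only to layout-related tokens — can be illustrated as a sparse attention mask. The sketch below is a hypothetical illustration, not the authors' implementation: the function name, box representation, and the specific rule (a patch attends to objects overlapping it, and causally to earlier patches that share an overlapping object) are assumptions for exposition.

```python
import numpy as np

def focal_attention_mask(patch_boxes, object_boxes):
    """Build a boolean attention mask in the spirit of focal attention
    (a simplified sketch under assumed conventions).

    Each patch token may attend only to:
      * object (layout) tokens whose bounding box overlaps that patch, and
      * earlier patch tokens sharing at least one overlapping object
        (plus itself), giving causal, layout-guided sparsity.

    patch_boxes:  (P, 4) array of [x0, y0, x1, y1] per image patch.
    object_boxes: (O, 4) array of [x0, y0, x1, y1] per layout object.
    Returns a (P, O + P) boolean mask; True = attention allowed.
    """
    P = len(patch_boxes)

    def overlaps(a, b):
        # axis-aligned box intersection test
        return not (a[2] <= b[0] or b[2] <= a[0] or
                    a[3] <= b[1] or b[3] <= a[1])

    # patch-to-object: attend to objects covering this patch
    p2o = np.array([[overlaps(p, o) for o in object_boxes]
                    for p in patch_boxes], dtype=bool)

    # patch-to-patch: causal, and only if the two patches share an object
    share = (p2o.astype(int) @ p2o.astype(int).T) > 0
    causal = np.tril(np.ones((P, P), dtype=bool))
    p2p = share & causal
    np.fill_diagonal(p2p, True)  # every patch always attends to itself

    return np.concatenate([p2o, p2p], axis=1)
```

Applying this mask inside a standard Transformer decoder (setting masked-out logits to negative infinity before the softmax) restricts each prediction to its layout-relevant context, which is the disambiguation effect the abstract describes.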

Zuopeng Yang, Daqing Liu, Chaoyue Wang, Jie Yang, Dacheng Tao• 2022

Related benchmarks

Task                       | Dataset                   | Metric | Result | Rank
Layout-to-Image Synthesis  | Visual Genome (VG) (test) | FID    | 17.74  | 35
Layout-to-Image Synthesis  | COCO-Stuff (test)         | FID    | 22.15  | 25
Layout-to-Image Generation | COCO 2017 (val)           | FID    | 22.15  | 14
