
Compositional Visual Generation with Composable Diffusion Models

About

Large text-guided diffusion models, such as DALLE-2, are able to generate stunning photorealistic images given natural language descriptions. While such models are highly flexible, they struggle to understand the composition of certain concepts, such as confusing the attributes of different objects or relations between objects. In this paper, we propose an alternative structured approach for compositional generation using diffusion models. An image is generated by composing a set of diffusion models, with each of them modeling a certain component of the image. To do this, we interpret diffusion models as energy-based models in which the data distributions defined by the energy functions may be explicitly combined. The proposed method can generate scenes at test time that are substantially more complex than those seen in training, composing sentence descriptions, object relations, human facial attributes, and even generalizing to new combinations that are rarely seen in the real world. We further illustrate how our approach may be used to compose pre-trained text-guided diffusion models and generate photorealistic images containing all the details described in the input descriptions, including the binding of certain object attributes that have been shown difficult for DALLE-2. These results point to the effectiveness of the proposed method in promoting structured generalization for visual generation. Project page: https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/
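The composition described above can be sketched numerically. In the conjunction ("AND") case, the composed noise prediction starts from the unconditional prediction and adds a weighted difference term per concept, analogous to combining classifier-free guidance across several conditions. The `eps_model` below is a hypothetical stand-in for a trained diffusion noise predictor, and the weight `w` is an assumed guidance scale; this is a minimal sketch of the composition rule, not the authors' implementation.

```python
import numpy as np

def eps_model(x, t, concept=None):
    """Stand-in for a trained diffusion noise predictor (hypothetical).

    concept=None plays the role of the unconditional prediction;
    a toy deterministic function so the sketch runs end to end.
    """
    rng = np.random.default_rng(0 if concept is None else hash(concept) % 2**32)
    return 0.1 * x + rng.normal(0.0, 0.01, size=x.shape)

def composed_eps(x, t, concepts, w=7.5):
    """Conjunction of concepts: unconditional score plus a weighted
    (conditional - unconditional) difference for each concept."""
    eps_uncond = eps_model(x, t, None)
    eps = eps_uncond.copy()
    for c in concepts:
        eps += w * (eps_model(x, t, c) - eps_uncond)
    return eps

x = np.zeros((4, 4))
eps = composed_eps(x, t=10, concepts=["a red car", "a snowy field"])
```

With a single concept and `w = 1`, the composed prediction reduces to the ordinary conditional prediction, which is a useful sanity check on the rule.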

Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, Joshua B. Tenenbaum • 2022

Related benchmarks

Task                                          | Dataset                                              | Metric                     | Result | Rank
Text-to-Image Generation                      | T2I-CompBench                                        | Shape Fidelity             | 32.99  | 94
Text-to-Image Generation                      | T2I-CompBench (test)                                 | Color Accuracy             | 40.63  | 67
Text-to-Image Generation                      | T2I-CompBench                                        | T2I-CompBench Score        | 0.08   | 27
Text-to-Image Generation                      | VISOR                                                | OA (%)                     | 23.27  | 21
Attribute Binding in Text-to-Image Generation | BLIP-VQA                                             | Color Binding              | 40.63  | 18
Text-to-Image Generation                      | VFN                                                  | Precision                  | 46.9   | 16
Text-to-Image Generation                      | T2I-CompBench                                        | Color Fidelity             | 0.4063 | 16
Text-to-Image Generation                      | UEC-256 (test)                                       | Precision                  | 9.5    | 10
Text-to-Image Synthesis                       | User study 20 questions (test)                       | User Preference Rate       | 2.5    | 7
Text-to-Image Generation                      | Experiment (ii) One-to-One Correspondence Prompt Set | Missing Objects (Lenient)  | 49.3   | 6

(showing 10 of 15 rows)
