
Illiterate DALL-E Learns to Compose

About

Although DALL-E has shown an impressive ability to perform composition-based systematic generalization in image generation, it requires a dataset of text-image pairs, and the compositionality is provided by the text. In contrast, object-centric representation models such as the Slot Attention model learn composable representations without a text prompt. However, unlike DALL-E, their ability to systematically generalize for zero-shot generation is significantly limited. In this paper, we propose a simple but novel slot-based autoencoding architecture, called SLATE, that combines the best of both worlds: learning object-centric representations that allow systematic generalization in zero-shot image generation without text. As such, this model can also be seen as an illiterate DALL-E model. Unlike the pixel-mixture decoders of existing object-centric representation models, we propose to use the Image GPT decoder, conditioned on the slots, to capture complex interactions among the slots and pixels. In experiments, we show that this simple and easy-to-implement architecture, which requires no text prompt, achieves significant improvement in in-distribution and out-of-distribution (zero-shot) image generation, and learns slot-attention structure that is qualitatively comparable to or better than that of models based on mixture decoders.
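The slot-based encoder described above rests on the Slot Attention mechanism, in which a fixed set of slot vectors compete for input features via a softmax taken over the slot axis, and each slot is then updated from its attention-weighted mean of the inputs. Below is a minimal NumPy sketch of that competition step; it is illustrative only, omitting the learned query/key/value projections, LayerNorm, GRU update, and residual MLP of the full method, and all names and shapes are assumptions rather than the paper's implementation.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(inputs, num_slots=4, iters=3, seed=0):
    """Simplified Slot Attention (identity projections, no GRU/MLP).

    inputs: (n, d) array of flattened image features.
    Returns (num_slots, d) slot representations.
    """
    rng = np.random.default_rng(seed)
    n, d = inputs.shape
    # Slots are initialized randomly; the learned Gaussian init is omitted.
    slots = rng.normal(size=(num_slots, d))
    for _ in range(iters):
        logits = inputs @ slots.T / np.sqrt(d)   # (n, num_slots)
        # Softmax over the slot axis makes slots compete for each input.
        attn = softmax(logits, axis=1)
        # Normalize per slot so the update is a weighted mean of inputs.
        attn = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)
        slots = attn.T @ inputs                  # (num_slots, d)
    return slots
```

In SLATE, representations of this kind would then condition an autoregressive Image GPT-style decoder over discrete image tokens, rather than a per-slot pixel-mixture decoder.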

Gautam Singh, Fei Deng, Sungjin Ahn • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
---- | ------- | ------ | ------ | ----
Unsupervised Object Segmentation | COCO | mBOi | 29.1 | 26
Unsupervised Object Segmentation | MOVi-C | FG-ARI | 49.5 | 18
Object-Centric Learning | Pascal | MBO^i | 35.9 | 18
Object-Centric Learning | MOVi-C | MBO^i | 39.4 | 17
Unsupervised Object Segmentation | Pascal | MBO^i | 0.359 | 17
Object Discovery | COCO | FG-ARI | 0.325 | 13
Object-Centric Learning | MOVi-E | MBO^i | 30.2 | 13
Object Discovery | VOC | FG-ARI | 15.6 | 12
Unsupervised Object Segmentation | MOVi-E | MBO^i | 30.2 | 8
Object-Centric Learning | COCO 2017 | MBO^i | 29.1 | 8

Showing 10 of 13 rows
