
Zero-Shot Text-to-Image Generation

About

Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.
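The core idea above is that a caption and an image are flattened into one token sequence and modeled with ordinary next-token prediction. A minimal sketch of that single-stream construction follows; all vocabulary sizes and token values here are illustrative, not the paper's actual configuration:

```python
# Sketch of the single-stream idea: text tokens and discrete image tokens
# are concatenated into one sequence, and a transformer would be trained
# to predict each next token autoregressively. Vocab sizes are made up.

TEXT_VOCAB = 16384   # hypothetical text (BPE) vocabulary size
IMAGE_VOCAB = 8192   # hypothetical discrete image-code vocabulary size

def build_stream(text_tokens, image_tokens):
    """Concatenate text and image tokens into a single stream.

    Image tokens are offset by TEXT_VOCAB so the two vocabularies
    occupy disjoint ID ranges in the combined stream.
    """
    return list(text_tokens) + [t + TEXT_VOCAB for t in image_tokens]

def next_token_pairs(stream):
    """Autoregressive training pairs: predict token i from tokens before it."""
    return [(stream[:i], stream[i]) for i in range(1, len(stream))]

if __name__ == "__main__":
    text = [5, 42, 7]        # e.g. a BPE-encoded caption
    image = [100, 8191, 3]   # e.g. codes from a discrete image autoencoder
    stream = build_stream(text, image)
    print(stream)            # [5, 42, 7, 16484, 24575, 16387]
    print(next_token_pairs(stream)[0])  # ([5], 42)
```

At generation time the same model, conditioned only on the text-token prefix, samples image tokens one at a time until the image region of the stream is filled.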

Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Text-to-Image Generation | MS-COCO 2014 (val) | FID | 27.5 | 128
Text-to-Image Generation | MS-COCO (val) | FID | 17.89 | 112
Image Reconstruction | ImageNet1K (val) | FID | 1.49 | 83
Text-to-Image Generation | MS-COCO | FID | 27.5 | 75
Image Reconstruction | COCO 2017 (val) | PSNR | 25.15 | 54
Text-to-Image Generation | MS-COCO 256x256 (val) | FID | 17.89 | 53
Text-to-Image Generation | COCO 30k subset 2014 (val) | FID | 17.89 | 46
Text-to-Image Generation | MSCOCO 30K | FID | 27.5 | 42
Text-to-Image Generation | MS COCO zero-shot | FID | 27.5 | 42
Text-to-Image Generation | MS-COCO 30K (test) | FID | 27.5 | 41
Showing 10 of 34 rows
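The table reports two metrics: FID (Fréchet Inception Distance, lower is better) for generation quality and PSNR (peak signal-to-noise ratio, higher is better) for reconstruction fidelity. As a quick reference, PSNR in decibels is computed directly from the mean squared error between two images; a minimal sketch, assuming 8-bit pixel values:

```python
import math

def psnr(mse, max_val=255.0):
    """Peak signal-to-noise ratio in dB from a mean squared error.

    max_val is the maximum possible pixel value (255 for 8-bit images).
    """
    return 10.0 * math.log10(max_val ** 2 / mse)

print(round(psnr(100.0), 2))  # MSE of 100 on 8-bit images -> 28.13 dB
```

FID, by contrast, compares distributions of deep features between generated and real image sets, so it cannot be computed from a single image pair.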
