Zero-Shot Text-to-Image Generation
About
Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.
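The "single stream of data" idea can be illustrated with a minimal sketch: caption tokens and discrete image tokens are concatenated into one sequence, with image token ids offset so the two modalities share an embedding table, and the model is trained on next-token prediction over the combined stream. Vocabulary sizes and the helper names below are illustrative assumptions, not the paper's exact implementation (the paper pairs up to 256 BPE text tokens with 32×32 = 1024 discrete image tokens from a dVAE with an 8192-entry codebook).

```python
TEXT_VOCAB = 16384   # assumed text BPE vocabulary size
IMAGE_VOCAB = 8192   # discrete image codes (the paper's dVAE codebook size)

def build_stream(text_tokens, image_tokens):
    """Concatenate text and image tokens into one autoregressive stream.

    Image token ids are offset by TEXT_VOCAB so both modalities can share
    a single embedding table without id collisions.
    """
    return list(text_tokens) + [TEXT_VOCAB + t for t in image_tokens]

def next_token_pairs(stream):
    """Yield (context, target) pairs for next-token prediction training."""
    return [(stream[:i], stream[i]) for i in range(1, len(stream))]

# A toy caption of 3 text tokens followed by 2 image tokens:
stream = build_stream([5, 17, 3], [0, 4091])
```

At generation time the same model is conditioned on the text prefix and samples image tokens autoregressively, which are then decoded back to pixels.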
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever · 2021
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | MS-COCO 2014 (val) | FID | 27.5 | 128 |
| Text-to-Image Generation | MS-COCO (val) | FID | 17.89 | 112 |
| Image Reconstruction | ImageNet1K (val) | FID | 1.49 | 83 |
| Text-to-Image Generation | MS-COCO | FID | 27.5 | 75 |
| Image Reconstruction | COCO 2017 (val) | PSNR | 25.15 | 54 |
| Text-to-Image Generation | MS-COCO 256x256 (val) | FID | 17.89 | 53 |
| Text-to-Image Generation | COCO 30k subset 2014 (val) | FID | 17.89 | 46 |
| Text-to-Image Generation | MSCOCO 30K | FID | 27.5 | 42 |
| Text-to-Image Generation | MS COCO zero-shot | FID | 27.5 | 42 |
| Text-to-Image Generation | MS-COCO 30K (test) | FID | 27.5 | 41 |
(Showing 10 of 34 benchmark rows.)
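Most rows above use Fréchet Inception Distance (FID), which measures the distance between Gaussian fits to Inception features of real and generated images: ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}); lower is better. Below is a minimal sketch restricted to diagonal covariances (so the matrix square root reduces to an elementwise one); a full implementation would use the complete covariance matrices of Inception activations.

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """FID between two Gaussians under a diagonal-covariance assumption.

    The general formula is ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2});
    with diagonal covariances the trace term reduces to an elementwise
    expression: sum(var1 + var2 - 2 * sqrt(var1 * var2)).
    """
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    mean_term = float(np.sum((mu1 - mu2) ** 2))
    cov_term = float(np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))
    return mean_term + cov_term
```

Identical statistics give an FID of 0; the score grows as the generated-image feature distribution drifts from the real one.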