CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers
About
The development of transformer-based text-to-image models is impeded by their slow generation and high complexity for high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel autoregressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, the Cross-Modal General Language Model (CogLM), and finetune it for fast super-resolution. The new text-to-image system, CogView2, shows very competitive generation compared to the concurrent state-of-the-art DALL-E-2, and naturally supports interactive text-guided editing of images.
Ming Ding, Wendi Zheng, Wenyi Hong, Jie Tang• 2022
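The abstract's "local parallel auto-regressive generation" can be illustrated with a decoding schedule: tile the image-token grid into small local windows and, at each step, fill the same relative position in every window in parallel, so the number of sequential steps scales with the window size rather than the grid size. The sketch below is an illustrative simplification under that assumption (function name and exact ordering are hypothetical, not the paper's implementation):

```python
import numpy as np

def lopar_schedule(h, w, wh, ww):
    """Hypothetical sketch of a local-parallel autoregressive decoding order.

    The h x w token grid is tiled into non-overlapping wh x ww windows;
    at iteration k, every window fills the token at its k-th local
    position in parallel. Returns an (h, w) array mapping each grid
    position to the iteration in which it is generated.
    """
    order = np.empty((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            # local position inside the window determines the iteration
            order[i, j] = (i % wh) * ww + (j % ww)
    return order

# A 4x4 grid with 2x2 windows needs only 4 sequential steps
# instead of 16 for token-by-token decoding.
schedule = lopar_schedule(4, 4, 2, 2)
```

Under this scheme a 64x64 grid with 4x4 windows would take 16 sequential steps instead of 4096, which is the kind of speedup the paper attributes to its hierarchical, locally parallel design.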
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text-to-Image Generation | MS-COCO 2014 (val) | FID 17.7 | 128 |
| Text-to-Image Generation | MS-COCO (val) | FID 17.7 | 112 |
| Text-to-Image Generation | MS-COCO 256x256 (val) | -- | 53 |
| Text-to-Image Generation | COCO 30k subset 2014 (val) | FID 17.7 | 46 |
| Text-to-Image Synthesis | COCO (test) | FID 24 | 38 |
| Text-to-Image Generation | MS-COCO Captions 30,000 (val) | FID-0 17.5 | 21 |
| Text-to-Image Generation | Real User Prompts | Human Rank 6 | 6 |