CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers

About

The development of transformer-based text-to-image models is impeded by their slow generation and the complexity of high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel autoregressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, the Cross-Modal General Language Model (CogLM), and finetune it for fast super-resolution. The resulting text-to-image system, CogView2, shows generation quality very competitive with the concurrent state-of-the-art DALL-E 2, and naturally supports interactive text-guided editing of images.
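The core speed-up idea mentioned above, local parallel autoregressive generation, can be illustrated with a toy sketch: instead of emitting image tokens one at a time, the token grid is split into interleaved position groups, and each forward pass fills an entire group in parallel, conditioned on everything generated so far. The sketch below is an assumption-laden illustration of that scheduling idea only; the names `lopar_decode` and `toy_predict`, the diagonal grouping, and the stand-in model are all hypothetical and do not reflect CogView2's actual architecture or windowing.

```python
import numpy as np

def lopar_decode(grid_size, num_groups, predict_fn):
    """Toy sketch of local parallel autoregressive decoding.

    The H x W token grid is split into `num_groups` interleaved
    position groups; each forward pass fills one whole group in
    parallel, conditioned on all tokens filled so far. Generation
    therefore takes `num_groups` passes instead of H*W steps.
    """
    h, w = grid_size
    tokens = np.full((h, w), -1, dtype=int)  # -1 marks "not yet generated"
    # assign each position to a group via a simple diagonal interleave
    groups = np.add.outer(np.arange(h), np.arange(w)) % num_groups
    passes = 0
    for g in range(num_groups):
        mask = groups == g
        # one parallel prediction for every position in this group;
        # a real model would condition on the visible tokens here
        tokens[mask] = predict_fn(tokens, mask)
        passes += 1
    return tokens, passes

def toy_predict(tokens, mask):
    # stand-in "model": returns each masked slot's flat index as its token
    return np.flatnonzero(mask.ravel())

out, n_passes = lopar_decode((4, 4), num_groups=4, predict_fn=toy_predict)
```

With a 4x4 grid and 4 groups, the grid is fully populated after only 4 model passes rather than 16 sequential steps, which is where the claimed speed-up over token-by-token decoding comes from.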

Ming Ding, Wendi Zheng, Wenyi Hong, Jie Tang • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text-to-Image Generation | MS-COCO 2014 (val) | FID 17.7 | 128 |
| Text-to-Image Generation | MS-COCO (val) | FID 17.7 | 112 |
| Text-to-Image Generation | MS-COCO 256x256 (val) | -- | 53 |
| Text-to-Image Generation | COCO 30k subset 2014 (val) | FID 17.7 | 46 |
| Text-to-Image Synthesis | COCO (test) | FID 24 | 38 |
| Text-to-Image Generation | MS-COCO Captions 30,000 (val) | FID-0 17.5 | 21 |
| Text-to-Image Generation | Real User Prompts | Human Rank 6 | 6 |
