
Controllable Text-to-Image Generation

About

In this paper, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can effectively synthesise high-quality images and also control parts of the image generation according to natural language descriptions. To achieve this, we introduce a word-level spatial and channel-wise attention-driven generator that can disentangle different visual attributes and allow the model to focus on generating and manipulating the subregions corresponding to the most relevant words. Also, a word-level discriminator is proposed to provide fine-grained supervisory feedback by correlating words with image regions, facilitating the training of an effective generator that can manipulate specific visual attributes without affecting the generation of other content. Furthermore, a perceptual loss is adopted to reduce the randomness involved in the image generation and to encourage the generator to manipulate the specific attributes required in the modified text. Extensive experiments on benchmark datasets demonstrate that our method outperforms the existing state of the art and is able to effectively manipulate synthetic images using natural language descriptions. Code is available at https://github.com/mrlibw/ControlGAN.
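The word-level attention the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the shapes are simplified, and the projection matrix `W` in the channel-wise branch stands in for a learned layer (a random matrix is used here purely for illustration). Spatial attention attends over words for each image location; channel-wise attention attends over words for each feature channel.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def word_spatial_attention(words, feats):
    """Attend over words for each spatial location.
    words: (T, D) word embeddings; feats: (N, D) image features, N = H*W.
    Returns (N, D): a word-context vector per spatial location."""
    scores = feats @ words.T           # (N, T) region-word correlations
    attn = softmax(scores, axis=1)     # normalise over the T words
    return attn @ words                # (N, D) weighted word context

def word_channel_attention(words, feats):
    """Attend over words for each feature channel.
    Each of the D channels (a length-N column of feats) acts as a query.
    The projection of words to length N is a stand-in for a learned
    layer in the real model (random here, for illustration only)."""
    T, D = words.shape
    N = feats.shape[0]
    rng = np.random.default_rng(0)
    W = rng.standard_normal((D, N)) / np.sqrt(D)   # hypothetical projection
    words_proj = words @ W             # (T, N)
    scores = feats.T @ words_proj.T    # (D, T) channel-word correlations
    attn = softmax(scores, axis=1)     # normalise over the T words
    return attn @ words_proj           # (D, N) channel-wise word context
```

In the paper's framing, the two branches let the generator bind individual words to the spatial subregions and feature channels they most affect, which is what makes attribute-level manipulation possible without regenerating unrelated content.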

Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, Philip H. S. Torr • 2019

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text-to-Image Synthesis | MS-COCO (val) | – | 35 |
| Text-to-Image Generation | Multi-modal CelebA-HQ | FID 116.3 | 19 |
| Facial Image Generation | DISFA | FID 7.756 | 11 |
| Facial Image Generation | BP4D | FID 9.619 | 11 |
| Text-to-Image Synthesis | MM-CelebA-HQ 256x256 | FID 116.3 | 7 |
| Text-to-Image Generation | CUB-200-2011 (test) | Inception Score 4.58 | 3 |
| Text-to-Image Generation | COCO (val) | IS 24.06 | 3 |
