
GLIGEN: Open-Set Grounded Text-to-Image Generation

About

Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN's zero-shot performance on COCO and LVIS outperforms that of existing supervised layout-to-image baselines by a large margin.
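The gated injection mechanism described above can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the class and parameter names are hypothetical, and the attention configuration is assumed. The key idea it demonstrates is that the frozen backbone's visual tokens attend jointly with new grounding tokens inside a trainable layer whose output is added through a gate initialized to zero, so the pretrained model's behavior is preserved exactly at the start of training.

```python
import torch
import torch.nn as nn


class GatedSelfAttention(nn.Module):
    """Sketch of a GLIGEN-style gated grounding layer (names hypothetical).

    Visual tokens `x` from the frozen backbone attend jointly with
    grounding tokens `g`; the result is added back through a tanh gate
    whose learnable scale starts at zero, so at initialization the layer
    is an identity on `x` and the pretrained weights' output is unchanged.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Learnable gate scale, initialized to zero: tanh(0) = 0.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x: (B, N, D) visual tokens; g: (B, M, D) grounding tokens.
        tokens = torch.cat([x, g], dim=1)
        h = self.norm(tokens)
        out, _ = self.attn(h, h, h)
        # Keep only the visual positions; inject through the gate.
        return x + torch.tanh(self.gamma) * out[:, : x.size(1)]
```

Because the gate starts at zero, inserting such a layer into a pretrained network leaves its outputs untouched until training moves `gamma` away from zero, which is what lets the new grounding inputs be learned without disturbing the model's existing concept knowledge.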

Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, Yong Jae Lee • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Human-Object Interaction Detection | HICO-DET (test) | – | – | 493 |
| Object Detection | MS-COCO | AP | 36.8 | 77 |
| Segmentation | ADE20K | mIoU | 23.78 | 52 |
| Text-to-Image Generation | COCO 30k subset 2014 (val) | FID | 21.04 | 46 |
| Grounded Text-to-Image Generation | COCO 2014 (val) | FID | 5.61 | 26 |
| Object Detection | nuImages | mAP | 36.3 | 20 |
| Medical Image Generation | MIMIC-CXR | FID | 12.49 | 19 |
| Sketch-to-Image Generation | Sketchy (test) | FID | 1.1 | 17 |
| Sketch-to-Image Generation | Sketchy In the Wild | FID | 1.57 | 17 |
| Layout-to-Image Generation | COCO 2017 (val) | FID | 21.04 | 14 |

Showing 10 of 85 rows.

Other info

Code
