# GLIGEN: Open-Set Grounded Text-to-Image Generation
## About

Large-scale text-to-image diffusion models have made remarkable advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN (Grounded-Language-to-Image Generation), a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text-to-image generation with caption and bounding-box condition inputs, and its grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN's zero-shot performance on COCO and LVIS outperforms that of existing supervised layout-to-image baselines by a large margin.
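The "new trainable layers via a gated mechanism" can be sketched as a gated self-attention block inserted between the frozen layers: visual tokens and grounding tokens attend to each other, and a learnable gate (initialized to zero) scales the result so the model starts out identical to the frozen backbone. The following is a minimal PyTorch sketch, not the official implementation; the class name, tensor shapes, and parameter names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GatedSelfAttention(nn.Module):
    """Illustrative sketch of a GLIGEN-style gated self-attention insert.

    The surrounding pre-trained layers stay frozen; only this module is
    trained. Because `gamma` is initialized to 0, the block is an identity
    mapping at the start of training, preserving the frozen model's output.
    """

    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Learnable scalar gate, initialized to zero.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, visual: torch.Tensor, grounding: torch.Tensor) -> torch.Tensor:
        # visual:    (B, N_v, dim) tokens from the frozen visual branch
        # grounding: (B, N_g, dim) encoded grounding tokens (e.g. box + phrase)
        x = self.norm(torch.cat([visual, grounding], dim=1))
        out, _ = self.attn(x, x, x)
        # Keep only the visual-token positions (token selection).
        out = out[:, : visual.shape[1]]
        # Gated residual: tanh(0) = 0 at init, so the block starts as identity.
        return visual + torch.tanh(self.gamma) * out
```

At initialization the block returns `visual` unchanged, which is what lets GLIGEN bolt grounding onto a pre-trained model without degrading its existing concept knowledge.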
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Human-Object Interaction Detection | HICO-DET (test) | -- | 493 |
| Object Detection | MS-COCO | AP 36.8 | 77 |
| Segmentation | ADE20K | mIoU 23.78 | 52 |
| Text-to-Image Generation | COCO 30k subset 2014 (val) | FID 21.04 | 46 |
| Grounded Text-to-Image Generation | COCO 2014 (val) | FID 5.61 | 26 |
| Object Detection | nuImages | mAP 36.3 | 20 |
| Medical Image Generation | MIMIC-CXR | FID 12.49 | 19 |
| Sketch-to-Image Generation | Sketchy (test) | FID 1.1 | 17 |
| Sketch-to-Image Generation | Sketchy In the Wild | FID 1.57 | 17 |
| Layout-to-Image Generation | COCO 2017 (val) | FID 21.04 | 14 |