VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance
About
Generating and editing images from open domain text prompts is a challenging task that heretofore has required expensive and specially trained models. We demonstrate a novel methodology for both tasks which is capable of producing images of high visual quality from text prompts of significant semantic complexity without any training, by using a multimodal encoder to guide image generations. We demonstrate on a variety of tasks how using CLIP [37] to guide VQGAN [11] produces higher visual quality outputs than prior, less flexible approaches like DALL-E [38], GLIDE [33], and Open-Edit [24], despite not being trained for the tasks presented. Our code is available in a public repository.
Katherine Crowson, Stella Biderman, Daniel Kornis, Dashiell Stander, Eric Hallahan, Louis Castricato, Edward Raff • 2022
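The core idea is training-free optimization: a frozen VQGAN decoder maps a latent tensor to an image, CLIP embeds that image and the text prompt into a shared space, and gradient descent updates only the latent so the two embeddings move closer together. Below is a minimal sketch of that loop. The `decode` and `embed_image` modules are tiny hypothetical stand-ins so the snippet runs on its own; the actual method uses a pretrained VQGAN decoder and CLIP's image/text encoders, and additionally re-quantizes the latent against the VQGAN codebook each step, which is omitted here.

```python
# Minimal sketch of the VQGAN-CLIP optimization loop.
# ASSUMPTIONS: `decode`, `embed_image`, and `text_embedding` are stand-ins so
# the sketch is self-contained; the real method uses a pretrained VQGAN
# decoder and CLIP's encoders, plus codebook re-quantization of the latent.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

decode = torch.nn.Sequential(            # stand-in for the VQGAN decoder
    torch.nn.ConvTranspose2d(256, 3, kernel_size=4, stride=4),  # 16x16 -> 64x64
    torch.nn.Sigmoid(),
)
embed_image = torch.nn.Sequential(       # stand-in for CLIP's image encoder
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 64 * 64, 512),
)
for p in (*decode.parameters(), *embed_image.parameters()):
    p.requires_grad_(False)              # both networks stay frozen, as in the method

text_embedding = F.normalize(torch.randn(512), dim=0)  # stand-in for CLIP(prompt)

# The latent grid is the only thing optimized; neither network is trained.
z = torch.randn(1, 256, 16, 16, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for step in range(100):
    image = decode(z)                                    # latents -> image
    img_emb = F.normalize(embed_image(image)[0], dim=0)  # image -> CLIP space
    # Squared spherical (great-circle) distance between the unit embeddings,
    # mirroring the loss used in the public VQGAN-CLIP notebooks.
    loss = (img_emb - text_embedding).norm().div(2).arcsin().pow(2).mul(2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 20 == 0:
        print(f"step {step:3d}  loss {loss.item():.4f}")
```

In the released implementation, the loss is additionally averaged over many randomly augmented crops ("cutouts") of the decoded image before CLIP embedding, which stabilizes the guidance signal.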
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Longitudinal Brain MRI Synthesis | ADNI (test) | SSIM: 0.7463 | 8 |
| Longitudinal Brain MRI Synthesis | Brain MRI 0 ≤ Δt < 12 months (test) | SSIM: 0.7553 | 7 |
| Longitudinal Brain MRI Synthesis | Brain MRI 12 ≤ Δt < 24 months (test) | SSIM: 0.7341 | 7 |
| Longitudinal Brain MRI Synthesis | Brain MRI 24 ≤ Δt < 36 months (test) | SSIM: 0.7303 | 7 |
| Longitudinal Brain MRI Synthesis | Brain MRI Δt ≥ 36 months (test) | SSIM: 0.7327 | 7 |
| Image Editing | Real Images | Editing Time: 1 | 5 |