ElasticDiffusion: Training-free Arbitrary Size Image Generation through Global-Local Content Separation
About
Diffusion models have revolutionized image generation in recent years, yet they are still limited to a few sizes and aspect ratios. We propose ElasticDiffusion, a novel training-free decoding method that enables pretrained text-to-image diffusion models to generate images of various sizes. ElasticDiffusion decouples the generation trajectory of a pretrained model into local and global signals. The local signal controls low-level pixel information and can be estimated on local patches, while the global signal maintains overall structural consistency and is estimated with a reference image. We test our method on CelebA-HQ (faces) and LAION-COCO (objects/indoor/outdoor scenes). Our experiments and qualitative results show superior image coherence across aspect ratios compared to MultiDiffusion and the standard decoding strategy of Stable Diffusion. Project page: https://elasticdiffusion.github.io/
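The local/global separation described above can be sketched in a toy form: estimate a local signal by averaging the model's noise predictions over overlapping patches, and a global signal by running the model on a reference resized to its native resolution and upsampling the result. This is a minimal illustration, not the paper's implementation; `toy_denoiser`, the patch sizes, and the 50/50 mixing weight are all stand-in assumptions (the real method uses a pretrained Stable Diffusion UNet and its own combination scheme).

```python
import numpy as np

def toy_denoiser(x):
    # Stand-in for a pretrained diffusion model's noise prediction at its
    # native resolution (hypothetical; the paper uses Stable Diffusion).
    return 0.9 * x

def local_signal(latent, patch=4, stride=2):
    """Local signal: average noise predictions over overlapping patches."""
    h, w = latent.shape
    acc = np.zeros_like(latent)
    cnt = np.zeros_like(latent)
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            acc[i:i + patch, j:j + patch] += toy_denoiser(latent[i:i + patch, j:j + patch])
            cnt[i:i + patch, j:j + patch] += 1
    return acc / np.maximum(cnt, 1)

def global_signal(latent, native=4):
    """Global signal: denoise a reference resized to the model's native size,
    then upsample back to the target size to keep overall structure."""
    h, w = latent.shape
    fy, fx = h // native, w // native
    ref = latent.reshape(native, fy, native, fx).mean(axis=(1, 3))  # downsample
    return np.kron(toy_denoiser(ref), np.ones((fy, fx)))            # upsample

# One decoding step at a non-native size (8x8 vs. the toy native 4x4):
# the global term keeps structure, the local term refines pixel detail.
latent = np.random.default_rng(0).normal(size=(8, 8))
eps = 0.5 * global_signal(latent) + 0.5 * local_signal(latent)
denoised = latent - 0.1 * eps
print(denoised.shape)
```

In a full decoding loop this combination would be applied at every denoising step, so the output stays coherent even at sizes and aspect ratios the model was never trained on.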
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Synthesis | CelebA-HQ (test) | FID | 225.9 | 19 |
| Image Generation | LAION-COCO Horizontal | FID | 22.85 | 18 |
| Image Generation | LAION-COCO Vertical | FID | 15.5 | 18 |
| Text-to-Image Generation | LAION-COCO | FID | 23.77 | 13 |