
Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation

About

Diffusion models are the main driver of progress in image and video synthesis, but they suffer from slow inference. Distillation methods, like the recently introduced adversarial diffusion distillation (ADD), aim to shift the model from many-step to single-step inference, albeit at the cost of expensive and difficult optimization due to their reliance on a fixed pretrained DINOv2 discriminator. We introduce Latent Adversarial Diffusion Distillation (LADD), a novel distillation approach overcoming the limitations of ADD. In contrast to pixel-based ADD, LADD utilizes generative features from pretrained latent diffusion models. This approach simplifies training and enhances performance, enabling high-resolution multi-aspect-ratio image synthesis. We apply LADD to Stable Diffusion 3 (8B) to obtain SD3-Turbo, a fast model that matches the performance of state-of-the-art text-to-image generators using only four unguided sampling steps. Moreover, we systematically investigate its scaling behavior and demonstrate LADD's effectiveness in various applications such as image editing and inpainting.
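The core idea of the abstract, adversarial distillation performed in the latent space rather than in pixel space, can be illustrated with a minimal loss-only sketch. Everything below (shapes, the `discriminator` stand-in, the hinge loss choice) is a hypothetical illustration, not the paper's implementation; in LADD the discriminator is built on features of the pretrained latent diffusion teacher, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def hinge_d_loss(real_logits, fake_logits):
    # Hinge adversarial loss for the discriminator (a common choice in
    # ADD-style training; the exact loss here is an assumption).
    return (np.mean(np.maximum(0.0, 1.0 - real_logits))
            + np.mean(np.maximum(0.0, 1.0 + fake_logits)))

def adversarial_g_loss(fake_logits):
    # The student (generator) is trained so its latents score as "real".
    return -np.mean(fake_logits)

# Hypothetical shapes: a batch of 4 latents, 16 channels, 8x8 spatial.
# LADD discriminates latents from the teacher's autoencoder, not pixels.
real_latents = rng.standard_normal((4, 16, 8, 8))
fake_latents = rng.standard_normal((4, 16, 8, 8))  # student one-step samples

def discriminator(latents):
    # Placeholder scoring head; in LADD this sits on top of generative
    # features of the pretrained latent diffusion model.
    return latents.mean(axis=(1, 2, 3))

d_loss = hinge_d_loss(discriminator(real_latents), discriminator(fake_latents))
g_loss = adversarial_g_loss(discriminator(fake_latents))
```

Because both losses are computed on low-dimensional latents instead of decoded pixels, the discriminator never needs a separate pixel-space feature extractor such as DINOv2, which is the simplification the abstract highlights.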

Axel Sauer, Frederic Boesel, Tim Dockhorn, Andreas Blattmann, Patrick Esser, Robin Rombach• 2024

Related benchmarks

| Task                     | Dataset                        | Result              | Rank |
|--------------------------|--------------------------------|---------------------|------|
| Text-to-Image Generation | GenEval (test)                 | --                  | 221  |
| Video Generation         | UCF-101 (test)                 | --                  | 105  |
| Text-to-Image Generation | MS-COCO 2017 (val)             | FID 26.04           | 100  |
| Text-to-Image Generation | GenEval 1.0 (test)             | Overall Score 71.94 | 85   |
| Text-to-Image Generation | TIFA                           | TIFA 77.9           | 28   |
| Text to Image            | MJHQ 30K (test)                | PS (Perceptual Score) 21.7 | 18 |
| Text to Image            | COCO 30K (test)                | PS 22.8             | 18   |
| Text-to-Image Generation | COCO-10K                       | CLIP Score 0.3161   | 16   |
| Text-to-Image Generation | ShareGPT-4o-Image SD3.5-Large  | CLIP Score 35.048   | 3    |
