
Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models

About

Latent diffusion models with Transformer architectures excel at generating high-fidelity images. However, recent studies reveal an optimization dilemma in this two-stage design: while increasing the per-token feature dimension in visual tokenizers improves reconstruction quality, it requires substantially larger diffusion models and more training iterations to achieve comparable generation performance. Consequently, existing systems often settle for sub-optimal solutions, either producing visual artifacts due to information loss within tokenizers or failing to converge fully due to expensive computation costs. We argue that this dilemma stems from the inherent difficulty in learning unconstrained high-dimensional latent spaces. To address this, we propose aligning the latent space with pre-trained vision foundation models when training the visual tokenizers. Our proposed VA-VAE (Vision foundation model Aligned Variational AutoEncoder) significantly expands the reconstruction-generation frontier of latent diffusion models, enabling faster convergence of Diffusion Transformers (DiT) in high-dimensional latent spaces. To exploit the full potential of VA-VAE, we build an enhanced DiT baseline with improved training strategies and architecture designs, termed LightningDiT. The integrated system achieves state-of-the-art (SOTA) performance on ImageNet 256x256 generation with an FID score of 1.35 while demonstrating remarkable training efficiency by reaching an FID score of 2.11 in just 64 epochs--representing an over 21 times convergence speedup compared to the original DiT. Models and codes are available at: https://github.com/hustvl/LightningDiT.
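The core idea of VA-VAE is to regularize the tokenizer's latent space by aligning it with frozen features from a pre-trained vision foundation model (e.g. DINOv2). The paper's actual vision-foundation-model (VF) loss combines several terms with adaptive weighting; the sketch below is only an illustrative assumption showing the simplest ingredient, a per-token cosine-similarity alignment between projected VAE latents and foundation-model features (the function name, margin parameter, and shapes are hypothetical, not the released implementation):

```python
import numpy as np

def vf_alignment_loss(latents, vf_feats, margin=0.0):
    """Hypothetical sketch of a VF alignment loss.

    Encourages each latent token to point in the same direction as the
    corresponding frozen foundation-model feature via cosine similarity.

    latents:  (N, d) VAE latent tokens, already projected to the VF feature dim.
    vf_feats: (N, d) frozen foundation-model features (e.g. from DINOv2).
    margin:   similarity slack; tokens with cosine >= 1 - margin incur no loss.
    """
    # L2-normalize both sets of vectors row-wise.
    a = latents / np.linalg.norm(latents, axis=1, keepdims=True)
    b = vf_feats / np.linalg.norm(vf_feats, axis=1, keepdims=True)
    cos = np.sum(a * b, axis=1)  # per-token cosine similarity in [-1, 1]
    # Hinge: only penalize tokens whose similarity falls below 1 - margin.
    return float(np.mean(np.maximum(0.0, 1.0 - margin - cos)))
```

In training, a term like this would be added to the usual VAE reconstruction and KL objectives, so the high-dimensional latent space inherits the semantic structure of the foundation model rather than being learned unconstrained.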

Jingfeng Yao, Bin Yang, Xinggang Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Class-conditional Image Generation | ImageNet 256x256 | Inception Score (IS) | 295.3 | 441 |
| Image Generation | ImageNet 256x256 (val) | FID | 5.3 | 307 |
| Class-conditional Image Generation | ImageNet 256x256 (train) | IS | 295.3 | 305 |
| Class-conditional Image Generation | ImageNet 256x256 (val) | FID | 1.35 | 293 |
| Text-to-Image Generation | GenEval | GenEval Score | 76.16 | 277 |
| Image Generation | ImageNet 256x256 | FID | 1.35 | 243 |
| Image Generation | ImageNet (val) | FID | 2.86 | 198 |
| Class-conditional Image Generation | ImageNet 256x256 (train val) | FID | 1.35 | 178 |
| Text-to-Image Generation | GenEval (test) | Two Obj. Acc | 33.8 | 169 |
| Class-conditional Image Generation | ImageNet 256x256 (test) | FID | 1.28 | 167 |

Showing 10 of 37 rows.

Other info

Code: https://github.com/hustvl/LightningDiT