
Diffusion Transformers with Representation Autoencoders

About

Latent generative modeling, where a pretrained autoencoder maps pixels into a latent space for the diffusion process, has become the standard strategy for Diffusion Transformers (DiT); however, the autoencoder component has barely evolved. Most DiTs continue to rely on the original VAE encoder, which introduces several limitations: outdated backbones that compromise architectural simplicity, low-dimensional latent spaces that restrict information capacity, and weak representations that result from purely reconstruction-based training and ultimately limit generative quality. In this work, we explore replacing the VAE with pretrained representation encoders (e.g., DINO, SigLIP, MAE) paired with trained decoders, forming what we term Representation Autoencoders (RAEs). These models provide both high-quality reconstructions and semantically rich latent spaces, while allowing for a scalable transformer-based architecture. Since these latent spaces are typically high-dimensional, a key challenge is enabling diffusion transformers to operate effectively within them. We analyze the sources of this difficulty, propose theoretically motivated solutions, and validate them empirically. Our approach achieves faster convergence without auxiliary representation alignment losses. Using a DiT variant equipped with a lightweight, wide DDT head, we achieve strong image generation results on ImageNet: 1.51 FID at 256x256 (no guidance) and 1.13 at both 256x256 and 512x512 (with guidance). RAE offers clear advantages and should be the new default for diffusion transformer training.
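To make the RAE idea concrete, here is a minimal numpy sketch of the data flow the abstract describes: a frozen pretrained representation encoder maps pixel patches into a high-dimensional latent space where the diffusion transformer operates, and a separately trained decoder maps latents back to pixels. The linear maps `W_enc` and `W_dec` are hypothetical stand-ins for the real networks (DINO/SigLIP/MAE encoders and a trained transformer decoder); only the shapes and the encode/decode roles reflect the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in dimensions: a 256x256 RGB image split into 256 patches of
# 16x16x3 = 768 pixels each, encoded into 768-d latent tokens (ViT-B-like).
num_tokens, patch_dim, latent_dim = 256, 768, 768

# Hypothetical stand-ins: a frozen pretrained encoder and a decoder that
# learned to invert it. Real RAEs use full transformers, not linear maps.
W_enc = rng.standard_normal((patch_dim, latent_dim)) / np.sqrt(patch_dim)
W_dec = np.linalg.pinv(W_enc)  # plays the role of the trained decoder

def encode(patches):
    # frozen encoder: pixels -> semantically rich, high-dimensional latents
    return patches @ W_enc

def decode(latents):
    # trained decoder: latents -> pixel patches
    return latents @ W_dec

patches = rng.standard_normal((num_tokens, patch_dim))
z = encode(patches)      # the diffusion transformer is trained in this space
recon = decode(z)        # reconstruction path used to read out images
print(z.shape)           # (256, 768): high-dimensional latent tokens
```

The key contrast with a VAE latent space is dimensionality: instead of compressing to a few channels, the RAE latents keep the encoder's full representation width, which is what makes diffusing in them challenging.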

Boyang Zheng, Nanye Ma, Shengbang Tong, Saining Xie • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Class-conditional Image Generation | ImageNet 256x256 | Inception Score (IS) | 309.4 | 441 |
| Image Generation | ImageNet 256x256 (val) | FID | 1.87 | 307 |
| Class-conditional Image Generation | ImageNet 256x256 (train) | IS | 262.9 | 305 |
| Class-conditional Image Generation | ImageNet 256x256 (val) | FID | 1.13 | 293 |
| Text-to-Image Generation | GenEval | GenEval Score | 71.27 | 277 |
| Image Generation | ImageNet 256x256 | FID | 1.13 | 243 |
| Class-conditional Image Generation | ImageNet 256x256 (train val) | FID | 1.28 | 178 |
| Class-conditional Image Generation | ImageNet 256x256 (test) | FID | 1.13 | 167 |
| Class-conditional Image Generation | ImageNet | -- | -- | 132 |
| Image Reconstruction | ImageNet 256x256 | rFID | 0.57 | 93 |

Showing 10 of 36 rows.
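Most results above are reported in FID (Fréchet Inception Distance), which measures the Fréchet distance between Gaussians fitted to real and generated image features; lower is better. As a reference for the metric itself (not the paper's evaluation code), here is a small numpy implementation of the standard formula, using an eigenvalue trick to avoid a matrix square root routine:

```python
import numpy as np

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))."""
    diff = mu1 - mu2
    # For PSD covariances, the eigenvalues of S1 @ S2 are real and
    # nonnegative, so Tr((S1 S2)^(1/2)) is the sum of their square roots.
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    covmean_trace = np.sqrt(np.clip(eigvals.real, 0, None)).sum()
    return diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2 * covmean_trace

# Identical distributions give an FID of 0.
mu, sigma = np.zeros(4), np.eye(4)
print(round(fid(mu, sigma, mu, sigma), 6))  # 0.0
```

In practice the means and covariances come from Inception-v3 features of tens of thousands of images, so reported FIDs also depend on the reference statistics used (train, val, or test split, as the table distinguishes).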
