
SVG-T2I: Scaling Up Text-to-Image Latent Diffusion Model Without Variational Autoencoder

About

Visual generation grounded in Visual Foundation Model (VFM) representations offers a highly promising unified pathway for integrating visual understanding, perception, and generation. Despite this potential, training large-scale text-to-image diffusion models entirely within the VFM representation space remains largely unexplored. To bridge this gap, we scale the SVG (Self-supervised representations for Visual Generation) framework, proposing SVG-T2I to support high-quality text-to-image synthesis directly in the VFM feature domain. By leveraging a standard text-to-image diffusion pipeline, SVG-T2I achieves competitive performance, reaching 0.75 on GenEval and 85.78 on DPG-Bench. This performance validates the intrinsic representational power of VFMs for generative tasks. We fully open-source the project, including the autoencoder and generation model, together with their training, inference, evaluation pipelines, and pre-trained weights, to facilitate further research in representation-driven visual generation.
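The core idea described above is to run a standard text-to-image diffusion pipeline directly on frozen VFM features rather than on VAE latents. The sketch below is not the authors' implementation; the module names (FrozenVFMEncoder, FeatureDiffusionTransformer, training_step), the layer sizes, and the flow-matching-style loss are illustrative assumptions meant only to show where a frozen VFM replaces the VAE in the training loop.

```python
# Minimal sketch (not the SVG-T2I code) of diffusion training in a frozen
# VFM feature space instead of a VAE latent space. All names, shapes, and
# the velocity-regression loss are placeholder assumptions.
import torch
import torch.nn as nn

class FrozenVFMEncoder(nn.Module):
    """Stand-in for a frozen self-supervised vision encoder (e.g., a ViT-style VFM)."""
    def __init__(self, feat_dim=768):
        super().__init__()
        self.proj = nn.Conv2d(3, feat_dim, kernel_size=16, stride=16)  # toy patchifier
        for p in self.parameters():
            p.requires_grad_(False)  # the VFM stays frozen; only the generator is trained

    def forward(self, images):                      # images: (B, 3, 256, 256)
        feats = self.proj(images)                   # (B, D, 16, 16)
        return feats.flatten(2).transpose(1, 2)     # (B, 256, D) token features

class FeatureDiffusionTransformer(nn.Module):
    """Stand-in denoiser over VFM token features, conditioned on a text embedding."""
    def __init__(self, feat_dim=768, text_dim=512):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, feat_dim)
        self.blocks = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(feat_dim, nhead=8, batch_first=True),
            num_layers=2)
        self.out = nn.Linear(feat_dim, feat_dim)

    def forward(self, noisy_feats, t, text_emb):
        # Crude conditioning: broadcast the timestep and prepend one text token.
        t_emb = t.view(-1, 1, 1).expand_as(noisy_feats)
        cond = self.text_proj(text_emb).unsqueeze(1)             # (B, 1, D)
        h = torch.cat([cond, noisy_feats + t_emb], dim=1)
        return self.out(self.blocks(h))[:, 1:]                   # drop the text token

def training_step(encoder, denoiser, images, text_emb):
    """One flow-matching-style step: regress the velocity from noise to VFM features."""
    with torch.no_grad():
        x1 = encoder(images)                                     # target VFM features
    x0 = torch.randn_like(x1)                                    # Gaussian noise
    t = torch.rand(x1.size(0), device=x1.device)                 # random timesteps
    xt = (1 - t.view(-1, 1, 1)) * x0 + t.view(-1, 1, 1) * x1     # linear interpolation
    v_pred = denoiser(xt, t, text_emb)
    return ((v_pred - (x1 - x0)) ** 2).mean()                    # velocity regression

if __name__ == "__main__":
    enc, net = FrozenVFMEncoder(), FeatureDiffusionTransformer()
    imgs, txt = torch.randn(2, 3, 256, 256), torch.randn(2, 512)
    loss = training_step(enc, net, imgs, txt)
    loss.backward()
    print(f"toy loss: {loss.item():.4f}")
```

In the actual pipeline the frozen encoder would be a pretrained VFM and a separate decoder would map generated features back to pixels; here every module is randomly initialized so the snippet runs standalone.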

Minglei Shi, Haolin Wang, Borui Zhang, Wenzhao Zheng, Bohan Zeng, Ziyang Yuan, Xiaoshi Wu, Yuanxing Zhang, Huan Yang, Xintao Wang, Pengfei Wan, Kun Gai, Jie Zhou, Jiwen Lu • 2025

Related benchmarks

Task                                 | Dataset                      | Metric           | Result | Rank
Class-conditional Image Generation   | ImageNet 256x256 (train)     | IS               | 264.9  | 345
Text-to-Image Generation             | GenEval (test)               | Two Obj. Acc     | 0.89   | 221
Text-to-Image Generation             | DPG (test)                   | Entity Fidelity  | 91     | 16
Image Reconstruction                 | ImageNet-1K 512x512 (test)   | FID              | 1.994  | 9
Image Tokenization                   | ImageNet-1K 512x512 (test)   | FID              | 1.994  | 9

Other info

GitHub
