
Accelerating Diffusion Model Training under Minimal Budgets: A Condensation-Based Perspective

About

Diffusion models have achieved remarkable performance on a wide range of generative tasks, yet training them from scratch is notoriously resource-intensive, typically requiring millions of training images and many GPU days. Motivated by a data-centric view of this bottleneck, we adopt a condensation-based perspective: given a large training set, the goal is to construct a much smaller condensed dataset that still supports training strong diffusion models under minimal data and compute budgets. To operationalize this perspective, we introduce Diffusion Dataset Condensation (D2C), a two-phase framework comprising Select and Attach. In the Select phase, a diffusion difficulty score combined with interval sampling is used to identify a compact, informative training subset from the original data. Building on this subset, the Attach phase further strengthens the conditional signals by augmenting each selected image with rich semantic and visual representations. To our knowledge, D2C is the first framework that systematically investigates dataset condensation for diffusion models, whereas prior condensation methods have mainly targeted discriminative architectures. Extensive experiments across data budgets (0.8%-8% of ImageNet), model architectures, and image resolutions demonstrate that D2C dramatically accelerates diffusion model training while preserving high generative quality. On ImageNet 256x256 with SiT-XL/2, D2C attains an FID of 4.3 in just 40k steps using only 0.8% of the training images, corresponding to about 233x and 100x faster training than vanilla SiT-XL/2 and SiT-XL/2 + REPA, respectively.
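The Select phase described above ranks training images by a per-sample difficulty score and then draws an evenly spaced subset across the ranked list, so the condensed set spans easy, medium, and hard examples rather than clustering at one end. A minimal sketch of this interval-sampling idea, assuming a precomputed list of scores (the scoring function itself is a stand-in for the paper's model-based diffusion difficulty score):

```python
# Hypothetical sketch of interval sampling over difficulty-ranked data.
# The scores here are placeholders; D2C's actual diffusion difficulty
# score is computed from the diffusion model itself.

def interval_select(scores, budget):
    """Return indices of a `budget`-sized subset: rank samples by
    score (ascending), then take evenly spaced entries from the
    ranked list so the subset covers the full difficulty range."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    step = len(ranked) / budget
    return [ranked[int(i * step)] for i in range(budget)]

# Toy usage: 10 samples, keep a budget of 4 spread across difficulties.
scores = [0.9, 0.1, 0.5, 0.3, 0.7, 0.2, 0.8, 0.4, 0.6, 0.0]
subset = interval_select(scores, 4)
```

Evenly spaced picks over the ranked list are what distinguish this from simple top-k selection, which would keep only the hardest (or easiest) samples and lose coverage of the data distribution.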

Rui Huang, Shitong Shao, Zikai Zhou, Pukun Zhao, Hangyu Guo, Tian Ye, Lichen Bai, Shuo Yang, Zeke Xie • 2025

Related benchmarks

Task              Dataset                                       Result     Rank
Image Generation  ImageNet 512x512 (val)                        -          219
Image Generation  CIFAR-10                                      FID 3.95   203
Image Generation  ImageNet 256x256 (train)                      -          164
Image Generation  ImageNet-1K                                   FID 2.78   55
Image Generation  ImageNet 256x256 0.8% budget (10K) (train)    gFID 3.9   6
