
FasterDiT: Towards Faster Diffusion Transformers Training without Architecture Modification

About

Diffusion Transformers (DiT) have attracted significant attention in research. However, they suffer from a slow convergence rate. In this paper, we aim to accelerate DiT training without any architectural modification. We identify two issues in the training process: first, certain training strategies do not perform consistently well across different data; second, the effectiveness of supervision at specific timesteps is limited. In response, we make the following contributions: (1) We introduce a new perspective for interpreting the failure of these strategies. Specifically, we slightly extend the definition of the Signal-to-Noise Ratio (SNR) and propose observing the Probability Density Function (PDF) of the SNR to understand why a strategy is, or is not, robust across data. (2) We conduct extensive experiments, reporting over one hundred results, to empirically distill a unified acceleration strategy from the PDF perspective. (3) We develop a new supervision method that further accelerates DiT training. Building on these findings, we propose FasterDiT, an exceedingly simple and practical design strategy. With a few lines of code modification, it achieves 2.30 FID on ImageNet at 256 resolution in 1000k iterations, comparable to DiT (2.27 FID) but 7 times faster to train.
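To make the two ideas concrete, here is a minimal, hypothetical sketch in PyTorch. It is not the authors' released code: the linear interpolation schedule, the `signal_std` argument to `extended_snr`, and the `lam`-weighted direction term in `velocity_loss` are all assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def extended_snr(alpha_t, sigma_t, signal_std=1.0):
    # Assumed reading of the "slightly extended" SNR: scale the signal
    # term by the data's standard deviation, so the ratio reflects the
    # actual signal intensity rather than the noise schedule alone.
    return (alpha_t * signal_std) ** 2 / sigma_t ** 2

# Observe the empirical PDF of log-SNR induced by the timestep sampler.
t = torch.rand(100_000).clamp(1e-4, 1 - 1e-4)   # uniform t in (0, 1)
alpha_t, sigma_t = 1.0 - t, t                    # assumed linear schedule
log_snr = extended_snr(alpha_t, sigma_t, signal_std=0.5).log()
pdf, edges = torch.histogram(log_snr, bins=100, density=True)

def velocity_loss(v_pred, v_target, lam=1.0):
    # Standard velocity-prediction MSE plus a direction term (hypothetical
    # form): penalize the angle between predicted and target velocities.
    mse = F.mse_loss(v_pred, v_target)
    cos = F.cosine_similarity(v_pred.flatten(1), v_target.flatten(1), dim=1)
    return mse + lam * (1.0 - cos).mean()
```

Under this view, a training strategy is data-robust when the log-SNR PDF it induces covers the informative range for that dataset; the histogram above is the kind of diagnostic the abstract refers to.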

Jingfeng Yao, Wang Cheng, Wenyu Liu, Xinggang Wang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Class-conditional Image Generation | ImageNet 256x256 | Inception Score (IS) | 264 | 441 |
| Image Generation | ImageNet 256x256 (val) | FID | 2.03 | 307 |
| Image Generation | ImageNet 256x256 | FID | 2.03 | 243 |
| Image Reconstruction | ImageNet 256x256 | rFID | 0.61 | 93 |
| Image Reconstruction | ImageNet 256x256 (val) | rFID | 0.61 | 36 |
| Class-conditional Image Generation | ImageNet-1K 256x256 1.0 (train) | gFID | 2.03 | 35 |
| Class-to-image generation | ImageNet 256x256 | FID | 7.91 | 15 |
