
From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers

About

Diffusion Transformers (DiT) have revolutionized high-fidelity image and video synthesis, yet their computational demands remain prohibitive for real-time applications. Feature caching has been proposed to accelerate diffusion models by caching features from previous timesteps and reusing them in the following timesteps. However, at timesteps separated by large intervals, feature similarity in diffusion models decreases substantially, so the errors introduced by feature caching grow sharply and significantly harm generation quality. To solve this problem, we propose TaylorSeer, which first shows that features of diffusion models at future timesteps can be predicted from their values at previous timesteps. Based on the observation that features change slowly and continuously across timesteps, TaylorSeer uses finite differences to approximate the higher-order derivatives of features and predicts features at future timesteps with a Taylor series expansion. Extensive experiments demonstrate its significant effectiveness in both image and video synthesis, especially at high acceleration ratios. For instance, it achieves almost lossless 4.99× acceleration on FLUX and 5.00× on HunyuanVideo without additional training. On DiT, it achieves 3.41 lower FID than the previous SOTA at 4.53× acceleration. Our code has been released on GitHub: https://github.com/Shenyi-Z/TaylorSeer
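The idea sketched in the abstract, predicting a future feature from cached past features by estimating derivatives with finite differences and evaluating a Taylor expansion, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names, the most-recent-first ordering of the cache, and the use of backward differences are all assumptions made here for illustration.

```python
import numpy as np
from math import factorial

def finite_differences(cached, interval):
    """Approximate derivatives of a feature trajectory at the newest timestep.

    cached   : list of feature arrays at timesteps t, t-N, t-2N, ...
               (most recent first) -- an assumed cache layout.
    interval : the caching interval N between stored timesteps.
    Returns [F(t), F'(t), F''(t), ...] via repeated backward differences.
    """
    diffs = [np.asarray(f, dtype=float) for f in cached]
    derivs = [diffs[0]]
    for _ in range(1, len(cached)):
        # Each pass turns i-th order differences into (i+1)-th order ones.
        diffs = [(diffs[i] - diffs[i + 1]) / interval
                 for i in range(len(diffs) - 1)]
        derivs.append(diffs[0])
    return derivs

def taylor_forecast(cached, interval, dt, order=1):
    """Predict the feature dt ahead of the newest cached timestep
    using a truncated Taylor series: sum_i F^(i)(t) * dt^i / i!."""
    derivs = finite_differences(cached[: order + 1], interval)
    pred = np.zeros_like(derivs[0])
    for i, d in enumerate(derivs):
        pred += d * dt**i / factorial(i)
    return pred
```

For a feature that drifts linearly across timesteps, a first-order forecast is exact; higher orders capture curvature at the cost of keeping more cached entries.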

Jiacheng Liu, Chang Zou, Yuanhuiyi Lyu, Junjie Chen, Linfeng Zhang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Generation | ImageNet 512x512 (val) | FID-50K | 3.51 | 219 |
| Text-to-Image Generation | MS-COCO (val) | FID | 10.08 | 202 |
| Class-conditional Image Generation | ImageNet | FID | 2.55 | 158 |
| Text-to-Image Generation | MJHQ-30K | Overall FID | 24.36 | 153 |
| Class-conditional Image Generation | ImageNet (val) | FID | 3.56 | 69 |
| Text-to-Image Generation | PartiPrompts | ImageReward | 0.9813 | 67 |
| Text-to-Image Generation | MS-COCO (30K) | FID (30K) | 29.66 | 62 |
| Text-to-Image Generation | COCO | FID | 34.74 | 61 |
| Text-to-Image Generation | FLUX.1 (dev) | Image Reward | 0.9989 | 56 |
| Text-to-Image Generation | DrawBench | Latency (s) | 6.48 | 48 |

Showing 10 of 63 rows.
