
FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation

About

Diffusion Transformers (DiT) are powerful generative models but remain computationally intensive due to their iterative structure and deep transformer stacks. To alleviate this inefficiency, we propose FastCache, a hidden-state-level caching and compression framework that accelerates DiT inference by exploiting redundancy within the model's internal representations. FastCache introduces a dual strategy: (1) a spatial-aware token selection mechanism that adaptively filters redundant tokens based on hidden-state saliency, and (2) a transformer-level cache that reuses latent activations across timesteps when changes fall below a predefined threshold. These modules work jointly to reduce unnecessary computation while preserving generation fidelity through learnable linear approximation. Theoretical analysis shows that FastCache maintains bounded approximation error under a hypothesis-testing-based decision rule. Empirical evaluations across multiple DiT variants demonstrate substantial reductions in latency and memory usage, achieving the best generation quality among existing cache methods, as measured by FID and t-FID. To further improve the speedup of FastCache, we also introduce a token merging module that merges redundant tokens based on k-NN density. Code is available at https://github.com/NoakLiu/FastCache-xDiT.
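The transformer-level caching rule described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name `fastcache_step`, the threshold `tau`, and the linear parameters `W` and `b` are all assumed names chosen for clarity. The idea is to measure the relative change between the current hidden state and the cached one from the previous timestep; if the change is below the threshold, the full transformer block is skipped and replaced by a learnable linear approximation.

```python
import numpy as np

def fastcache_step(h_t, h_prev, block_fn, W, b, tau=0.05):
    """Sketch of a FastCache-style caching decision (illustrative only).

    h_t      : current hidden state (vector)
    h_prev   : cached hidden state from the previous timestep
    block_fn : the full transformer block computation
    W, b     : learnable linear approximation parameters (assumed form W @ h + b)
    tau      : relative-change threshold for reusing the cache
    """
    # Relative change between timesteps; small epsilon avoids division by zero.
    delta = np.linalg.norm(h_t - h_prev) / (np.linalg.norm(h_prev) + 1e-8)
    if delta < tau:
        # Cache hit: cheap learnable linear approximation replaces the block.
        return h_t @ W + b, True
    # Cache miss: run the full transformer block.
    return block_fn(h_t), False
```

In the paper the skip decision is framed as a hypothesis test with a bounded approximation error; the fixed threshold here is a simplification of that rule.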

Dong Liu, Yanxuan Yu, Jiayi Zhang, Yifan Li, Ben Lengerich, Ying Nian Wu • 2025

Related benchmarks

Task                      Dataset                     Result         Rank
Text-to-Image Generation  MS-COCO                     FID 7.7        131
Video Generation          Image and Video Generation  FID 4.46       20
Image Generation          ImageNet-256 (test)         FID 4.46       11
Image Generation          FLUX.1 Schnell              Time (s) 0.94  5
Video Generation          DiT-XL 2                    FID 4.46       5
Text-to-Image Generation  DrawBench                   FID 5.74       4
