
Rethinking Token-wise Feature Caching: Accelerating Diffusion Transformers with Dual Feature Caching

About

Diffusion Transformers (DiT) have become the dominant method for image and video generation, yet they still incur substantial computational costs. As an effective approach to DiT acceleration, feature caching methods cache the features of DiT at previous timesteps and reuse them in subsequent timesteps, allowing the computation of those timesteps to be skipped. Among them, token-wise feature caching applies different caching ratios to different tokens in DiT, aiming to skip the computation of unimportant tokens while still computing the important ones. In this paper, we carefully examine the effectiveness of token-wise feature caching with the following two questions: (1) Is it really necessary to compute the so-called "important" tokens at every step? (2) Are the so-called important tokens really important? Surprisingly, this paper gives counter-intuitive answers, demonstrating that consistently computing the selected "important tokens" at all steps is not necessary, and that the selection of these "important tokens" is often ineffective, sometimes even performing worse than random selection. Based on these observations, this paper introduces dual feature caching, referred to as DuCa, which applies an aggressive caching strategy and a conservative caching strategy alternately and selects the tokens to compute at random. Extensive experimental results on DiT, PixArt, FLUX, and OpenSora demonstrate the effectiveness of our method, showing significant improvements over previous token-wise feature caching.

Chang Zou, Evelyn Zhang, Runlin Guo, Haohang Xu, Conghui He, Xuming Hu, Linfeng Zhang• 2024
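The abstract describes an alternating dual-caching schedule with random token selection. As a rough illustrative sketch only (the function names, the 3-step caching interval, and the 30% compute ratio here are assumptions, not the authors' implementation), the idea might look like:

```python
import random

def duca_schedule(num_steps, cache_interval=3):
    """Toy schedule: one full-compute step refreshes the cache, then
    conservative (partial-compute) and aggressive (full cache reuse)
    steps alternate within each caching interval."""
    schedule = []
    for t in range(num_steps):
        if t % cache_interval == 0:
            schedule.append("full")          # recompute all tokens, refresh cache
        elif t % 2 == 1:
            schedule.append("conservative")  # recompute only a random token subset
        else:
            schedule.append("aggressive")    # reuse all cached token features
    return schedule

def conservative_step(new_features, cache, compute_ratio=0.3):
    """Recompute a *random* subset of tokens (the paper reports that
    random selection can match or beat importance-based selection);
    reuse cached features for the remaining tokens."""
    n = len(new_features)
    k = max(1, int(n * compute_ratio))
    recompute = set(random.sample(range(n), k))
    return [new_features[i] if i in recompute else cache[i] for i in range(n)]
```

With `cache_interval=3`, every third step pays full cost, while the steps in between either touch a small random token subset or skip computation entirely by reusing the cache.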

Related benchmarks

Task                               | Dataset                     | Metric                          | Result | Rank
Class-conditional Image Generation | ImageNet                    | FID                             | 2.59   | 158
Text-to-Image Generation           | MJHQ-30K                    | Overall FID                     | 11.69  | 153
Class-conditional Image Generation | ImageNet (val)              | FID                             | 6.07   | 69
Text-to-Image Generation           | PartiPrompts                | ImageReward                     | 0.79   | 67
Text-to-Image Generation           | MS-COCO (30K)               | FID (30K)                       | 23.13  | 62
Text-to-Image Generation           | MS COCO 2017                | FID                             | 27.98  | 41
Text-to-Image Generation           | Image Reward (Calibration)  | Image Reward                    | 0.76   | 32
Text-to-Video Generation           | HunyuanVideo                | LPIPS                           | 0.454  | 22
Text-to-Video Generation           | VBench                      | --                              | --     | 10
Video Depth Estimation             | Aether                      | Absolute Relative Error (Abs Rel) | 0.341 | 9

Showing 10 of 15 rows
