
TAP: A Token-Adaptive Predictor Framework for Training-Free Diffusion Acceleration

About

Diffusion models achieve strong generative performance but remain slow at inference due to the need for repeated full-model denoising passes. We present Token-Adaptive Predictor (TAP), a training-free, probe-driven framework that adaptively selects a predictor for each token at every sampling step. TAP uses a single full evaluation of the model's first layer as a low-cost probe to compute proxy losses for a compact family of candidate predictors (instantiated primarily with Taylor expansions of varying order and horizon), then assigns each token the predictor with the smallest proxy error. This per-token "probe-then-select" strategy exploits heterogeneous temporal dynamics, requires no additional training, and is compatible with various predictor designs. TAP incurs negligible overhead while enabling large speedups with little or no perceptual quality loss. Extensive experiments across multiple diffusion architectures and generation tasks show that TAP substantially improves the accuracy-efficiency frontier compared to fixed global predictors and caching-only baselines.
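The per-token "probe-then-select" step can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the finite-difference Taylor extrapolation, and the mean-absolute proxy loss are all assumptions made for illustration; the paper's predictors and proxy losses may differ.

```python
from math import comb
import numpy as np

def taylor_predict(history, order):
    """Extrapolate the next feature from the last (order+1) cached steps.

    Forward extrapolation with finite-difference derivatives (an assumed
    instantiation of a Taylor predictor): order 0 reuses the last feature,
    order 1 is linear extrapolation, and so on.
    history: list of arrays, each of shape (tokens, dim), oldest first.
    """
    return sum((-1) ** j * comb(order + 1, j + 1) * history[-1 - j]
               for j in range(order + 1))

def probe_then_select(probe_history, probe_now, full_history):
    """Assign each token the predictor with the smallest proxy error.

    probe_history: cached first-layer features from previous steps.
    probe_now: actual first-layer output at the current step (the cheap probe).
    full_history: cached full-model features from previous steps.
    Returns (predicted full features, per-token chosen predictor order).
    """
    orders = range(len(probe_history))  # candidate Taylor orders (assumed 0..K-1)
    # Proxy loss: per-token error of each candidate predictor on the probe.
    losses = np.stack([
        np.abs(taylor_predict(probe_history, k) - probe_now).mean(axis=-1)
        for k in orders
    ])                                   # shape: (num_orders, tokens)
    best = losses.argmin(axis=0)         # per-token winning predictor order
    # Apply each token's winning predictor to the cached full-model features.
    preds = np.stack([taylor_predict(full_history, k) for k in orders])
    return np.take_along_axis(preds, best[None, :, None], axis=0)[0], best
```

Tokens with static features end up with the cheap order-0 (cache-reuse) predictor, while tokens whose features drift between steps get a higher-order extrapolation, which is how heterogeneous temporal dynamics are exploited without any training.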

Haowei Zhu, Tingxuan Huang, Xing Wang, Tianyu Zhao, Jiexi Wang, Weifeng Chen, Xurui Peng, Fangmin Chen, Junhai Yong, Bin Wang • 2026

Related benchmarks

Task                     | Dataset                             | Metric           | Result | Rank
Text-to-Image Generation | Qwen-Image                          | Latency (s)      | 34.21  | 25
Text-to-Image Generation | DrawBench v1.0 (test)               | Latency (s)      | 2.55   | 22
Video Generation         | VBench HunyuanVideo                 | VBench Score (%) | 65.46  | 8
Text-to-Image Generation | Qwen-Image-Lightning Evaluation Set | Latency (s)      | 4.8    | 7
Text-to-Image Generation | DrawBench                           | FID              | 82.98  | 2
