
Learning Adaptive and Temporally Causal Video Tokenization in a 1D Latent Space

About

We propose AdapTok, an adaptive, temporally causal video tokenizer that flexibly allocates tokens to different frames based on video content. AdapTok is equipped with a block-wise masking strategy that randomly drops the tail tokens of each block during training, and a block-causal scorer that predicts the reconstruction quality of video frames under different token counts. During inference, an adaptive token allocation strategy based on integer linear programming adjusts token usage according to the predicted scores. This design allows for sample-wise, content-aware, and temporally dynamic token allocation under a controllable overall budget. Extensive experiments on video reconstruction and generation on UCF-101 and Kinetics-600 demonstrate the effectiveness of our approach. Without additional image data, AdapTok consistently improves reconstruction quality and generation performance under different token budgets, enabling more scalable and token-efficient generative video modeling.
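The two training/inference mechanics described above can be sketched in a few lines. Below is a minimal, hypothetical illustration: `mask_tail` drops the tail tokens of a block at random (the block-wise masking idea), and `allocate_tokens` picks a per-block token count that maximizes total predicted quality under a global budget. The paper formulates allocation as an integer linear program; this sketch solves the same budgeted-selection problem with dynamic programming for self-containedness. All names, the `scores` layout, and the quality values are assumptions, not the paper's API.

```python
import random

def mask_tail(block_tokens, rng):
    """Block-wise masking (training-time sketch): keep a random prefix
    of a block's tokens and drop the tail. `block_tokens` is a
    hypothetical list of token ids for one block."""
    keep = rng.randint(1, len(block_tokens))  # keep at least one token
    return block_tokens[:keep]

def allocate_tokens(scores, budget):
    """Budgeted token allocation (inference-time sketch).

    scores[b][k-1] = predicted reconstruction quality of block b when it
    keeps k tokens (1..K). Chooses one k per block, sum(k) <= budget,
    maximizing total predicted quality. Solved here by dynamic
    programming as a stand-in for the paper's ILP formulation.
    """
    B, K = len(scores), len(scores[0])
    NEG = float("-inf")
    dp = [NEG] * (budget + 1)   # dp[t] = best total score using t tokens
    dp[0] = 0.0
    choice = [[-1] * (budget + 1) for _ in range(B)]
    for b in range(B):
        new = [NEG] * (budget + 1)
        for t in range(budget + 1):
            if dp[t] == NEG:
                continue
            for k in range(1, K + 1):  # each block keeps >= 1 token
                if t + k > budget:
                    break
                cand = dp[t] + scores[b][k - 1]
                if cand > new[t + k]:
                    new[t + k] = cand
                    choice[b][t + k] = k
        dp = new
    # Backtrack the best allocation from the highest-scoring budget use.
    t = max(range(budget + 1), key=lambda i: dp[i])
    alloc = [0] * B
    for b in range(B - 1, -1, -1):
        alloc[b] = choice[b][t]
        t -= alloc[b]
    return alloc

# Example: two blocks with diminishing returns; budget of 4 tokens.
# Spending 2 tokens on each block beats front-loading either one.
print(allocate_tokens([[1.0, 1.5, 1.6], [2.0, 3.5, 3.6]], budget=4))
```

The DP is exact here because each block's choice is independent given the budget; the ILP view becomes necessary when extra coupling constraints (e.g. per-sample or per-time-window budgets) are added.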

Yan Li, Changyao Tian, Renqiu Xia, Ning Liao, Weiwei Guo, Junchi Yan, Hongsheng Li, Jifeng Dai, Hao Li, Xue Yang • 2025

Related benchmarks

Task                  Dataset              Result    Rank
Video Generation      UCF-101 (test)       --        105
Video Reconstruction  UCF-101              rFVD 36   28
Video Reconstruction  UCF-101 (test)       rFVD 28   17
Video Generation      Kinetics-600 (test)  gFVD 11   7
