KiToke: Kernel-based Interval-aware Token Compression for Video Large Language Models

About

Video Large Language Models (Video LLMs) achieve strong performance on video understanding tasks but suffer from high inference costs due to the large number of visual tokens. We propose KiToke, a training-free, query-agnostic token compression approach that reduces spatiotemporal redundancy while preserving critical visual information. Our method estimates token diversity globally using a kernel-based redundancy measure, enabling content-adaptive selection that remains effective under extreme token budgets. It further introduces a lightweight temporal interval construction with interval-aware token merging to maintain temporal coherence. Unlike prior methods that rely on local or segment-level heuristics, KiToke explicitly captures global redundancy across an entire video, leading to more efficient token utilization. Extensive experiments on multiple video understanding benchmarks and Video LLM backbones demonstrate that KiToke consistently outperforms existing training-free compression methods, with particularly large gains at aggressive retention ratios down to 1%.

Haifeng Huang, Yang Li • 2026
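The page does not include an implementation, but the abstract's central idea, kernel-based, query-agnostic selection of a diverse subset of visual tokens under a fixed budget, can be sketched roughly as follows. This is a minimal illustration, not KiToke's actual procedure: the function name `kernel_redundancy_select`, the RBF kernel, and the greedy max-similarity redundancy update are all assumptions made for the example.

```python
import torch


def kernel_redundancy_select(tokens: torch.Tensor, keep_ratio: float = 0.01,
                             bandwidth: float = 1.0) -> torch.Tensor:
    """Greedy diversity-driven token selection (illustrative sketch).

    tokens: (N, D) visual token features pooled over the whole video.
    Returns indices of the kept tokens; budget = keep_ratio * N.
    """
    n = tokens.shape[0]
    budget = max(1, int(n * keep_ratio))
    feats = torch.nn.functional.normalize(tokens, dim=-1)

    # Pairwise RBF kernel over all tokens: high value = high similarity/redundancy.
    sq_dists = torch.cdist(feats, feats) ** 2
    K = torch.exp(-sq_dists / (2 * bandwidth ** 2))

    # Start from the token most similar to everything (a rough "cluster centre").
    selected = [K.sum(dim=1).argmax().item()]
    # Redundancy of each candidate = max kernel similarity to the selected set.
    redundancy = K[:, selected[0]].clone()

    for _ in range(budget - 1):
        redundancy[selected] = float("inf")   # never re-pick a selected token
        nxt = redundancy.argmin().item()      # least redundant (most diverse) candidate
        selected.append(nxt)
        redundancy = torch.maximum(redundancy, K[:, nxt])

    return torch.tensor(sorted(selected))


# Usage: e.g. 64 frames x 196 patch tokens with D = 1024, keeping ~1% of them.
tokens = torch.randn(64 * 196, 1024)
keep = kernel_redundancy_select(tokens, keep_ratio=0.01)
print(keep.shape)
```

In a Video LLM pipeline, the returned indices would subsample the visual token sequence before it is passed to the language model. The temporal interval construction and interval-aware merging described in the abstract are not shown here.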

Related benchmarks

Task                      Dataset                     Metric                Result  Rank
Video Understanding       MVBench                     --                    --      425
Video Understanding       VideoMME                    Score (Long)          56      248
Long Video Understanding  LongVideoBench              Score                 61.5    248
Video Understanding       MLVU                        Score                 66.5    221
Long Video Understanding  MLVU                        Score                 69.5    154
Video Understanding       LongVideoBench              LongVideoBench Score  57.4    92
Long Video Understanding  LongVideo-Bench             Score                 58.2    89
Video Understanding       Video Benchmarks Aggregate  Average Score         59.4    30
Video Understanding       Aggregated Average Score    Average Score         61.4    26
