KiToke: Kernel-based Interval-aware Token Compression for Video Large Language Models
About
Video Large Language Models (Video LLMs) achieve strong performance on video understanding tasks but suffer from high inference costs due to the large number of visual tokens. We propose KiToke, a training-free, query-agnostic token compression approach that reduces spatiotemporal redundancy while preserving critical visual information. Our method estimates token diversity globally using a kernel-based redundancy measure, enabling content-adaptive selection that remains effective under extreme token budgets, and further introduces a lightweight temporal interval construction with interval-aware token merging to maintain temporal coherence. Unlike prior methods that rely on local or segment-level heuristics, KiToke explicitly captures global redundancy across an entire video, leading to more efficient token utilization. Extensive experiments on multiple video understanding benchmarks and Video LLM backbones demonstrate that KiToke consistently outperforms existing training-free compression methods, with particularly large gains at aggressive retention ratios down to 1%.
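The paper's exact kernel, selection rule, and interval construction are not reproduced here; the sketch below is only a minimal illustration of the general idea under assumed choices: an RBF kernel over patch-token features, a redundancy score given by each token's mean similarity to all others, greedy retention of the least redundant tokens under a fixed budget, and averaging of the dropped tokens inside fixed-length temporal intervals. All function names, the `interval_len` parameter, and the merging rule are illustrative assumptions, not KiToke's implementation.

```python
import numpy as np

def rbf_kernel(tokens: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """Pairwise RBF kernel over token features of shape (N, D). Assumed kernel choice."""
    sq_norms = np.sum(tokens ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * tokens @ tokens.T
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

def select_diverse_tokens(tokens: np.ndarray, keep_ratio: float = 0.01) -> np.ndarray:
    """Keep the tokens whose mean kernel similarity to all other tokens is lowest,
    i.e. the least globally redundant ones (a simple stand-in for the paper's
    kernel-based, content-adaptive selection)."""
    n_keep = max(1, int(round(len(tokens) * keep_ratio)))
    K = rbf_kernel(tokens)
    redundancy = K.mean(axis=1)            # high value = similar to many other tokens
    keep_idx = np.argsort(redundancy)[:n_keep]
    return np.sort(keep_idx)

def merge_within_intervals(tokens: np.ndarray, frame_ids: np.ndarray,
                           keep_idx: np.ndarray, interval_len: int = 8) -> np.ndarray:
    """Average the discarded tokens of each fixed-length temporal interval into one
    extra token, so every interval keeps some representation (an assumed, simplified
    version of interval-aware merging; the paper constructs intervals adaptively)."""
    merged = []
    dropped = np.setdiff1d(np.arange(len(tokens)), keep_idx)
    for start in range(0, int(frame_ids.max()) + 1, interval_len):
        in_interval = (frame_ids >= start) & (frame_ids < start + interval_len)
        kept_here = keep_idx[in_interval[keep_idx]]
        merged.extend(tokens[kept_here])
        dropped_here = dropped[in_interval[dropped]]
        if len(dropped_here) > 0:
            merged.append(tokens[dropped_here].mean(axis=0))
    return np.stack(merged)
```

A toy run with random features (16 frames of 196 patch tokens each, 1024-dim, roughly 1% retention) shows the end-to-end shape change:

```python
rng = np.random.default_rng(0)
feats = rng.standard_normal((16 * 196, 1024)).astype(np.float32)
frames = np.repeat(np.arange(16), 196)
kept = select_diverse_tokens(feats, keep_ratio=0.01)
compressed = merge_within_intervals(feats, frames, kept, interval_len=8)
print(compressed.shape)   # ~(n_kept + n_intervals, 1024), e.g. (33, 1024)
```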
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Understanding | MVBench | -- | -- | 425 |
| Video Understanding | VideoMME | Score (Long) | 56 | 248 |
| Long Video Understanding | LongVideoBench | Score | 61.5 | 248 |
| Video Understanding | MLVU | Score | 66.5 | 221 |
| Long Video Understanding | MLVU | Score | 69.5 | 154 |
| Video Understanding | LongVideoBench | Score | 57.4 | 92 |
| Long Video Understanding | LongVideoBench | Score | 58.2 | 89 |
| Video Understanding | Video Benchmarks Aggregate | Average Score | 59.4 | 30 |
| Video Understanding | Aggregated Average Score | Average Score | 61.4 | 26 |