
FlashVID: Efficient Video Large Language Models via Training-free Tree-based Spatiotemporal Token Merging

About

Although Video Large Language Models (VLLMs) have shown remarkable capabilities in video understanding, they must process large numbers of visual tokens, which incurs significant computational inefficiency. Existing VLLM acceleration frameworks usually compress spatial and temporal redundancy independently, overlooking spatiotemporal relationships and thereby yielding suboptimal compression. Due to the dynamic nature of video, highly correlated visual features are likely to change in spatial position, scale, orientation, and other attributes over time. Building on this insight, we introduce FlashVID, a training-free inference acceleration framework for VLLMs. Specifically, FlashVID uses Attention and Diversity-based Token Selection (ADTS) to select the most representative tokens as a basic video representation, then applies Tree-based Spatiotemporal Token Merging (TSTM) for fine-grained elimination of spatiotemporal redundancy. Extensive experiments on three representative VLLMs across five video understanding benchmarks demonstrate the effectiveness and generality of our method. Notably, while retaining only 10% of visual tokens, FlashVID preserves 99.1% of the performance of LLaVA-OneVision. Consequently, FlashVID can serve as a training-free, plug-and-play module for extending long-video inputs, enabling a 10x increase in video frame input to Qwen2.5-VL and yielding a relative improvement of 8.6% within the same computational budget. Code is available at https://github.com/Fanziyang-v/FlashVID.
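The abstract does not include pseudocode, so the following is only a minimal sketch of what an attention- and diversity-based token selection step *could* look like: a greedy, MMR-style trade-off that keeps tokens with high attention scores while penalizing similarity to tokens already kept. The function name `adts_sketch`, the `lam` weight, and the input layout (`tokens` as an [N, D] feature matrix, `attn` as [N] attention scores) are all hypothetical illustrations, not the paper's actual ADTS implementation.

```python
import numpy as np

def adts_sketch(tokens, attn, k, lam=0.5):
    """Greedy attention/diversity token selection (illustrative only).

    tokens: [N, D] visual token features
    attn:   [N] attention scores (higher = more important)
    k:      number of tokens to keep
    lam:    trade-off between attention (lam) and diversity (1 - lam)
    """
    # L2-normalize features so dot products are cosine similarities.
    feats = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    selected = [int(np.argmax(attn))]  # seed with the most-attended token
    while len(selected) < k:
        sim = feats @ feats[selected].T      # [N, |selected|] cosine similarity
        redundancy = sim.max(axis=1)         # closeness to nearest kept token
        score = lam * attn - (1 - lam) * redundancy
        score[selected] = -np.inf            # never re-pick a kept token
        selected.append(int(np.argmax(score)))
    return np.array(selected)
```

In this sketch, a token scores well when it is both highly attended and dissimilar from every token already kept, which captures the "representative yet diverse" intent the abstract ascribes to ADTS; the actual method in the repository may differ substantially.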

Ziyang Fan, Keyu Chen, Ruilong Xing, Yulin Li, Li Jiang, Zhuotao Tian • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Video Understanding | MVBench | -- | -- | 247 |
| Video Understanding | VideoMME | Score (Short) | 74.2 | 127 |
| Long Video Understanding | LongVideoBench | Score | 59.1 | 110 |
| Video Understanding | LongVideoBench | LongVideoBench Score | 58.1 | 79 |
| Video Understanding | EgoSchema | -- | -- | 49 |
| Video Understanding | VideoMME, EgoSchema, LongVideoBench, MVBench | Avg. Score | 60.5 | 48 |
| Egocentric Video Understanding | EgoSchema | Subset Accuracy | 63.4 | 39 |
| Multi-modal Video Understanding | MVBench | Score | 67.1 | 39 |
| Video Understanding | LLaVA-Video Benchmark Suite Aggregate | Score | 59.6 | 9 |
