
PruneVid: Visual Token Pruning for Efficient Video Large Language Models

About

In this paper, we introduce PruneVid, a visual token pruning method designed to enhance the efficiency of multi-modal video understanding. Large Language Models (LLMs) have shown promising performance in video tasks due to their extended capabilities in comprehending visual modalities. However, the substantial redundancy in video data presents significant computational challenges for LLMs. To address this issue, we introduce a training-free method that 1) minimizes video redundancy by merging spatial-temporal tokens, and 2) leverages LLMs' reasoning capabilities to selectively retain the visual features relevant to question tokens while pruning the rest, enhancing model efficiency. We validate our method across multiple video benchmarks, demonstrating that PruneVid can prune over 80% of visual tokens while maintaining competitive performance when combined with different model networks. This highlights its superior effectiveness and efficiency compared to existing pruning methods. Code: https://github.com/Visual-AI/PruneVid.
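The two steps described above — merging redundant spatial-temporal tokens, then keeping only the visual tokens the question actually attends to — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the cosine-similarity merging rule, the 0.95 threshold, and the 0.2 keep ratio are all assumptions for illustration (the keep ratio loosely mirrors the reported "prune over 80%" figure).

```python
import numpy as np

def merge_static_tokens(frames, sim_threshold=0.95):
    """Temporal merging sketch: at each spatial position, average tokens of
    consecutive frames whose cosine similarity exceeds the threshold
    (a simplified stand-in for PruneVid's spatial-temporal merging).

    frames: (T, P, D) -- T frames, P spatial tokens each, D-dim features.
    Returns an (M, D) array of merged tokens, M <= T * P.
    """
    T, P, _ = frames.shape
    merged = []
    for p in range(P):
        run = [frames[0, p]]
        for t in range(1, T):
            prev, cur = run[-1], frames[t, p]
            cos = prev @ cur / (np.linalg.norm(prev) * np.linalg.norm(cur) + 1e-8)
            if cos > sim_threshold:
                run[-1] = (prev + cur) / 2   # merge near-identical (static) token
            else:
                run.append(cur)              # keep dynamic token as-is
        merged.extend(run)
    return np.stack(merged)

def prune_by_question_attention(visual_tokens, attn_q2v, keep_ratio=0.2):
    """Keep only the visual tokens most attended to by question tokens.

    attn_q2v: (Q, N) attention weights from Q question tokens to N visual
    tokens (assumed to be extracted from an LLM attention layer).
    """
    scores = attn_q2v.sum(axis=0)                    # relevance per visual token
    k = max(1, int(round(keep_ratio * scores.size)))
    keep = np.sort(np.argsort(scores)[-k:])          # top-k, original order kept
    return visual_tokens[keep], keep

# Toy demo: 8 frames x 16 tokens x 32-dim features, mostly static video.
rng = np.random.default_rng(0)
frames = rng.normal(size=(8, 16, 32))
frames[1:] = frames[0]                               # repeat the first frame
tokens = merge_static_tokens(frames)                 # 128 tokens collapse to 16
attn = rng.random((5, len(tokens)))                  # 5 question tokens
kept, idx = prune_by_question_attention(tokens, attn)
print(tokens.shape, kept.shape)                      # (16, 32) (3, 32)
```

In this toy case the static video collapses from 128 tokens to 16 after merging, and attention-based pruning then retains only 3 of those, illustrating how the two stages compound to remove most of the visual tokens before the LLM processes them.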

Xiaohu Huang, Hao Zhou, Kai Han • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Understanding | VideoMME | Overall Score | 58 | 192 |
| Video Understanding | VideoMME | Score (Short) | 67.3 | 127 |
| Long Video Understanding | LongVideoBench | Score | 55.4 | 110 |
| Video Understanding | LongVideoBench, MLVU, and VideoMME Aggregate | Average Score | 55.1 | 75 |
| Video Understanding | MLVU 3-120min (test) | Accuracy | 45.7 | 49 |
| Video Understanding | LongVideoBench 1-60min | Accuracy | 56.1 | 49 |
| Video Understanding | MLVU 3-120min (dev) | Accuracy | 61.4 | 49 |
| Video Understanding | VideoMME, EgoSchema, LongVideoBench, MVBench | Avg. Score | 57 | 48 |
| Egocentric Video Understanding | EgoSchema | Subset Accuracy | 63.2 | 39 |
| Multi-modal Video Understanding | MVBench | Score | 56.8 | 39 |
