
PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance

About

In the past year, video-based large language models (Video LLMs) have achieved impressive progress, particularly in their ability to process long videos through extremely extended context lengths. However, this comes at the cost of significantly increased computational overhead due to the massive number of visual tokens, making efficiency a major bottleneck. In this paper, we identify the root of this inefficiency as the high redundancy in video content. To address this, we propose a novel pooling strategy that enables aggressive token compression while retaining instruction-relevant visual semantics. Our model, Prompt-guided Pooling LLaVA (PPLLaVA), introduces three key components: a CLIP-based visual-prompt alignment module that identifies regions of interest based on user instructions, a prompt-guided pooling mechanism that adaptively compresses the visual sequence using convolution-style pooling, and a clip context extension module tailored for processing long and complex prompts in visual dialogues. With up to 18x token reduction, PPLLaVA maintains strong performance across tasks, achieving state-of-the-art results on diverse video understanding benchmarks, spanning image-to-video tasks such as captioning and QA as well as long-form video reasoning, while significantly improving inference throughput. Code is available at https://github.com/farewellthree/PPLLaVA.
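The core idea of prompt-guided pooling can be illustrated in a few lines: weight each visual token by its similarity to the prompt embedding, then compress the sequence with a sliding-window (convolution-style) weighted average. The sketch below is illustrative only, not the paper's implementation; the function name, window size, and use of cosine similarity with a per-window softmax are assumptions for demonstration.

```python
import numpy as np

def prompt_guided_pool(visual_tokens, prompt_emb, window=9, stride=9):
    """Illustrative prompt-guided pooling: tokens more relevant to the
    prompt contribute more to each pooled output token."""
    # Cosine similarity between each visual token and the prompt embedding.
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=-1, keepdims=True)
    p = prompt_emb / np.linalg.norm(prompt_emb)
    relevance = v @ p                                  # shape (N,)

    pooled = []
    for start in range(0, len(visual_tokens) - window + 1, stride):
        sl = slice(start, start + window)
        # Softmax over the window turns relevance scores into pooling weights.
        w = np.exp(relevance[sl] - relevance[sl].max())
        w /= w.sum()
        pooled.append(w @ visual_tokens[sl])           # weighted-average token
    return np.stack(pooled)                            # shape (N // stride, D)

tokens = np.random.randn(144, 32)                      # 144 visual tokens, dim 32
prompt = np.random.randn(32)
out = prompt_guided_pool(tokens, prompt)
print(out.shape)                                       # (16, 32): 9x fewer tokens
```

With a non-overlapping window (stride equal to window size) the compression ratio equals the window size; the paper's reported reduction of up to 18x would correspond to a larger window or 2-D pooling over the spatial token grid.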

Shangkun Sun, Ruyang Liu, Haoran Tang, Yixiao Ge, Haibo Lu, Jiankun Yang, Chen Li • 2024

Related benchmarks

Task                             Dataset            Result          Rank
Object Hallucination Evaluation  POPE               --              1455
Multimodal Evaluation            MME                --              658
Video Question Answering         MSRVTT-QA          Accuracy 64.3   491
Mathematical Reasoning           MathVista          Score 34.6      385
Video Question Answering         ActivityNet-QA     Accuracy 60.7   376
Video Question Answering         MSVD-QA            Accuracy 77.1   360
Multimodal Model Evaluation      MMBench Chinese    Accuracy 62     154
Multimodal Understanding         MMMU (val)         --              152
Multimodal Benchmarking          MMBench English    Accuracy 68.9   125
Multimodal Understanding         SEED-Bench Image   Accuracy 70.7   121

Showing 10 of 19 rows
