PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance
About
In the past year, video-based large language models (Video LLMs) have achieved impressive progress, particularly in their ability to process long videos through extremely extended context lengths. However, this comes at the cost of significantly increased computational overhead due to the massive number of visual tokens, making efficiency a major bottleneck. In this paper, we identify the root of this inefficiency as the high redundancy in video content. To address this, we propose a novel pooling strategy that enables aggressive token compression while retaining instruction-relevant visual semantics. Our model, Prompt-guided Pooling LLaVA (PPLLaVA), introduces three key components: a CLIP-based visual-prompt alignment module that identifies regions of interest based on user instructions, a prompt-guided pooling mechanism that adaptively compresses the visual sequence using convolution-style pooling, and a CLIP context extension module tailored for processing long and complex prompts in visual dialogues. With up to 18x token reduction, PPLLaVA maintains strong performance across tasks, achieving state-of-the-art results on diverse video understanding benchmarks, from image-to-video tasks such as captioning and QA to long-form video reasoning, while significantly improving inference throughput. Code is available at https://github.com/farewellthree/PPLLaVA.
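The core idea of prompt-guided pooling can be sketched in a few lines: score each visual token by its similarity to the prompt embedding, then compress the sequence window by window with a relevance-weighted average. The sketch below is illustrative only; the function name, the softmax-free weighting, and the window layout are assumptions, not the paper's actual implementation (see the repository linked above for that).

```python
import numpy as np

def prompt_guided_pool(visual_tokens, prompt_emb, out_len):
    """Illustrative sketch of prompt-guided pooling (not the official code).

    visual_tokens: (N, D) array of visual token embeddings
    prompt_emb:    (D,) CLIP text embedding of the user instruction
    out_len:       number of tokens after compression
    Returns a (out_len, D) array of pooled tokens.
    """
    # Cosine similarity between each visual token and the prompt
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=1, keepdims=True)
    p = prompt_emb / np.linalg.norm(prompt_emb)
    weights = np.exp(v @ p)  # unnormalized relevance scores

    # Convolution-style pooling: split the sequence into out_len windows
    # and take a relevance-weighted average inside each window, so
    # instruction-relevant tokens dominate the compressed representation.
    windows = np.array_split(np.arange(len(visual_tokens)), out_len)
    pooled = np.stack([
        (weights[w, None] * visual_tokens[w]).sum(axis=0) / weights[w].sum()
        for w in windows
    ])
    return pooled

# Example: compress 36 tokens to 2 (an 18x reduction, as in the abstract)
tokens = np.random.default_rng(0).normal(size=(36, 8))
prompt = np.random.default_rng(1).normal(size=8)
compressed = prompt_guided_pool(tokens, prompt, out_len=2)
```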
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Object Hallucination Evaluation | POPE | -- | 1455 |
| Multimodal Evaluation | MME | -- | 658 |
| Video Question Answering | MSRVTT-QA | Accuracy: 64.3 | 491 |
| Mathematical Reasoning | MathVista | Score: 34.6 | 385 |
| Video Question Answering | ActivityNet-QA | Accuracy: 60.7 | 376 |
| Video Question Answering | MSVD-QA | Accuracy: 77.1 | 360 |
| Multimodal Model Evaluation | MMBench Chinese | Accuracy: 62.0 | 154 |
| Multimodal Understanding | MMMU (val) | -- | 152 |
| Multimodal Benchmarking | MMBench English | Accuracy: 68.9 | 125 |
| Multimodal Understanding | SEED-Bench Image | Accuracy: 70.7 | 121 |