
Balancing Saliency and Coverage: Semantic Prominence-Aware Budgeting for Visual Token Compression in VLMs

About

Large Vision-Language Models (VLMs) achieve strong multimodal understanding capabilities by leveraging high-resolution visual inputs, but the resulting large number of visual tokens creates a major computational bottleneck. Recent work mitigates this issue through visual token compression, typically compressing tokens based on saliency, diversity, or a fixed combination of both. We observe that the distribution of semantic prominence varies substantially across samples, leading to different optimal trade-offs between local saliency preservation and global coverage. This observation suggests that applying a static compression strategy across all samples can be suboptimal. Motivated by this insight, we propose PromPrune, a sample-adaptive visual token selection framework composed of semantic prominence-aware budget allocation and a two-stage selection pipeline. Our method adaptively balances local saliency preservation and global coverage according to the semantic prominence distribution of each sample. By allocating token budgets between locally salient regions and globally diverse regions, our method maintains strong performance even under high compression ratios. On LLaVA-NeXT-7B, our approach reduces FLOPs by 88% and prefill latency by 22% while preserving 97.5% of the original accuracy.
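The budgeting idea described above can be sketched in code. The snippet below is our own illustrative reconstruction, not the paper's implementation: it uses the normalized entropy of the saliency distribution as a stand-in "semantic prominence" measure (low entropy means a few tokens dominate, so more budget goes to locally salient tokens), then fills the remaining budget with farthest-point sampling for global coverage. The function name, the entropy heuristic, and the distance-based coverage stage are all assumptions for illustration.

```python
import numpy as np

def select_tokens(tokens, saliency, budget):
    """Illustrative prominence-aware token selection (not the paper's code).

    tokens:   (N, D) array of visual token features
    saliency: (N,) non-negative saliency scores
    budget:   number of tokens to keep after compression
    """
    n = len(saliency)
    p = saliency / saliency.sum()
    # Normalized entropy of the saliency distribution in [0, 1]:
    # low entropy -> a few prominent tokens dominate -> favor saliency;
    # high entropy -> saliency is diffuse -> favor coverage.
    entropy = -(p * np.log(p + 1e-12)).sum() / np.log(n)
    salient_budget = min(max(int(round(budget * (1.0 - entropy))), 0), budget)

    # Stage 1: keep the top-k most salient tokens.
    order = np.argsort(saliency)[::-1]
    keep = list(order[:salient_budget])

    # Stage 2: spend the rest of the budget on coverage via
    # farthest-point sampling over the remaining tokens.
    chosen = set(keep)
    remaining = [i for i in range(n) if i not in chosen]
    if not keep and remaining:
        keep.append(remaining.pop(0))  # seed coverage if stage 1 kept nothing
    while len(keep) < budget and remaining:
        dists = [min(np.linalg.norm(tokens[i] - tokens[j]) for j in keep)
                 for i in remaining]
        keep.append(remaining.pop(int(np.argmax(dists))))
    return sorted(keep)
```

With a peaked saliency distribution the split leans toward stage 1; with near-uniform saliency almost the whole budget goes to the coverage stage, which is the sample-adaptive behavior the abstract describes.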

Jaehoon Lee, Mingi Jung, Soohyuk Jang, Seungryong Yoo, Dahuin Jung, Sungroh Yoon • 2026

Related benchmarks

Task                              Dataset           Metric         Result    Rank
Visual Question Answering         VizWiz            Accuracy       53.5      1525
Object Hallucination Evaluation   POPE              Accuracy       86.7      1455
Visual Question Answering         TextVQA           Accuracy       56.2      1285
Visual Question Answering         GQA               Accuracy       58.6      1249
Multimodal Understanding          MMBench           Accuracy       63.1      637
Multimodal Understanding          MM-Vet            MM-Vet Score   33.1      531
Science Question Answering        ScienceQA (SQA)   Accuracy       68.8      273
Multimodal Understanding          MME               MME Score      1.42e+3   207
Visual Question Answering         VQA v2            Accuracy       76.4      101
Multimodal Understanding          POPE              POPE Score     0.818     90

Showing 10 of 15 rows
