
MMTok: Multimodal Coverage Maximization for Efficient Inference of VLMs

About

Vision-Language Models (VLMs) demonstrate impressive performance in understanding visual content with language instructions by converting visual inputs to vision tokens. However, redundancy in vision tokens degrades the inference efficiency of VLMs. While many algorithms have been proposed to reduce the number of vision tokens, most of them apply only unimodal information (i.e., vision/text) for pruning and ignore the inherent multimodal property of vision-language tasks. Moreover, a generic criterion that can be applied to different modalities is lacking. To mitigate this limitation, in this work, we propose to leverage both vision and text tokens to select informative vision tokens with a coverage criterion. We first formulate the subset selection problem as a maximum coverage problem. Then, a subset of vision tokens is optimized to cover the text tokens and the original set of vision tokens simultaneously. The proposed method, MMTok, is extensively evaluated on benchmark datasets with different VLMs. The comparison illustrates that vision and text information are complementary, and combining multimodal information surpasses the unimodal baselines by a clear margin. Moreover, under the maximum coverage criterion on the POPE dataset, our method achieves a 1.87x speedup while maintaining 98.7% of the original performance on LLaVA-NeXT-13B. Finally, with only four vision tokens, 87.7% of the original performance is still preserved on LLaVA-1.5-7B. These results highlight the effectiveness of coverage in token selection. The code is available at https://github.com/Ironieser/mmtok
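The abstract's maximum coverage formulation can be illustrated with a small greedy sketch: the classic greedy algorithm for maximum coverage repeatedly adds the element with the largest marginal gain and enjoys a (1 - 1/e) approximation guarantee. The cosine-similarity coverage score, the `alpha` weight between text and vision coverage, and all names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def greedy_coverage_select(vision_tokens, text_tokens, k, alpha=0.5):
    """Greedily pick k vision tokens that jointly 'cover' both the
    text tokens and the full vision-token set.

    Coverage of a target token is taken here as the best cosine
    similarity to any selected token; the objective is the
    alpha-weighted sum of mean text coverage and mean vision coverage.
    (Illustrative; the paper's exact coverage function may differ.)
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    V = normalize(np.asarray(vision_tokens, dtype=float))  # (n, d)
    T = normalize(np.asarray(text_tokens, dtype=float))    # (m, d)
    sim_vt = V @ T.T   # vision-to-text similarities, shape (n, m)
    sim_vv = V @ V.T   # vision-to-vision similarities, shape (n, n)

    selected = []
    cov_t = np.full(T.shape[0], -np.inf)  # current per-text-token coverage
    cov_v = np.full(V.shape[0], -np.inf)  # current per-vision-token coverage
    for _ in range(k):
        best_gain, best_i = -np.inf, -1
        for i in range(V.shape[0]):
            if i in selected:
                continue
            # Objective value if token i were added to the subset.
            gain = (alpha * np.maximum(cov_t, sim_vt[i]).mean()
                    + (1 - alpha) * np.maximum(cov_v, sim_vv[i]).mean())
            if gain > best_gain:
                best_gain, best_i = gain, i
        selected.append(best_i)
        cov_t = np.maximum(cov_t, sim_vt[best_i])
        cov_v = np.maximum(cov_v, sim_vv[best_i])
    return selected
```

Setting `alpha=1.0` reduces this to a text-only criterion and `alpha=0.0` to a vision-only one, which mirrors the abstract's point that the two unimodal signals are complementary.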

Sixun Dong, Juhua Hu, Mian Zhang, Ming Yin, Yanjie Fu, Qi Qian • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | -- | 1455 |
| Visual Question Answering | VQA v2 | -- | 1362 |
| Visual Question Answering | TextVQA | Accuracy 59.64 | 1285 |
| Text-based Visual Question Answering | TextVQA | Accuracy 70.49 | 807 |
| Multimodal Evaluation | MME | -- | 658 |
| Multimodal Understanding | MMBench | Accuracy 79.3 | 637 |
| Science Question Answering | ScienceQA | Accuracy 81.61 | 502 |
| Multimodal Understanding | SEED-Bench | Accuracy 59.21 | 343 |
| OCR Evaluation | OCRBench | Score 59.6 | 329 |
| Multi-discipline Multimodal Understanding | MMMU | Accuracy 38 | 317 |

(Showing 10 of 30 rows.)
