
Rethinking Token Reduction for Large Vision-Language Models

About

Large Vision-Language Models (LVLMs) excel at visual understanding and reasoning, but the large number of visual tokens they process leads to high inference costs. Although recent token reduction methods mitigate this issue, they mainly target single-turn Visual Question Answering (VQA), leaving the more practical multi-turn VQA (MT-VQA) scenario largely unexplored. MT-VQA introduces additional challenges: subsequent questions are unknown beforehand and may refer to arbitrary image regions, rendering existing reduction strategies ineffective. Specifically, current approaches fall into two categories: prompt-dependent methods, which are biased toward the initial text prompt and discard information useful for subsequent turns; and prompt-agnostic methods, which, though technically applicable to multi-turn settings, rely on heuristic reduction metrics such as attention scores and therefore yield suboptimal performance. In this paper, we propose a learning-based prompt-agnostic method, termed MetaCompress, that overcomes the limitations of heuristic designs. We begin by formulating token reduction as a learnable compression mapping, unifying existing formats such as pruning and merging into a single learning objective. Building on this formulation, we introduce a data-efficient training paradigm capable of learning optimal compression mappings at limited computational cost. Extensive experiments on MT-VQA benchmarks and across multiple LVLM architectures demonstrate that MetaCompress achieves superior efficiency-accuracy trade-offs while maintaining strong generalization across dialogue turns. Our code is available at https://github.com/MArSha1147/MetaCompress.
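The abstract's unification of pruning and merging as a single "compression mapping" can be illustrated concretely: both operations can be written as a matrix C that maps n visual tokens to m < n tokens via X' = C X. The sketch below is a hypothetical illustration of that idea, not MetaCompress's actual (learned) formulation; all variable names are assumptions.

```python
import numpy as np

# Illustration: pruning and merging are both special cases of a linear
# compression mapping C of shape (m, n) applied to n token embeddings X.
n, d, m = 6, 4, 3
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))  # n visual token embeddings of dim d

# Pruning: each row of C is one-hot, so C @ X selects m of the n tokens.
keep = [0, 2, 5]
C_prune = np.zeros((m, n))
C_prune[np.arange(m), keep] = 1.0
assert np.allclose(C_prune @ X, X[keep])

# Merging: each row of C averages a group of tokens (rows sum to 1).
groups = [[0, 1], [2, 3], [4, 5]]
C_merge = np.zeros((m, n))
for i, g in enumerate(groups):
    C_merge[i, g] = 1.0 / len(g)
assert np.allclose(C_merge @ X, np.stack([X[g].mean(axis=0) for g in groups]))
```

A learning-based method can then treat C (or a network producing it) as a trainable object, rather than fixing it by a heuristic such as attention scores.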

Yi Wang, Haofei Zhang, Qihan Huang, Anda Cao, Gongfan Fang, Wei Wang, Xuan Jin, Jie Song, Mingli Song, Xinchao Wang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-turn Visual Question Answering | ConvBench | S1 Score | 15.77 | 54 |
| Multi-turn Visual Question Answering | MT-GQA | Acc1 | 64.86 | 33 |
| Multi-turn Visual Question Answering | MT-VQA v2 | Acc1 | 78.24 | 27 |
| Multi-turn Visual Question Answering | MT-GQA balanced (test-dev) | Acc1 | 60.78 | 27 |
| Multi-turn Visual Question Answering | MT-VQA v2 (val) | Acc1 | 74.62 | 27 |
| Multi-turn Visual Question Answering | MT-GQA | Time To First Token (TTFT) | 97.8 | 11 |
| Multi-turn Video Question Answering | MT-Video-MME | Accuracy 1 (Acc1) | 28.5 | 5 |
