# Are We Using the Right Benchmark: An Evaluation Framework for Visual Token Compression Methods

## About
Recent efforts to accelerate inference in Multimodal Large Language Models (MLLMs) have largely focused on visual token compression. The effectiveness of these methods is commonly evaluated by measuring the accuracy drop on existing MLLM benchmarks before and after compression. However, these benchmarks were originally designed to assess general perception and reasoning abilities, not the specific challenges posed by visual token compression, resulting in a fundamental task mismatch. In this work, we uncover a counterintuitive yet consistent phenomenon: simple image downsampling outperforms many advanced visual token compression methods across multiple widely used benchmarks. Through a comprehensive empirical study spanning eight popular benchmarks and multiple state-of-the-art compression techniques, we show that (i) current benchmarks contain substantial noise (task-irrelevant samples) when used to evaluate visual token compression, and (ii) downsampling can act as an effective data filter that separates simple from difficult samples with respect to compression sensitivity. Motivated by these findings, we propose VTC-Bench, an evaluation framework that explicitly leverages downsampling as a discriminator to denoise existing benchmarks, enabling a fairer and more meaningful assessment of visual token compression methods.
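The filtering idea above can be sketched in a few lines: treat a naive downsampling baseline as a discriminator, mark samples the model still answers correctly from a downsampled image as "simple" (noise for evaluating compression), and keep the rest as the compression-sensitive subset. This is an illustrative sketch only; the function and variable names are hypothetical and not taken from the paper's released code.

```python
# Hypothetical sketch of benchmark denoising via a downsampling discriminator.
# A sample is "simple" if the MLLM answers it correctly even when the image
# is naively downsampled; such samples carry little signal for comparing
# visual token compression methods and can be filtered out.

def filter_benchmark(sample_ids, correct_on_downsampled):
    """Partition benchmark samples by compression sensitivity.

    sample_ids: list of sample identifiers.
    correct_on_downsampled: dict mapping sample id -> True if the model
        answered correctly on the downsampled image.
    Returns (simple, difficult): simple samples are discarded as noise;
    difficult ones form the denoised evaluation set.
    """
    simple = [s for s in sample_ids if correct_on_downsampled[s]]
    difficult = [s for s in sample_ids if not correct_on_downsampled[s]]
    return simple, difficult


if __name__ == "__main__":
    ids = ["q1", "q2", "q3"]
    ok = {"q1": True, "q2": False, "q3": True}
    simple, difficult = filter_benchmark(ids, ok)
    print(simple, difficult)  # q1/q3 are simple, q2 remains for evaluation
```

In practice the `correct_on_downsampled` flags would come from running the backbone MLLM on downsampled inputs; only the `difficult` subset is then used to score compression methods.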
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | Accuracy | 86.1 | 1455 |
| Visual Question Answering | TextVQA | Accuracy | 59.5 | 1285 |
| Multimodal Evaluation | MME | Score | 62.5 | 658 |
| Visual Question Answering | GQA | Accuracy | 57.4 | 505 |
| Visual Question Answering | ChartQA | Accuracy | 49.3 | 371 |
| OCR Evaluation | OCRBench | Score | 47.9 | 329 |
| Visual Question Answering | AI2D | Accuracy | 60 | 249 |
| Diagram Question Answering | AI2D | Accuracy | 56.7 | 232 |
| Visual Question Answering | GQA | Mean Accuracy | 58.4 | 196 |
| Visual Question Answering | RealworldQA | Accuracy | 53.6 | 179 |