
The Model Knows Which Tokens Matter: Automatic Token Selection via Noise Gating

About

Visual tokens dominate inference cost in vision-language models (VLMs), yet many carry redundant information. Existing pruning methods alleviate this but typically rely on attention magnitude or similarity scores. We reformulate visual token pruning as capacity-constrained communication: given a fixed budget K, the model must allocate limited bandwidth so as to preserve as much visual information as possible. We propose AutoSelect, which attaches a lightweight Scorer and Denoiser to a frozen VLM and trains with only the standard next-token prediction loss, without auxiliary objectives or extra annotations. During training, a variance-preserving noise gate modulates each token's information flow according to its predicted importance so that gradients propagate through all tokens; a diagonal-attention Denoiser then recovers the perturbed representations. At inference, only the Scorer and a hard top-K selection remain, adding negligible latency. On ten VLM benchmarks, AutoSelect retains 96.5% of full-model accuracy while accelerating LLM prefill by 2.85x with only 0.69 ms of overhead, and it transfers across VLM backbones without architecture-specific tuning. Code is available at https://github.com/MedHK23/AutoSelect.
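The two mechanisms the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the exact gate form (here `s·x + sqrt(1 − s²)·ε` with Gaussian noise matched to the token scale), the Scorer architecture, and the score distribution are all assumptions made for the example.

```python
import numpy as np

def noise_gate(tokens, scores, rng):
    """Variance-preserving noise gate (training-time, illustrative sketch).

    Each token x_i is mixed with Gaussian noise according to its predicted
    importance s_i in (0, 1]:
        y_i = s_i * x_i + sqrt(1 - s_i**2) * eps_i,   eps_i ~ N(0, sigma^2 I)
    With sigma matched to the overall token scale, Var[y_i] stays close to
    Var[x_i], and every token keeps a gradient path through s_i.
    """
    sigma = tokens.std()
    eps = rng.normal(0.0, sigma, size=tokens.shape)
    s = scores[:, None]  # broadcast the per-token score over the feature dim
    return s * tokens + np.sqrt(1.0 - s ** 2) * eps

def top_k_select(tokens, scores, k):
    """Inference-time hard selection: keep only the K highest-scoring tokens."""
    keep = np.argsort(scores)[-k:]
    return tokens[np.sort(keep)]  # preserve the original token order

rng = np.random.default_rng(0)
tokens = rng.normal(size=(576, 64))    # e.g. a 24x24 grid of visual tokens
scores = rng.uniform(0.1, 1.0, 576)    # stand-in for the Scorer's outputs
gated = noise_gate(tokens, scores, rng)
kept = top_k_select(tokens, scores, k=144)  # token budget K = 144
```

At inference only `top_k_select` (plus the Scorer itself) runs, which is why the method adds almost no latency: the noise gate and Denoiser exist solely to keep training differentiable over all tokens.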

Landi He, Xiaoyu Yang, Lijian Xu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Visual Question Answering | VizWiz | – | 1525 |
| Object Hallucination Evaluation | POPE | – | 1455 |
| Science Question Answering | ScienceQA | – | 502 |
| Visual Question Answering | GQA | Score: 57.8 | 193 |
| Multimodal Benchmarking | MMBench CN | Score: 57.5 | 129 |
| Text-based Visual Question Answering | TextVQA | Score: 55.3 | 112 |
| Multimodal Understanding | LLaVA Evaluation Suite 1.5 | Average Score: 98.2 | 95 |
| Multi-modal Evaluation | MME | MME Score: 1790 | 89 |
| Multimodal Benchmarking | MMBench (MMB) | MMB Score: 63.4 | 62 |
| Visual Question Answering | VQA v2 | VQA-2 Score: 76.6 | 34 |

Showing 10 of 13 rows.
