The Model Knows Which Tokens Matter: Automatic Token Selection via Noise Gating
About
Visual tokens dominate inference cost in vision-language models (VLMs), yet many carry redundant information. Existing pruning methods alleviate this but typically rely on attention magnitude or similarity scores. We reformulate visual token pruning as capacity-constrained communication: given a fixed token budget K, the model must allocate limited bandwidth to preserve as much visual information as possible. We propose AutoSelect, which attaches a lightweight Scorer and Denoiser to a frozen VLM and trains with only the standard next-token prediction loss, without auxiliary objectives or extra annotations. During training, a variance-preserving noise gate modulates each token's information flow according to its predicted importance so that gradients propagate through all tokens; a diagonal-attention Denoiser then recovers the perturbed representations. At inference, only the Scorer and a hard top-K selection remain, adding negligible latency. On ten VLM benchmarks, AutoSelect retains 96.5% of full-model accuracy while accelerating LLM prefill by 2.85x with only 0.69 ms of overhead, and it transfers to different VLM backbones without architecture-specific tuning. Code is available at https://github.com/MedHK23/AutoSelect.
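The two mechanisms in the abstract can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the paper's implementation: the exact gate formulation, the Scorer architecture, and the Denoiser are not specified here. The sketch assumes the Scorer emits a per-token importance p in (0, 1); a gate of the form p·x + sqrt(1 − p²)·ε preserves variance when token features and noise are unit-variance, since Var = p² + (1 − p²) = 1, while still letting gradients reach every token. At inference, the noise path is dropped and a hard top-K selection keeps the highest-scoring tokens.

```python
import numpy as np

def variance_preserving_gate(tokens, scores, rng):
    """Training-time soft gate (hypothetical form): blend each token with
    Gaussian noise by its predicted importance p, so that per-token variance
    is preserved when inputs and noise are unit-variance:
    Var(p*x + sqrt(1 - p^2)*eps) = p^2 + (1 - p^2) = 1."""
    noise = rng.standard_normal(tokens.shape)
    p = scores[:, None]  # broadcast importance over the feature dimension
    return p * tokens + np.sqrt(1.0 - p ** 2) * noise

def top_k_select(tokens, scores, k):
    """Inference-time hard selection: keep the k highest-scoring tokens,
    preserving their original (positional) order."""
    keep = np.sort(np.argsort(scores)[-k:])
    return tokens[keep], keep

rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 4))       # 8 visual tokens, feature dim 4
scores = rng.uniform(0.05, 0.95, size=8)   # stand-in for Scorer outputs

gated = variance_preserving_gate(tokens, scores, rng)  # training path
kept, idx = top_k_select(tokens, scores, k=3)          # inference path
```

Because the soft gate touches every token during training, the Scorer receives gradient signal for low-importance tokens too; only the cheap `top_k_select` survives at inference, which is consistent with the negligible-latency claim.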
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VizWiz | -- | -- | 1525 |
| Object Hallucination Evaluation | POPE | -- | -- | 1455 |
| Science Question Answering | ScienceQA | -- | -- | 502 |
| Visual Question Answering | GQA | Score | 57.8 | 193 |
| Multimodal Benchmarking | MMBench CN | Score | 57.5 | 129 |
| Text-based Visual Question Answering | TextVQA | Score | 55.3 | 112 |
| Multimodal Understanding | LLaVA Evaluation Suite 1.5 | Average Score | 98.2 | 95 |
| Multi-modal Evaluation | MME | MME Score | 1.79e+3 | 89 |
| Multimodal Benchmarking | MMBench (MMB) | MMB Score | 63.4 | 62 |
| Visual Question Answering | VQA v2 | VQA-2 Score | 76.6 | 34 |