
Entropy-Gradient Grounding: Training-Free Evidence Retrieval in Vision-Language Models

About

Despite rapid progress, pretrained vision-language models still struggle when answers depend on tiny visual details or on combining clues spread across multiple regions, as in documents and compositional queries. We address this by framing grounding as test-time evidence retrieval: given a query, the model should actively identify where to look next to resolve ambiguity. To this end, we propose a training-free, model-intrinsic grounding method that uses uncertainty as supervision. Specifically, we compute the entropy of the model's next-token distribution and backpropagate it to the visual token embeddings to obtain an entropy-gradient relevance map, without auxiliary detectors or attention-map heuristics. We then extract and rank multiple coherent regions to support multi-evidence queries, and introduce an iterative zoom-and-reground procedure with a spatial-entropy stopping rule to avoid over-refinement. Experiments on seven benchmarks across four VLM architectures demonstrate consistent improvements over existing methods, with the largest gains on detail-critical and high-resolution settings, while also producing more interpretable evidence localizations.
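The core idea above can be sketched in a few lines. The following is a minimal, illustrative toy example, not the paper's implementation: a stand-in linear head plays the role of the VLM, and all names and shapes are assumptions. It shows the two quantities the abstract describes: the entropy-gradient relevance map (entropy of the next-token distribution backpropagated to visual token embeddings) and the spatial entropy of that map, which the paper uses as a stopping signal for iterative zooming.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy setup (illustrative shapes, not the paper's): a grid of visual
# token embeddings that the language head attends to.
num_visual_tokens, dim, vocab = 16, 32, 100
visual_tokens = torch.randn(num_visual_tokens, dim, requires_grad=True)

# Stand-in for the VLM's next-token head: pool visual tokens and
# project to vocabulary logits. A real VLM would run its full forward pass.
proj = torch.nn.Linear(dim, vocab)
logits = proj(visual_tokens.mean(dim=0))

# Entropy of the next-token distribution acts as the "loss" to be
# backpropagated -- no labels or auxiliary detectors are needed.
probs = F.softmax(logits, dim=-1)
entropy = -(probs * torch.log(probs + 1e-12)).sum()
entropy.backward()

# Per-token relevance: gradient magnitude at each visual token embedding
# (L2 norm over the channel dimension).
relevance = visual_tokens.grad.norm(dim=-1)  # shape: [num_visual_tokens]

# Spatial entropy of the normalized relevance map: a concentrated
# (low-entropy) map suggests the evidence is localized and further
# zooming can stop.
p = relevance / relevance.sum()
spatial_entropy = -(p * torch.log(p + 1e-12)).sum()
```

In the actual method, the relevance map is computed over the model's real visual tokens, coherent high-relevance regions are extracted and ranked, and the crop-and-reground loop repeats until the spatial-entropy criterion is met.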

Marcel Gröpl, Jaewoo Jung, Seungryong Kim, Marc Pollefeys, Sunghwan Hong • 2026

Related benchmarks

Task                                    Dataset              Metric          Result  Rank
Object Hallucination Evaluation         POPE                 Accuracy        89.31   1455
Document Visual Question Answering      DocVQA               ANLS            91.16   263
Visual Reasoning                        GQA                  Accuracy        63.97   93
Visual Question Answering               TextVQA v1.0 (test)  Accuracy        81.45   40
High-resolution Visual Search           V*                   Top-1 Accuracy  86.91   13
Infographic Visual Question Answering   InfoQA               ANLS            73.43   11
Real-world Spatial Understanding        RWQA                 Top-1 Accuracy  66.93   10
