
Head-Aware Visual Cropping: Enhancing Fine-Grained VQA with Attention-Guided Subimage

About

Multimodal Large Language Models (MLLMs) show strong performance in Visual Question Answering (VQA) but remain limited in fine-grained reasoning due to low-resolution inputs and noisy attention aggregation. We propose Head-Aware Visual Cropping (HAVC), a training-free method that improves visual grounding by leveraging a selectively refined subset of attention heads. HAVC first filters heads through an OCR-based diagnostic task, ensuring that only those with genuine grounding ability are retained. At inference, these heads are further refined using spatial entropy for stronger spatial concentration and gradient sensitivity for predictive contribution. The fused signals produce a reliable Visual Cropping Guidance Map, which highlights the most task-relevant region and guides the cropping of a subimage that is then provided to the MLLM together with the image-question pair. Extensive experiments on multiple fine-grained VQA benchmarks demonstrate that HAVC consistently outperforms state-of-the-art cropping strategies, achieving more precise localization and stronger visual grounding, and providing a simple yet effective strategy for enhancing precision in MLLMs.
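The fusion step described in the abstract (entropy-weighted, gradient-weighted averaging of OCR-filtered attention heads into a guidance map, followed by cropping) could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the exponential-of-negative-entropy weighting, the quantile-based crop threshold, and all function names and parameters are assumptions for the sake of the example.

```python
import numpy as np

def spatial_entropy(attn):
    # attn: (h, w) nonnegative attention map; lower entropy = more spatially concentrated
    p = attn / (attn.sum() + 1e-8)
    return -np.sum(p * np.log(p + 1e-8))

def havc_guidance_map(head_maps, ocr_pass, grad_sens):
    """Fuse retained heads into a Visual Cropping Guidance Map (sketch).

    head_maps: (H, h, w) per-head image attention
    ocr_pass:  (H,) boolean -- heads that passed the OCR-based diagnostic
    grad_sens: (H,) nonnegative gradient-sensitivity scores
    """
    kept = head_maps[ocr_pass]
    sens = grad_sens[ocr_pass]
    # Concentration weight: invert entropy so sharply peaked heads dominate
    # (exp(-entropy) is an illustrative choice, not necessarily the paper's)
    ent = np.array([spatial_entropy(m) for m in kept])
    weights = np.exp(-ent) * sens
    weights = weights / (weights.sum() + 1e-8)
    # Weighted sum over heads -> (h, w) guidance map
    return np.tensordot(weights, kept, axes=1)

def crop_box(guidance, keep=0.25):
    # Bounding box of the top `keep` fraction of guidance values
    thresh = np.quantile(guidance, 1 - keep)
    ys, xs = np.where(guidance >= thresh)
    return xs.min(), ys.min(), xs.max() + 1, ys.max() + 1  # x0, y0, x1, y1
```

In use, the returned box would be scaled from attention-grid coordinates to pixel coordinates, and the cropped subimage fed to the MLLM alongside the original image-question pair.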

Junfei Xie, Peng Pan, Xulong Zhang • 2026

Related benchmarks

Task                      | Dataset | Metric   | Result | Rank
Visual Question Answering | GQA     | Accuracy | 72.5   | 963
Visual Question Answering | VQAv2   | Accuracy | 77.78  | 177
Visual Question Answering | A-OKVQA | Accuracy | 61.04  | 175
Visual Question Answering | TextVQA | Accuracy | 57.6   | 79
Visual Question Answering | POPE    | Accuracy | 85.8   | 71
Visual Question Answering | V*      | Accuracy | 49.73  | 10
