
Beyond Static Cropping: Layer-Adaptive Visual Localization and Decoding Enhancement

About

Large Vision-Language Models (LVLMs) have advanced rapidly by aligning visual patches with the text embedding space, but a fixed visual-token budget forces images to be resized to a uniform pretraining resolution, often erasing fine-grained details and causing hallucinations through over-reliance on language priors. Recent attention-guided enhancement methods (e.g., cropping or region-focused attention allocation) alleviate this, yet they commonly hinge on a static "magic layer" chosen empirically on simple recognition benchmarks, and thus may not transfer to complex reasoning tasks. In contrast to this static assumption, we propose a dynamic perspective on visual grounding. Through a layer-wise sensitivity analysis, we demonstrate that visual grounding is a dynamic process: while simple object-recognition tasks rely on middle layers, complex visual search and reasoning tasks require visual information to be reactivated at deeper layers. Based on this observation, we introduce Visual Activation by Query (VAQ), a metric that identifies the layer whose attention map is most relevant to query-specific visual grounding by measuring the attention's sensitivity to the input query. Building on VAQ, we further propose LASER (Layer-adaptive Attention-guided Selective visual and decoding Enhancement for Reasoning), a training-free inference procedure that adaptively selects task-appropriate layers for visual localization and question answering. Experiments on diverse VQA benchmarks show that LASER significantly improves accuracy across tasks of varying complexity.
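The abstract does not spell out how VAQ scores layers. As one concrete reading, "attention sensitivity to the input query" can be approximated by comparing each layer's attention distribution over visual tokens when the query text is present versus absent, and selecting the layer whose distribution shifts the most. The sketch below is a minimal, hypothetical implementation under that assumption; the tensor layout and the KL-based score are ours, not the paper's.

```python
# Hypothetical sketch of VAQ-style layer selection, NOT the authors'
# reference implementation. Assumes the model exposes per-layer
# attention over visual tokens, averaged over heads and answer
# positions, computed once with the query and once without it.
import torch

def vaq_select_layer(attn_with_query: torch.Tensor,
                     attn_no_query: torch.Tensor,
                     eps: float = 1e-8) -> int:
    """Pick the layer whose visual attention is most query-sensitive.

    Both tensors have shape [num_layers, num_visual_tokens]: the
    attention mass each layer places on each visual token.
    """
    # Normalize each layer's attention into a distribution over tokens.
    p = attn_with_query / (attn_with_query.sum(dim=-1, keepdim=True) + eps)
    q = attn_no_query / (attn_no_query.sum(dim=-1, keepdim=True) + eps)
    # Score each layer by KL(p || q): how much the query reshapes
    # where that layer looks in the image.
    kl = (p * ((p + eps) / (q + eps)).log()).sum(dim=-1)
    return int(kl.argmax().item())

# Toy usage: 32 layers, 576 visual tokens (e.g., a 24x24 patch grid).
torch.manual_seed(0)
base = torch.rand(32, 576)
shift = base.clone()
shift[20] += torch.rand(576) * 5.0    # layer 20 reacts strongly to the query
print(vaq_select_layer(shift, base))  # -> 20
```

Any divergence would serve as the sensitivity score here; KL is used only because both inputs are already normalized attention distributions.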

Zipeng Zhu, Zhanghao Hu, Qinglin Zhu, Yuxi Hong, Yijun Liu, Jingyong Su, Yulan He, Lin Gui • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | TextVQA | Accuracy | 65.36 | 1117 |
| Direct Answer Visual Question Answering | A-OKVQA | Accuracy | 62.82 | 18 |
| Visual Question Answering | POPE (Random) | Accuracy | 89.86 | 8 |
| Visual Question Answering | POPE (Popular) | Accuracy | 88.26 | 8 |
| Visual Question Answering | POPE (Adversarial) | Accuracy | 85.86 | 8 |
| Localization | RefCOCO (test) | Attention Aggregation Ratio | 41.77 | 6 |
| Localization | RefCOCO+ (test) | Attention Aggregation Ratio | 38.44 | 6 |
| Localization | RefCOCOg (test) | Attention Aggregation Ratio | 31.92 | 6 |
