
ConFoThinking: Consolidated Focused Attention Driven Thinking for Visual Question Answering

About

Thinking with Images improves fine-grained VQA for MLLMs by emphasizing visual cues. However, tool-augmented methods depend on grounding capability, which remains unreliable for MLLMs. In parallel, attention-driven methods that crop Regions of Interest (ROIs) have been proposed, but they are constrained by (1) fragmented attention signals scattered across layers, leading to suboptimal localization, and (2) reliance on question- or redundant-text-conditioned attention extraction. Our analysis reveals three patterns: MLLMs may attend to the correct region yet generate incorrect coordinates; where-to-look attention is often fragmented across layers; and attention extraction is query-sensitive. Motivated by these findings, we propose ConFoThinking, a Consolidated-Focused-Attention-Driven Thinking framework that learns to aggregate attention into a designated intermediate layer, from which salient regions are mined and zoomed in for downstream visual understanding. Moreover, we extract attention using concise semantic cues of what to look at, which mitigates the semantic noise introduced by question- or redundant-text-based attention extraction. Experiments on five VQA benchmarks demonstrate that ConFoThinking significantly improves perception performance. The code, checkpoints, and dataset will be released upon acceptance.
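The pipeline the abstract describes (consolidate per-layer attention into one map, then mine and zoom into the salient region) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the learned aggregation into a designated layer is replaced here by a simple weighted average, and `consolidate_attention` / `crop_salient_region` are hypothetical helper names.

```python
import numpy as np

def consolidate_attention(layer_maps, weights=None):
    """Fuse per-layer attention maps of shape (L, H, W) into one (H, W) map.

    Assumption: ConFoThinking *learns* this consolidation into an
    intermediate layer; a uniform weighted average stands in for it here.
    """
    maps = np.asarray(layer_maps, dtype=float)
    if weights is None:
        weights = np.full(maps.shape[0], 1.0 / maps.shape[0])
    fused = np.tensordot(weights, maps, axes=1)      # (H, W)
    return fused / (fused.max() + 1e-8)              # normalize to [0, 1]

def crop_salient_region(image, attn, threshold=0.5):
    """Zoom in on the bounding box of above-threshold attention."""
    ys, xs = np.where(attn >= threshold)
    if ys.size == 0:                                 # no salient region found
        return image
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return image[y0:y1, x0:x1]

# Toy usage: three layers all attend to a 3x3 patch of an 8x8 image.
image = np.arange(64).reshape(8, 8)
layer_maps = np.zeros((3, 8, 8))
layer_maps[:, 2:5, 3:6] = 1.0
fused = consolidate_attention(layer_maps)
crop = crop_salient_region(image, fused)             # -> 3x3 zoomed-in region
```

In the real framework the cropped region would be re-encoded at higher resolution and fed back to the MLLM for the downstream answer; the sketch only covers the localization step.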

Zhaodong Wu, Haochen Xue, Qi Cao, Wenqi Mo, Yu Pei, Wenqi Xu, Jionglong Su, Yang Liu• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | GQA | Accuracy | 74.9 | 505 |
| Visual Question Answering | InfoVQA (val) | Accuracy | 87.9 | 91 |
| Visual Question Answering | V*Bench | Accuracy | 92.1 | 84 |
| Visual Question Answering | HRBench-4K | FSP Score | 92.8 | 15 |
| Visual Question Answering | HRBench-8K | FSP Score | 87.3 | 15 |
