Focus, Don't Prune: Identifying Instruction-Relevant Regions for Information-Rich Image Understanding
About
Large Vision-Language Models (LVLMs) have shown strong performance across various multimodal tasks by leveraging the reasoning capabilities of Large Language Models (LLMs). However, processing visually complex and information-rich images, such as infographics or document layouts, requires these models to generate a large number of visual tokens, leading to significant computational overhead. To address this, we propose PinPoint, a novel two-stage framework that first identifies instruction-relevant image regions and then refines them to extract fine-grained visual features, improving both reasoning and efficiency. Central to our approach is the Instruction-Region Alignment module, which localizes relevant regions using both the visual input and the textual instruction. We further introduce new annotations that provide richer ground-truth supervision for instruction-relevant regions across challenging VQA benchmarks: InfographicVQA, MultiPageDocVQA, and SinglePageDocVQA. Experimental results show that PinPoint not only achieves superior accuracy compared to existing methods but also reduces computational overhead by minimizing irrelevant visual tokens.
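The two-stage idea above can be sketched in miniature: stage one scores candidate regions against the instruction, stage two keeps only the top-ranked regions for fine-grained re-encoding. All names, the similarity function, and the data below are illustrative assumptions; the paper's actual Instruction-Region Alignment module is a learned component, not a dot product.

```python
# Minimal, assumption-laden sketch of a "focus, don't prune" pipeline.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Region:
    box: Tuple[int, int, int, int]  # (x0, y0, x1, y1) in pixels
    score: float = 0.0              # instruction-relevance score

def align_regions(regions: List[Region],
                  instruction_emb: List[float],
                  region_embs: List[List[float]]) -> List[Region]:
    """Stage 1 (hypothetical): score each candidate region by similarity
    between its embedding and the instruction embedding."""
    for region, emb in zip(regions, region_embs):
        region.score = sum(a * b for a, b in zip(instruction_emb, emb))
    return regions

def focus(regions: List[Region], top_k: int = 2) -> List[Region]:
    """Stage 2 (hypothetical): keep only the top-k instruction-relevant
    regions; these would then be re-encoded at higher resolution instead
    of spending visual tokens on the whole image."""
    return sorted(regions, key=lambda r: r.score, reverse=True)[:top_k]

# Toy example: three candidate regions, instruction aligned with axis 0.
regions = [Region((0, 0, 100, 100)),
           Region((100, 0, 200, 100)),
           Region((0, 100, 100, 200))]
instruction = [1.0, 0.0]
embeddings = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]
kept = focus(align_regions(regions, instruction, embeddings), top_k=2)
print([r.box for r in kept])  # → [(100, 0, 200, 100), (0, 100, 100, 200)]
```

The point of the sketch is the control flow, not the scoring: only the regions surviving `focus` contribute visual tokens downstream, which is where the computational savings come from.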
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hallucination Evaluation | AMBER | CHAIR | 8 | 172 |
| Multimodal Understanding | MMMU | MMMU Score | 35 | 59 |
| Visual Question Answering | InfoVQA | ANLS Score | 71.4 | 31 |
| Visual Question Answering | TextVQA | Score | 72.93 | 20 |
| Visual Question Answering | SPDocVQA | ANLS | 89.77 | 12 |
| Visual Question Answering | MPDocVQA | ANLS | 0.6723 | 12 |
| Visual Question Answering | GQA | Accuracy | 76.24 | 12 |
| Object Hallucination Evaluation | MSCOCO (val) | CHAIRS | 25.6 | 6 |
| Multimodal Understanding | MMMU-Pro standard 10 | Score | 19.9 | 4 |
| Hallucination Evaluation | MSCOCO | Rand. Score | 89 | 2 |