VLM-FO1: Bridging the Gap Between High-Level Reasoning and Fine-Grained Perception in VLMs
About
Vision-Language Models (VLMs) excel at high-level scene understanding but falter on fine-grained perception tasks requiring precise localization. This failure stems from a fundamental mismatch, as generating exact numerical coordinates is a challenging task for language-centric architectures. In this paper, we introduce VLM-FO1, a novel framework that overcomes this limitation by reframing object-centric perception from a brittle coordinate generation problem into a robust feature retrieval task. Our method operates as a plug-and-play module that integrates with any pre-trained VLM. It leverages a Hybrid Fine-grained Region Encoder (HFRE), featuring a dual vision encoder, to generate powerful region tokens rich in both semantic and spatial detail. A token-based referencing system then enables the LLM to seamlessly reason about and ground language in these specific visual regions. Experiments show that VLM-FO1 achieves state-of-the-art performance across a diverse suite of benchmarks, demonstrating exceptional capabilities in object grounding, region generative understanding, and visual region reasoning. Crucially, our two-stage training strategy ensures that these perception gains are achieved without compromising the base model's general visual understanding capabilities. VLM-FO1 establishes an effective and flexible paradigm for building perception-aware VLMs, bridging the gap between high-level reasoning and fine-grained visual grounding.
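The core idea of treating regions as retrievable features rather than generated coordinates can be illustrated with a minimal sketch. The code below is a hypothetical, simplified analogue of the HFRE pipeline, not the paper's implementation: it average-pools a box from two toy feature maps (standing in for the dual semantic and spatial encoders), concatenates the pooled vectors, and projects them into a shared embedding space where each region becomes a single token the LLM could reference. All array shapes and names are illustrative assumptions.

```python
import numpy as np

def roi_avg_pool(feat, box):
    """Average-pool a (H, W, C) feature map over a box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return feat[y1:y2, x1:x2].mean(axis=(0, 1))

def region_tokens(sem_feat, spa_feat, boxes, proj):
    """Hypothetical HFRE-style fusion: concatenate semantic + spatial pooled
    features per box, then project into the LLM embedding space."""
    toks = []
    for box in boxes:
        fused = np.concatenate([roi_avg_pool(sem_feat, box),
                                roi_avg_pool(spa_feat, box)])
        toks.append(fused @ proj)
    return np.stack(toks)

rng = np.random.default_rng(0)
sem = rng.normal(size=(32, 32, 64))   # toy semantic-encoder feature map
spa = rng.normal(size=(32, 32, 32))   # toy spatial/detail-encoder feature map
proj = rng.normal(size=(96, 128))     # (64 + 32) -> toy LLM hidden dim of 128
boxes = [(0, 0, 16, 16), (8, 8, 32, 32)]

tokens = region_tokens(sem, spa, boxes, proj)
print(tokens.shape)  # (2, 128) -- one embedding per region
```

In this framing, a prompt could contain placeholder tokens such as `<region1>` and `<region2>` (names assumed here), whose embeddings are the rows of `tokens`; the model then refers to a region by emitting its token rather than regressing its numeric coordinates.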
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Referring Expression Comprehension | RefCOCO+ (val) | Accuracy 86.4 | 345 |
| Referring Expression Comprehension | RefCOCO (val) | -- | 335 |
| Referring Expression Comprehension | RefCOCO (testA) | -- | 333 |
| Referring Expression Comprehension | RefCOCOg (val) | Accuracy 88.9 | 291 |
| Referring Expression Comprehension | RefCOCOg (test) | -- | 291 |
| Referring Expression Comprehension | RefCOCO+ (testB) | Accuracy 80.6 | 235 |
| Referring Expression Comprehension | RefCOCO+ (testA) | -- | 207 |
| Referring Expression Comprehension | RefCOCO (testB) | -- | 196 |
| Object Detection | ODinW-13 | AP 44 | 98 |
| Referring Expression Comprehension | RefCOCO v1 (val) | Top-1 Accuracy 91.1 | 49 |