
CropVLM: Learning to Zoom for Fine-Grained Vision-Language Perception

About

Vision-Language Models (VLMs) often struggle with tasks that require fine-grained image understanding, such as scene-text recognition or document analysis, due to perception limitations and visual fragmentation. To address these challenges, we introduce CropVLM, an external, low-cost method for boosting performance that enables VLMs to dynamically "zoom in" on relevant image regions, enhancing their ability to capture fine details. CropVLM is trained with reinforcement learning, without human-labeled bounding boxes as a supervision signal and without expensive synthetic evaluations. The model is trained once and can be paired with both open-source and proprietary VLMs to improve their performance. Our approach delivers significant improvements on tasks that require high-resolution image understanding, notably on benchmarks that are out-of-domain for the target VLM, without modifying or fine-tuning the VLM, thus avoiding catastrophic forgetting.
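The zoom-then-answer idea described above can be sketched as a simple two-stage pipeline: a lightweight cropper proposes a region of interest, the image is cropped to that region, and both the original image and the zoomed crop are passed to an unmodified VLM. The sketch below is a minimal illustration under assumptions; all function names (`crop`, `cropvlm_answer`, the cropper/VLM stand-ins) are hypothetical, not the authors' API.

```python
def crop(image, bbox):
    """Crop a 2D image (list of rows) using a normalized (x0, y0, x1, y1) box."""
    h, w = len(image), len(image[0])
    x0, y0, x1, y1 = bbox
    r0, r1 = int(y0 * h), max(int(y1 * h), int(y0 * h) + 1)
    c0, c1 = int(x0 * w), max(int(x1 * w), int(x0 * w) + 1)
    return [row[c0:c1] for row in image[r0:r1]]

def cropvlm_answer(image, question, cropper, vlm):
    """Zoom-then-answer: the cropper picks a region, the VLM sees both views."""
    bbox = cropper(image, question)        # e.g. (0.25, 0.25, 0.75, 0.75)
    zoomed = crop(image, bbox)
    return vlm(question, [image, zoomed])  # the VLM itself is left untouched

# Toy stand-ins: a fixed cropper, and a "VLM" that just reports view sizes.
dummy_cropper = lambda img, q: (0.25, 0.25, 0.75, 0.75)
dummy_vlm = lambda q, views: [(len(v), len(v[0])) for v in views]

image = [[0] * 8 for _ in range(8)]  # 8x8 toy "image"
print(cropvlm_answer(image, "What does the sign say?", dummy_cropper, dummy_vlm))
# → [(8, 8), (4, 4)]
```

Because the VLM only receives an extra image view, the same trained cropper can be plugged in front of any open-source or proprietary model without fine-tuning it.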

Miguel Carvalho, Helder Dias, Bruno Martins • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Question Answering | TextVQA | Accuracy | 75.72 | 1285
Visual Question Answering | DocVQA | Accuracy | 92.83 | 162
Visual Question Answering | InfoVQA | Accuracy | 75.6 | 135
Document Visual Question Answering | DocVQA | Accuracy | 84.41 | 132
Visual Question Answering | TextVQA | TextVQA Accuracy | 80.12 | 67
Visual Question Answering | HRBench 4K | Accuracy | 0.6638 | 54
Information Visual Question Answering | InfoVQA | Accuracy | 55.95 | 52
Visual Question Answering | HRBench-8K | Accuracy | 65.63 | 51
Visual Question Answering | V* | Accuracy | 74.35 | 45
Visual Question Answering | ST-VQA | Accuracy | 69.09 | 30
(Showing 10 of 12 rows.)
