Geo-R1: Unlocking VLM Geospatial Reasoning with Cross-View Reinforcement Learning
About
We introduce Geo-R1, a reasoning-centric post-training framework that unlocks geospatial reasoning in vision-language models through two stages: scaffolding and elevating. In the scaffolding stage, Geo-R1 instills a "geospatial thinking paradigm" via supervised fine-tuning on synthetic chain-of-thought exemplars, enabling models to connect visual cues with geographic priors without costly human reasoning annotations. In the elevating stage, it applies GRPO-based reinforcement learning to a weakly supervised cross-view pairing proxy task. This design supplies a verifiable and scalable reward signal: it teaches models to capture and reconcile features across modalities, and harnesses reasoning for accurate prediction. Geo-R1 extends geospatial modeling beyond domain pretraining and supervised fine-tuning to reasoning-first post-training, and achieves state-of-the-art performance across a range of geospatial reasoning benchmarks. Our model is available at https://huggingface.co/miniHui/Geo-R1.
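The appeal of the cross-view pairing proxy is that its reward is *verifiable*: the correct overhead/ground-view pairing is known from the data itself, so no human grading is needed. As a minimal sketch (not the authors' code; the answer format, function names, and grouping scheme here are illustrative assumptions), a binary pairing reward combined with GRPO-style group-normalized advantages could look like:

```python
import re

def pairing_reward(completion: str, gold_index: int) -> float:
    """Binary, verifiable reward for the cross-view pairing proxy task:
    1.0 if the model's final answer names the correct candidate index,
    else 0.0. Assumes answers are emitted as '<answer>N</answer>'."""
    match = re.search(r"<answer>\s*(\d+)\s*</answer>", completion)
    if match is None:
        return 0.0  # unparseable output earns no reward
    return 1.0 if int(match.group(1)) == gold_index else 0.0

def group_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """GRPO-style advantages: normalize each sampled completion's reward
    against the mean and std of its sampling group."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    return [(r - mean) / (var ** 0.5 + eps) for r in rewards]

# Three sampled completions for one prompt; candidate 2 is the true pair.
completions = ["<answer>2</answer>", "<answer>0</answer>", "no final answer"]
rewards = [pairing_reward(c, gold_index=2) for c in completions]
advs = group_advantages(rewards)
```

The group normalization is what lets GRPO dispense with a learned value model: completions are scored only relative to their siblings from the same prompt.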
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Grounding | DIOR-RSVG | Accuracy@0.5 | 17.67 | 25 |
| Visual Question Answering | VRSBench | Avg@5 | 57 | 10 |
| Visual Grounding | VRSBench Ref | IoU@50 | 17.18 | 10 |
| Visual Question Answering | RSFG-SC | Scene Accuracy | 52.46 | 10 |
| Visual Question Answering | RSFG-VQA | Avg@5 | 0.4503 | 10 |
| Visual Question Answering | RSVQA | Avg@5 | 34.5 | 10 |