R1-Onevision: Advancing Generalized Multimodal Reasoning through Cross-Modal Formalization

About

Large Language Models have demonstrated remarkable reasoning capability in complex textual tasks. However, multimodal reasoning, which requires integrating visual and textual information, remains a significant challenge. Existing vision-language models often struggle to effectively analyze and reason about visual content, resulting in suboptimal performance on complex reasoning tasks. Moreover, the absence of comprehensive benchmarks hinders the accurate assessment of multimodal reasoning capabilities. In this paper, we introduce R1-Onevision, a multimodal reasoning model designed to bridge the gap between visual perception and deep reasoning. To achieve this, we propose a cross-modal reasoning pipeline that transforms images into formal textual representations, enabling precise language-based reasoning. Leveraging this pipeline, we construct the R1-Onevision dataset, which provides detailed, step-by-step multimodal reasoning annotations across diverse domains. We further develop the R1-Onevision model through supervised fine-tuning and reinforcement learning to cultivate advanced reasoning and robust generalization abilities. To comprehensively evaluate multimodal reasoning performance across different grades, we introduce R1-Onevision-Bench, a benchmark aligned with human educational stages, covering exams from junior high school to university and beyond. Experimental results show that R1-Onevision achieves state-of-the-art performance, outperforming models such as GPT-4o and Qwen2.5-VL on multiple challenging multimodal reasoning benchmarks.
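The core idea of the pipeline, converting an image into a formal textual representation that a text-only reasoner can then process step by step, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the vision and language models the real pipeline relies on are replaced with stubs.

```python
def formalize_image(image_path: str) -> str:
    """Stub for the visual-formalization stage. In the actual pipeline a
    vision model would emit a dense, structured description of the image
    (objects, embedded text, charts, spatial relations)."""
    # Hypothetical formal description for a geometry-problem image.
    return ("Diagram: right triangle ABC with the right angle at C; "
            "leg AC = 3, leg BC = 4; the hypotenuse AB is unlabeled.")

def build_reasoning_prompt(formal_desc: str, question: str) -> str:
    """Combine the formal description with the question so a text-only
    reasoner can produce step-by-step reasoning over the image content."""
    return (f"Image (formalized): {formal_desc}\n"
            f"Question: {question}\n"
            "Reason step by step, then state the final answer.")

prompt = build_reasoning_prompt(formalize_image("triangle.png"),
                                "What is the length of AB?")
print(prompt)
```

The key design point is that all visual grounding happens in the first stage, so the downstream reasoning stage operates purely over text and can be supervised with step-by-step textual annotations.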

Yi Yang, Xiaoxuan He, Hongkun Pan, Xiyan Jiang, Yan Deng, Xingtao Yang, Haoyu Lu, Dacheng Yin, Fengyun Rao, Minfeng Zhu, Bo Zhang, Wei Chen • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | Accuracy | 83.1 | 1455 |
| Multimodal Understanding | MMBench | Accuracy | 75.6 | 637 |
| Multimodal Understanding | MMMU | Accuracy | 54.3 | 437 |
| Mathematical Reasoning | MathVista | Score | 64.1 | 385 |
| Multimodal Capability Evaluation | MM-Vet | Score | 65.2 | 345 |
| Multimodal Understanding | MMStar | Accuracy | 54.1 | 324 |
| Object Hallucination | POPE Adversarial | Accuracy | 82.5 | 288 |
| Object Hallucination | POPE (Random) | F1 Score | 83.8 | 285 |
| Visual Mathematical Reasoning | MathVista | Accuracy | 64.1 | 278 |
| Object Hallucination | POPE Popular | F1 Score | 83.1 | 273 |

Showing 10 of 181 rows.
