
WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning

About

Building on the success of text-based reasoning models like DeepSeek-R1, extending these capabilities to multimodal reasoning holds great promise. While recent works have attempted to adapt DeepSeek-R1-style reinforcement learning (RL) training paradigms to multimodal large language models (MLLMs), focusing on domain-specific tasks like math and visual perception, a critical question remains: How can we achieve general-purpose vision-language reasoning through RL? To address this challenge, we make three key contributions: (1) a novel scalable multimodal QA synthesis pipeline that autonomously generates context-aware, reasoning-centric question-answer (QA) pairs directly from given images; (2) the open-source WeThink dataset, containing over 120K multimodal QA pairs with annotated reasoning paths, curated from 18 diverse dataset sources and covering a wide range of question domains; and (3) a comprehensive exploration of RL on our dataset, incorporating a hybrid reward mechanism that combines rule-based verification with model-based assessment to improve RL training efficiency across task domains. Across 14 diverse MLLM benchmarks, we demonstrate that the WeThink dataset significantly enhances performance, from mathematical reasoning to diverse general multimodal tasks. Moreover, we show that our automated data pipeline can continuously increase data diversity to further improve model performance.
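The hybrid reward described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the numeric-matching rule, and the `judge` callable are all hypothetical stand-ins. The idea is to use cheap rule-based verification when an answer is objectively checkable (e.g., a numeric math answer) and fall back to a model-based judge for open-ended responses:

```python
# Hypothetical sketch of a hybrid reward for RL training (not the paper's code).
# Rule-based verification handles objectively checkable (numeric) answers;
# a model-based judge scores everything else.

import re
from typing import Callable, Optional


def rule_based_reward(prediction: str, reference: str) -> Optional[float]:
    """Return 1.0/0.0 when the reference is numerically verifiable, else None."""
    nums_ref = re.findall(r"-?\d+\.?\d*", reference)
    if not nums_ref:
        return None  # not a verifiable numeric answer; defer to the judge
    nums_pred = re.findall(r"-?\d+\.?\d*", prediction)
    if not nums_pred:
        return 0.0  # reference is numeric but the prediction gave no number
    # Compare the last number in each string (a common answer-extraction heuristic).
    return 1.0 if abs(float(nums_pred[-1]) - float(nums_ref[-1])) < 1e-6 else 0.0


def hybrid_reward(
    prediction: str,
    reference: str,
    judge: Optional[Callable[[str, str], float]] = None,
) -> float:
    """Rule-based check first; fall back to a model-based judge if inconclusive."""
    reward = rule_based_reward(prediction, reference)
    if reward is not None:
        return reward
    # `judge` is a placeholder for a model-based scorer (e.g., an LLM grader).
    return judge(prediction, reference) if judge is not None else 0.0
```

For example, `hybrid_reward("The answer is 42", "42")` resolves via the rule, while a free-form answer pair is routed to the judge callable. A real system would also need robust answer extraction and a calibrated judge model.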

Jie Yang, Feipeng Ma, Zitian Wang, Dacheng Yin, Kang Rong, Fengyun Rao, Ruimao Zhang • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Grounding | DIOR-RSVG | Accuracy@0.5 | 34.51 | 25
Mathematical Reasoning | MathVista | Accuracy (All) | 70.9 | 23
General Multimodal Understanding | General Multimodal Evaluation Suite (MMMU, MMBench, MME, ChartQA, AI2D, HallBench) | MMMU (Val) | 50.9 | 14
Visual Perception and Reasoning | V* Bench 1.0 (test) | Attribute Score | 82.61 | 13
Visual Grounding | VRSBench Ref | IoU@50 | 35.56 | 10
Visual Question Answering | RSFG-SC | Scene Accuracy | 60.12 | 10
Visual Question Answering | VRSBench | Avg@5 | 62.17 | 10
Visual Question Answering | RSFG-VQA | Avg@5 | 0.5504 | 10
Visual Question Answering | RSVQA | Avg@5 | 40.74 | 10
