
Jigsaw-R1: A Study of Rule-based Visual Reinforcement Learning with Jigsaw Puzzles

About

The application of rule-based reinforcement learning (RL) to multimodal large language models (MLLMs) introduces unique challenges and potential deviations from findings in text-only domains, particularly for perception-heavy tasks. This paper provides a comprehensive study of rule-based visual RL, using jigsaw puzzles as a structured experimental framework. Jigsaw puzzles offer inherent ground truth, adjustable difficulty, and demand complex decision-making, making them ideal for this study. Our research reveals several key findings. First, MLLMs, initially performing near random guessing on the simplest jigsaw puzzles, achieve near-perfect accuracy and generalize to complex, unseen configurations through fine-tuning. Second, training on jigsaw puzzles can induce generalization to other visual tasks, with effectiveness tied to specific task configurations. Third, MLLMs can learn and generalize with or without explicit reasoning, though open-source models often favor direct answering; consequently, even when trained for step-by-step reasoning, they can ignore the thinking process when deriving the final answer. Fourth, complex reasoning patterns appear to be pre-existing rather than emergent, with their frequency increasing with training and task difficulty. Finally, our results demonstrate that RL generalizes more effectively than supervised fine-tuning (SFT), and that an initial SFT cold-start phase can hinder subsequent RL optimization. Although these observations are based on jigsaw puzzles and may vary across other visual tasks, this research contributes a valuable piece of the jigsaw to the larger puzzle of our collective understanding of rule-based visual RL and its potential in multimodal learning. The code is available at: https://github.com/zifuwanggg/Jigsaw-R1
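The appeal of jigsaw puzzles for rule-based RL is that the reward is verifiable by construction: shuffling an image's tiles yields a ground-truth permutation against which any predicted arrangement can be checked exactly. Below is a minimal sketch of that idea under stated assumptions; the function names (make_jigsaw, jigsaw_reward) and the exact-match reward are illustrative choices, not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch of a jigsaw task with a verifiable, rule-based reward.
# Hypothetical helpers for illustration; not the Jigsaw-R1 codebase.
import random
from PIL import Image


def make_jigsaw(image: Image.Image, grid: int = 2):
    """Split an image into grid x grid tiles and shuffle them.

    Returns the shuffled tiles and the ground-truth permutation:
    for each shuffled position, the index of the original tile.
    """
    w, h = image.size
    tw, th = w // grid, h // grid
    tiles = [
        image.crop((c * tw, r * th, (c + 1) * tw, (r + 1) * th))
        for r in range(grid)
        for c in range(grid)
    ]
    order = list(range(grid * grid))
    random.shuffle(order)
    return [tiles[i] for i in order], order


def jigsaw_reward(predicted: list[int], truth: list[int]) -> float:
    """Rule-based reward: 1.0 for an exact reconstruction, else 0.0.

    A softer variant could score the fraction of correctly placed tiles.
    """
    return 1.0 if predicted == truth else 0.0


if __name__ == "__main__":
    img = Image.new("RGB", (224, 224))  # stand-in for a real image
    shuffled_tiles, truth = make_jigsaw(img, grid=2)
    print(jigsaw_reward(truth, truth))  # 1.0: a perfect reconstruction
```

Difficulty is adjustable through the grid size alone: a 2x2 puzzle has 24 possible arrangements, while a 3x3 puzzle has 362,880, which matches the paper's use of simple configurations for training and more complex, unseen ones for testing generalization.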

Zifu Wang, Junyi Zhu, Bo Tang, Zhiyu Li, Feiyu Xiong, Jiaqian Yu, Matthew B. Blaschko • 2025

Related benchmarks

Task                            | Dataset        | Result              | Rank
Object Hallucination Evaluation | POPE           | -                   | 935
Multimodal Understanding        | SEED-Bench     | Accuracy: 72.46     | 203
Multimodal Understanding        | MMStar         | Accuracy: 55.53     | 197
Multimodal Understanding        | MME            | MME Score: 2.18e+3  | 158
Visual Perception               | MMVP           | Accuracy: 69.66     | 47
Multimodal Reasoning            | MMT-Bench      | Accuracy: 57.53     | 23
Color Understanding             | ColorBench     | Accuracy: 32.25     | 18
Visual Grounding                | Lisa Grounding | Accuracy: 61.28     | 18
2D Computer Vision Benchmarking | CVBench2D      | Accuracy: 70.87     | 13
