Puzzle Curriculum GRPO for Vision-Centric Reasoning

About

Recent reinforcement learning (RL) approaches like outcome-supervised GRPO have advanced chain-of-thought reasoning in Vision Language Models (VLMs), yet key issues linger: (i) reliance on costly and noisy hand-curated annotations or external verifiers; (ii) flat and sparse reward schemes in GRPO; and (iii) logical inconsistency between a chain's reasoning and its final answer. We present Puzzle Curriculum GRPO (PC-GRPO), a supervision-free recipe for RL with Verifiable Rewards (RLVR) that strengthens visual reasoning in VLMs without annotations or external verifiers. PC-GRPO replaces labels with three self-supervised puzzle environments: PatchFit, Rotation (with binary rewards) and Jigsaw (with graded partial credit mitigating reward sparsity). To counter flat rewards and vanishing group-relative advantages, we introduce a difficulty-aware curriculum that dynamically weights samples and peaks at medium difficulty. We further monitor Reasoning-Answer Consistency (RAC) during post-training: mirroring reports for vanilla GRPO in LLMs, RAC typically rises early then degrades; our curriculum delays this decline, and consistency-enforcing reward schemes further boost RAC. RAC correlates with downstream accuracy. Across diverse benchmarks and on Qwen-7B and Qwen-3B backbones, PC-GRPO improves reasoning quality, training stability, and end-task accuracy, offering a practical path to scalable, verifiable, and interpretable RL post-training for VLMs.
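
As a rough illustration of the ideas described above, the sketch below shows one way a graded Jigsaw reward and a difficulty-aware curriculum weight could be written. All function names and the specific weighting formula (4·p·(1−p), which peaks at medium difficulty p = 0.5) are assumptions for illustration only, not the paper's actual implementation.

```python
# Illustrative sketch only: hypothetical reward and curriculum-weight functions
# in the spirit of PC-GRPO (graded Jigsaw partial credit; difficulty-aware
# weighting that peaks at medium difficulty). Not the paper's implementation.

def jigsaw_reward(predicted_order: list[int], true_order: list[int]) -> float:
    """Graded partial credit: fraction of patches placed at their correct position."""
    assert len(predicted_order) == len(true_order)
    correct = sum(p == t for p, t in zip(predicted_order, true_order))
    return correct / len(true_order)

def rotation_reward(predicted_angle: int, true_angle: int) -> float:
    """Binary reward: 1 if the predicted rotation matches exactly, else 0."""
    return 1.0 if predicted_angle == true_angle else 0.0

def curriculum_weight(group_mean_reward: float) -> float:
    """Difficulty-aware sample weight that peaks at medium difficulty.

    A group whose rollouts all succeed (mean reward near 1) or all fail (near 0)
    yields a nearly flat reward signal and vanishing group-relative advantage in
    GRPO; weighting samples by 4*p*(1-p) emphasizes medium-difficulty groups.
    The specific functional form here is an assumption.
    """
    p = group_mean_reward
    return 4.0 * p * (1.0 - p)

# Example: a jigsaw rollout with 2 of 4 patches correct, on a medium-difficulty sample.
r = jigsaw_reward([0, 1, 3, 2], [0, 1, 2, 3])   # 0.5
w = curriculum_weight(group_mean_reward=0.5)    # 1.0 (peak weight)
print(r, w)
```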

Ahmadreza Jeddi, Hakki Can Karaimer, Hue Nguyen, Zhongling Wang, Ke Zhao, Javad Rajabi, Ran Zhang, Raghav Goyal, Babak Taati, Radek Grzeszczuk • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | – | – | 935 |
| Multimodal Evaluation | MME | Score | 2.57e+3 | 557 |
| Multimodal Understanding | SEED-Bench | Accuracy | 71.9 | 203 |
| Multimodal Understanding | MMStar | Accuracy | 57.53 | 197 |
| Multimodal Understanding | MME | MME Score | 2.22e+3 | 158 |
| Multimodal Evaluation | SEED-Bench | Accuracy | 77.01 | 80 |
| Visual Perception | MMVP | Accuracy | 69 | 47 |
| Multimodal Evaluation | MMStar | Accuracy | 65.8 | 46 |
| Multimodal Reasoning | MMT-Bench | Accuracy | 57.88 | 23 |
| Vision Understanding | CVBench 2D | Accuracy | 77.76 | 22 |

Showing 10 of 18 rows.
