
CARE What Fails: Contrastive Anchored-REflection for Verifiable Multimodal Reasoning

About

Group-relative reinforcement learning with verifiable rewards (RLVR) often wastes the most informative data it already has: the failures. When all rollouts are wrong, gradients stall; when one happens to be correct, the update usually ignores why the others are close-but-wrong, and credit can be misassigned to spurious chains. We present CARE (Contrastive Anchored REflection), a failure-centric post-training framework for multimodal reasoning that turns errors into supervision. CARE combines: (i) an anchored-contrastive objective that forms a compact subgroup around the best rollout and a set of semantically proximate hard negatives, performs within-subgroup z-score normalization with negative-only scaling, and includes an all-negative rescue to prevent zero-signal batches; and (ii) Reflection-Guided Resampling (RGR), a one-shot structured self-repair that rewrites a representative failure and re-scores it with the same verifier, converting near-misses into usable positives without any test-time reflection. CARE improves accuracy and training smoothness while explicitly increasing the share of learning signal that comes from failures. On Qwen2.5-VL-7B, CARE lifts macro-averaged accuracy by 4.6 points over GRPO across six verifiable visual-reasoning benchmarks; with Qwen3-VL-8B it reaches competitive or state-of-the-art results on MathVista and MMMU-Pro under an identical evaluation protocol.
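The anchored-contrastive objective can be sketched as advantage shaping over verifier rewards. The following is a minimal, hypothetical illustration, not the paper's implementation: function and parameter names (`care_subgroup_advantages`, `neg_scale`, `rescue_bonus`, `k`) are assumptions, and "semantically proximate" negatives are approximated here by the highest-scoring incorrect rollouts.

```python
import numpy as np

def care_subgroup_advantages(rewards, k=4, neg_scale=0.5, rescue_bonus=1.0):
    """Hypothetical sketch of anchored-contrastive advantage shaping.

    rewards: verifier scores for all rollouts in a group (1 = correct).
    The anchor is the best rollout; the subgroup is the anchor plus the
    k closest-scoring negatives (a stand-in for semantic proximity).
    """
    r = np.asarray(rewards, dtype=float)
    anchor = int(np.argmax(r))

    # All-negative rescue: if every rollout failed, promote the best
    # failure so the batch still carries a nonzero learning signal.
    if r.max() <= 0:
        r = r.copy()
        r[anchor] += rescue_bonus

    # Compact subgroup: anchor + k highest-scoring other rollouts.
    neg = [i for i in np.argsort(-r) if i != anchor][:k]
    sub = [anchor] + neg

    # Within-subgroup z-score normalization.
    mu, sigma = r[sub].mean(), r[sub].std() + 1e-8
    adv = np.zeros_like(r)
    adv[sub] = (r[sub] - mu) / sigma

    # Negative-only scaling: temper the push away from hard negatives.
    adv[neg] *= neg_scale
    return adv
```

Rollouts outside the subgroup receive zero advantage, so the gradient concentrates on the anchor and its hard negatives rather than diffusing over the whole group.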

Yongxin Wang, Zhicheng Yang, Meng Cao, Mingfei Han, Haokun Lin, Yingying Zhu, Xiaojun Chang, Xiaodan Liang · 2025

Related benchmarks

| Task | Dataset | Accuracy (%) | Rank |
|---|---|---|---|
| Multimodal Reasoning | MMMU (val) | 71.0 | 114 |
| Multimodal Reasoning | MMMU-Pro | 46.7 | 55 |
| Multimodal Mathematical Reasoning | MathVista mini | 82.1 | 35 |
| Multimodal Reasoning | MathVerse mini | 69.7 | 25 |
| Multimodal Reasoning | MATH-Vision (full) | 61.7 | 23 |
