
Robo-Dopamine: General Process Reward Modeling for High-Precision Robotic Manipulation

About

The primary obstacle to applying reinforcement learning (RL) to real-world robotics is the design of effective reward functions. While learned Process Reward Models (PRMs) have recently emerged as a promising direction, they are often hindered by two fundamental limitations: their reward models lack step-aware understanding and rely on single-view perception, leading to unreliable assessments of fine-grained manipulation progress; and their reward shaping procedures are theoretically unsound, often inducing a semantic trap that misguides policy optimization. To address these limitations, we introduce Dopamine-Reward, a novel reward modeling method for learning a general-purpose, step-aware process reward model from multi-view inputs. At its core is our General Reward Model (GRM), trained on a dataset of over 3,400 hours, which leverages Step-wise Reward Discretization for structural understanding and Multi-Perspective Reward Fusion to overcome perceptual limitations. Building upon Dopamine-Reward, we propose Dopamine-RL, a robust policy learning framework that employs a theoretically sound Policy-Invariant Reward Shaping method, enabling the agent to leverage dense rewards for efficient self-improvement without altering the optimal policy and thereby fundamentally avoiding the semantic trap. Extensive experiments across diverse simulated and real-world tasks validate our approach. GRM achieves state-of-the-art accuracy in reward assessment, and Dopamine-RL built on GRM significantly improves policy learning efficiency. For instance, after GRM is adapted to a new task in a one-shot manner from a single expert trajectory, the resulting reward model enables Dopamine-RL to improve the policy from near-zero to 95% success with only 150 online rollouts (approximately 1 hour of real robot interaction), while retaining strong generalization across tasks. Project website: https://robo-dopamine.github.io
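The abstract does not spell out the shaping construction here, but the standard way to make dense shaping rewards policy-invariant is potential-based reward shaping (Ng et al., 1999): adding F(s, s') = γ·Φ(s') − Φ(s) to the environment reward provably preserves the optimal policy. A minimal sketch under that assumption, with a hypothetical `grm_progress_score` standing in for the learned GRM as the potential function Φ:

```python
import numpy as np

def grm_progress_score(state: float) -> float:
    """Hypothetical stand-in for the GRM: maps a state (here a scalar
    task-progress proxy) to a progress score in [0, 1]."""
    return float(np.clip(state, 0.0, 1.0))

def shaped_reward(r_env: float, s: float, s_next: float,
                  gamma: float = 0.99) -> float:
    """Potential-based shaping: r_env + gamma * phi(s') - phi(s).
    This additive term telescopes along any trajectory, so it changes
    the return of every policy by the same state-dependent constant
    and leaves the optimal policy unchanged."""
    return r_env + gamma * grm_progress_score(s_next) - grm_progress_score(s)
```

Because the shaping term telescopes, a sparse task reward can be densified with GRM progress estimates without introducing the "semantic trap" of directly optimizing the reward model's score.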

Huajie Tan, Sixiang Chen, Yijie Xu, Zixiao Wang, Yuheng Ji, Cheng Chi, Yaoxu Lyu, Zhongxia Zhao, Xiansheng Chen, Peterson Co, Shaoxuan Xie, Guocai Yao, Pengwei Wang, Zhongyuan Wang, Shanghang Zhang • 2025

Related benchmarks

| Task                           | Dataset                    | Metric                       | Result | Rank |
|--------------------------------|----------------------------|------------------------------|--------|------|
| Task Completion Classification | SARM (real-world rollouts) | Average Accuracy             | 92.8   | 8    |
| Video Frame Rank-Correlation   | DROID                      | VOC Rank-Correlation (Sparse)| 0.99   | 6    |
| Video Frame Rank-Correlation   | AGIBOT-World               | VOC (Sparse)                 | 0.97   | 6    |
| Video Frame Rank-Correlation   | RoboBrain-X                | VOC (Sparse)                 | 0.92   | 6    |
| Video Frame Rank-Correlation   | LIBERO                     | VOC (Sparse)                 | 0.95   | 6    |
| Video Frame Rank-Correlation   | RoboCasa                   | VOC (Sparse)                 | 0.99   | 6    |
| Video Frame Rank-Correlation   | RoboTwin 2.0               | VOC (Sparse)                 | 0.96   | 6    |
| Video Frame Rank-Correlation   | EgoDex                     | VOC Score (Sparse)           | 0.88   | 6    |
| Robot Policy Learning          | Simulation 10 Tasks        | SR                           | 0.81   | 3    |
| Robot Policy Learning          | Real-World 8 Tasks         | Success Rate (%)             | 95.2   | 3    |

Other info

GitHub
