Robo-Dopamine: General Process Reward Modeling for High-Precision Robotic Manipulation
About
The primary obstacle to applying reinforcement learning (RL) to real-world robotics is the design of effective reward functions. Learning-based Process Reward Models (PRMs) are a promising direction, but they are often hindered by two fundamental limitations: their reward models lack step-aware understanding and rely on single-view perception, leading to unreliable assessments of fine-grained manipulation progress; and their reward-shaping procedures are theoretically unsound, often inducing a semantic trap that misguides policy optimization. To address these limitations, we introduce Dopamine-Reward, a novel reward-modeling method for learning a general-purpose, step-aware process reward model from multi-view inputs. At its core is our General Reward Model (GRM), trained on a dataset of more than 3,400 hours, which leverages Step-wise Reward Discretization for structural understanding and Multi-Perspective Reward Fusion to overcome perceptual limitations. Building on Dopamine-Reward, we propose Dopamine-RL, a robust policy-learning framework that employs a theoretically sound Policy-Invariant Reward Shaping method, enabling the agent to leverage dense rewards for efficient self-improvement without altering the optimal policy and thereby fundamentally avoiding the semantic trap. Extensive experiments across diverse simulated and real-world tasks validate our approach: GRM achieves state-of-the-art accuracy in reward assessment, and Dopamine-RL built on GRM significantly improves policy-learning efficiency. For instance, after GRM is adapted to a new task in a one-shot manner from a single expert trajectory, the resulting reward model enables Dopamine-RL to improve the policy from near-zero to 95% success with only 150 online rollouts (approximately 1 hour of real-robot interaction), while retaining strong generalization across tasks. Project website: https://robo-dopamine.github.io
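The abstract does not spell out the Policy-Invariant Reward Shaping construction. The standard way to densify rewards without changing the optimal policy is potential-based shaping (Ng et al., 1999), where a state potential Φ(s) — here plausibly the PRM's progress estimate — contributes F = γΦ(s′) − Φ(s) at each step. The sketch below illustrates that construction only; the function name `shaped_rewards` and the use of progress scores as potentials are assumptions, not the paper's implementation.

```python
import numpy as np

def shaped_rewards(rewards, potentials, gamma=0.99):
    """Potential-based reward shaping along one trajectory:
        r'_t = r_t + gamma * Phi(s_{t+1}) - Phi(s_t)
    `potentials` has length len(rewards) + 1 (one value per visited state).
    The shaping terms telescope, so the shaped return differs from the
    true return only by gamma^T * Phi(s_T) - Phi(s_0), a quantity that
    does not depend on the agent's actions -- the optimal policy is
    therefore unchanged while the reward signal becomes dense.
    """
    r = np.asarray(rewards, dtype=float)
    phi = np.asarray(potentials, dtype=float)
    return r + gamma * phi[1:] - phi[:-1]

# Sparse task reward (success only at the end) densified by a
# monotone progress potential (values here are illustrative):
dense = shaped_rewards([0.0, 0.0, 1.0], [0.0, 0.3, 0.7, 1.0], gamma=1.0)
```

With γ = 1 the shaped return equals the sparse return plus Φ(s_T) − Φ(s_0), so every intermediate step now carries gradient signal while the ranking of policies is preserved.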
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Task Completion Classification | SARM (real-world rollouts) | Average Accuracy | 92.8 | 8 |
| Video Frame Rank-Correlation | DROID | VOC Rank-Correlation (Sparse) | 0.99 | 6 |
| Video Frame Rank-Correlation | AGIBOT-World | VOC (Sparse) | 0.97 | 6 |
| Video Frame Rank-Correlation | RoboBrain-X | VOC (Sparse) | 0.92 | 6 |
| Video Frame Rank-Correlation | LIBERO | VOC (Sparse) | 0.95 | 6 |
| Video Frame Rank-Correlation | RoboCasa | VOC (Sparse) | 0.99 | 6 |
| Video Frame Rank-Correlation | RoboTwin 2.0 | VOC (Sparse) | 0.96 | 6 |
| Video Frame Rank-Correlation | EgoDex | VOC Score (Sparse) | 0.88 | 6 |
| Robot Policy Learning | Simulation 10 Tasks | SR | 0.81 | 3 |
| Robot Policy Learning | Real-World 8 Tasks | Success Rate (%) | 95.2 | 3 |
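The Video Frame Rank-Correlation rows above score how well a reward model's per-frame progress estimates agree with the true temporal order of frames in a successful trajectory. A common way to compute such a score is Spearman's rank correlation between predicted progress and frame index; the sketch below shows that generic metric, not the benchmark's exact VOC definition, and the helper name `spearman_rank_corr` is an assumption.

```python
import numpy as np

def spearman_rank_corr(pred_progress, frame_order):
    """Spearman rank correlation between predicted per-frame progress
    scores and ground-truth frame order (values assumed distinct).
    Returns 1.0 when predicted progress is perfectly monotone in time,
    -1.0 when it is perfectly reversed.
    """
    def ranks(x):
        # Position of each element in the sorted order.
        order = np.argsort(np.asarray(x))
        r = np.empty(len(order), dtype=float)
        r[order] = np.arange(len(order))
        return r

    rp = ranks(pred_progress)
    rt = ranks(frame_order)
    rp -= rp.mean()
    rt -= rt.mean()
    # Pearson correlation of the ranks == Spearman's rho.
    return float((rp * rt).sum() / np.sqrt((rp ** 2).sum() * (rt ** 2).sum()))

# Progress estimates that mostly increase with time score near 1:
score = spearman_rank_corr([0.1, 0.4, 0.5, 0.9], [0, 1, 2, 3])  # -> 1.0
```

A score near the table's 0.9+ values means the model's frame ranking is almost perfectly aligned with actual task progress.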