
Optimizing Neurorobot Policy under Limited Demonstration Data through Preference Regret

About

Robot reinforcement learning from demonstrations (RLfD) assumes that expert data is abundant, an assumption that is often unrealistic given the scarcity and high collection cost of real-world demonstration data. Furthermore, imitation learning algorithms assume that the data is independently and identically distributed, which leads to degraded performance as small errors compound over test-time trajectories. We address these issues by introducing the "master your own expertise" (MYOE) framework, a self-imitation framework that enables robotic agents to learn complex behaviors from a limited number of demonstrations. Inspired by human perception and action, we propose the queryable mixture-of-preferences state space model (QMoP-SSM), which estimates the desired goal at every time step. These desired goals are used to compute a "preference regret", which in turn is used to optimize the robot control policy. Our experiments demonstrate the robustness, adaptability, and out-of-sample performance of our agent compared to other state-of-the-art RLfD schemes. The GitHub repository that supports this work can be found at: https://github.com/rxng8/neurorobot-preference-regret-learning.
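The abstract does not give the exact form of the preference regret, but the idea it describes (a per-step penalty between the state the policy reaches and the goal estimated by the goal model) can be sketched minimally. The following is an illustrative assumption only: the distance metric, the function names `preference_regret` and `trajectory_regret`, and the use of a plain Euclidean norm are all hypothetical stand-ins, not the paper's actual definitions.

```python
import numpy as np

def preference_regret(achieved_state, desired_goal):
    """Hypothetical per-step preference regret: distance between the
    state the policy achieved and the goal estimated by the goal model
    (QMoP-SSM in the paper). The paper's exact metric may differ."""
    achieved = np.asarray(achieved_state, dtype=float)
    desired = np.asarray(desired_goal, dtype=float)
    return float(np.linalg.norm(achieved - desired))

def trajectory_regret(states, goals):
    """Accumulate per-step regrets over one rollout; a policy trained
    under this objective would minimize the accumulated regret."""
    return sum(preference_regret(s, g) for s, g in zip(states, goals))
```

Under this sketch, a rollout that tracks the estimated goals exactly incurs zero regret, and deviations add up step by step, which is one plausible way a regret signal could stand in for a dense reward when demonstrations are scarce.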

Viet Dung Nguyen, Yuhang Song, Anh Nguyen, Jamison Heard, Reynold Bailey, Alexander Ororbia · 2026

Related benchmarks

Task                  Dataset              Result                          Rank
Robot Manipulation    Franka-Kitchen       Light On Success: 100           9
Block Picking         PX100 Block Picking  Success Rate: 89                4
Reach                 7bot robot           Success Rate: 34                4
Robotic Manipulation  RoboSuite            Lift Success Rate: 100          4
Robotic Manipulation  Meta-World           Button Press Success Rate: 100  4
