
Residual Policy Learning

About

We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvements. We study RPL in six challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. For initial controllers, we consider both hand-designed policies and model-predictive controllers with known or learned transition models. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently. Video and code at https://k-r-allen.github.io/residual-policy-learning/.
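The core idea can be sketched in a few lines: the final action is the initial controller's action plus a learned residual, with the residual initialized to zero so training starts from the controller's baseline performance. The sketch below is illustrative and assumes a continuous-action setting; the names (`initial_controller`, `ResidualPolicy`) are not from the authors' released code, and in practice the residual would be a neural network trained with model-free RL rather than a linear map.

```python
import numpy as np

def initial_controller(obs):
    """Stand-in for a hand-designed (possibly nondifferentiable) policy,
    e.g. a clipped proportional controller."""
    return np.clip(-0.5 * obs, -1.0, 1.0)

class ResidualPolicy:
    """Final action = initial controller's action + learned residual f_theta(obs)."""
    def __init__(self, obs_dim, act_dim):
        # Residual parameters start at zero, so the combined policy
        # initially behaves exactly like the initial controller.
        self.W = np.zeros((act_dim, obs_dim))

    def residual(self, obs):
        # Linear residual here for brevity; a deep network in practice.
        return self.W @ obs

    def act(self, obs):
        return initial_controller(obs) + self.residual(obs)

policy = ResidualPolicy(obs_dim=3, act_dim=3)
obs = np.array([0.2, -0.4, 0.1])
action = policy.act(obs)
```

Because the residual is zero at initialization, exploration begins from the controller's behavior rather than from random actions, which is what makes RPL tractable on sparse-reward tasks where RL from scratch fails.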

Tom Silver, Kelsey Allen, Josh Tenenbaum, Leslie Kaelbling• 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Autonomous Racing | Simulated Racetracks (test) | Total Race Time | 32.71 | 42 |
| Autonomous Racing | Simulated Racetracks (train) | Total Race Time | 35.78 | 42 |
| Lift Brick | ANYTASK | Normalized Open-Ended DTW Cost | 0.132 | 8 |
| Open Drawer | ANYTASK | Normalized Open-Ended DTW | 0.131 | 8 |
| Lift Banana | ANYTASK | Normalized Open-Ended DTW | 0.142 | 8 |
| Lift Peach | ANYTASK | Normalized Open-Ended DTW | 12 | 8 |
| Place Strawberry In Bowl | ANYTASK | Normalized Open-Ended DTW | 0.163 | 8 |
| Push Pear to Center | ANYTASK | Normalized DTW | 13.6 | 8 |
| Put Object In Closed Drawer | ANYTASK | Normalized Open-Ended DTW | 0.264 | 8 |
| Stack Banana on Can | ANYTASK | Normalized DTW | 0.152 | 8 |

Showing 10 of 21 rows.
