
Learning to Reach Goals via Iterated Supervised Learning

About

Current reinforcement learning (RL) algorithms can be brittle and difficult to use, especially when learning goal-reaching behaviors from sparse rewards. Although supervised imitation learning provides a simple and stable alternative, it requires access to demonstrations from a human supervisor. In this paper, we study RL algorithms that use imitation learning to acquire goal reaching policies from scratch, without the need for expert demonstrations or a value function. In lieu of demonstrations, we leverage the property that any trajectory is a successful demonstration for reaching the final state in that same trajectory. We propose a simple algorithm in which an agent continually relabels and imitates the trajectories it generates to progressively learn goal-reaching behaviors from scratch. Each iteration, the agent collects new trajectories using the latest policy, and maximizes the likelihood of the actions along these trajectories under the goal that was actually reached, so as to improve the policy. We formally show that this iterated supervised learning procedure optimizes a bound on the RL objective, derive performance bounds of the learned policy, and empirically demonstrate improved goal-reaching performance and robustness over current RL algorithms in several benchmark tasks.
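The relabel-and-imitate loop described in the abstract can be sketched in a toy form. Everything below is an illustrative assumption rather than the paper's actual setup: a 1-D corridor environment, a tabular count-based policy (for which maximum-likelihood supervised learning reduces to incrementing action counts), and small hand-picked constants. The paper itself trains a neural goal-conditioned policy with a cross-entropy loss.

```python
import random

N, H = 6, 10          # states on a 1-D line, episode horizon (toy constants)
ACTIONS = (-1, 0, 1)  # move left, stay, move right

def step(s, a):
    """Deterministic toy dynamics: move along the line, clipped at the ends."""
    return max(0, min(N - 1, s + a))

# Tabular goal-conditioned policy, stored as action counts per (state, goal).
# Categorical MLE = empirical frequencies, so the supervised update below is
# just a count increment.
counts = {(s, g): {a: 1 for a in ACTIONS} for s in range(N) for g in range(N)}

def sample_action(s, g):
    """Sample from the current MLE policy (used to collect new trajectories)."""
    c = counts[(s, g)]
    r = random.uniform(0, sum(c.values()))
    for a, n in c.items():
        r -= n
        if r <= 0:
            return a
    return ACTIONS[-1]

def greedy_action(s, g):
    """Most likely action under the learned policy (used for evaluation)."""
    c = counts[(s, g)]
    return max(c, key=c.get)

def train(iters=3000):
    for _ in range(iters):
        g = random.randrange(N)      # commanded goal (only guides collection)
        s = random.randrange(N)
        traj = []
        for _ in range(H):
            a = sample_action(s, g)
            traj.append((s, a))
            s = step(s, a)
        reached = s                  # hindsight relabel: the goal that was
        for st, at in traj:          # actually reached is the final state
            counts[(st, reached)][at] += 1   # supervised (MLE) update

def success_rate(trials=300):
    ok = 0
    for _ in range(trials):
        s, g = random.randrange(N), random.randrange(N)
        for _ in range(H):
            s = step(s, greedy_action(s, g))
        ok += (s == g)
    return ok / trials
```

Each `train` iteration follows the abstract's recipe: collect trajectories with the latest policy, treat each one as a successful demonstration of reaching its own final state, and fit the policy to that relabeled data by maximum likelihood. No reward signal or value function is used anywhere.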

Dibya Ghosh, Abhishek Gupta, Ashwin Reddy, Justin Fu, Coline Devin, Benjamin Eysenbach, Sergey Levine • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Goal Reaching | RoboKitchen (test) | Success Rate | 0.00e+0 | 16 |
| Offline Goal-Conditioned Reinforcement Learning | FetchSlide (offline) | Discounted Return | 1.75 | 10 |
| Offline Goal-Conditioned Reinforcement Learning | HandReach (offline) | Discounted Return | 1.37 | 10 |
| Offline Goal-Conditioned Reinforcement Learning | FetchPush (offline) | Discounted Return | 13.4 | 10 |
| Offline Goal-Conditioned Reinforcement Learning | FetchReach (offline) | Discounted Return | 20.91 | 10 |
| Offline Goal-Conditioned Reinforcement Learning | FetchPick (offline) | Discounted Return | 8.94 | 10 |
| Goal Reaching | RoboYoga Quadruped (test) | Goal Success Rate | 15.83 | 6 |
| Goal Reaching | RoboYoga Walker (test) | Goal Success Rate | 1.11 | 6 |
| Goal Reaching | RoboBins (test) | Goal Success Rate | 7.94 | 6 |
| Offline Goal-Conditioned Reinforcement Learning | Visual AntMaze navigate | Normalized Score | 33.2 | 5 |
Showing 10 of 19 rows.
