
Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning

About

We present relay policy learning, a method for imitation and reinforcement learning that can solve multi-stage, long-horizon robotic tasks. This general and universally-applicable, two-phase approach consists of an imitation learning stage that produces goal-conditioned hierarchical policies, and a reinforcement learning phase that finetunes these policies for task performance. Our method, while not necessarily perfect at imitation learning, is very amenable to further improvement via environment interaction, allowing it to scale to challenging long-horizon tasks. We simplify the long-horizon policy learning problem by using a novel data-relabeling algorithm for learning goal-conditioned hierarchical policies, where the low-level policy only acts for a fixed number of steps, regardless of the goal achieved. While we rely on demonstration data to bootstrap policy learning, we do not assume access to demonstrations of every specific task being solved, and instead leverage unstructured and unsegmented demonstrations of semantically meaningful behaviors that are not only less burdensome to provide, but also can greatly facilitate further improvement using reinforcement learning. We demonstrate the effectiveness of our method on a number of multi-stage, long-horizon manipulation tasks in a challenging kitchen simulation environment. Videos are available at https://relay-policy-learning.github.io/
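The core of the data-relabeling idea is to slice an unsegmented demonstration into goal-conditioned training pairs: nearby future states become goals for the low-level policy, while more distant states become goals for the high-level policy, which emits subgoals at a fixed low-level horizon. The sketch below is an illustrative reconstruction of that windowing scheme, not the authors' implementation; the function name, data layout, and window sizes are assumptions for clarity.

```python
def relay_relabel(demo, low_horizon=30, high_horizon=150):
    """Relabel one unsegmented demonstration into goal-conditioned
    training tuples, in the spirit of relay data relabeling.
    Window sizes are illustrative, not the paper's exact values.

    demo: list of (state, action) pairs along a single trajectory.
    Returns (low_level, high_level):
      low_level:  (state, goal, action) tuples, where the goal is a
                  state reached within the low-level window;
      high_level: (state, goal, subgoal) tuples, where the goal is a
                  state reached within the high-level window and the
                  subgoal (the high-level "action") is the state one
                  low-level horizon ahead.
    """
    states = [s for s, _ in demo]
    actions = [a for _, a in demo]
    low_level, high_level = [], []
    T = len(demo)
    for t in range(T - 1):
        # Low-level: every state within low_horizon steps ahead of
        # s_t is treated as a valid goal for action a_t.
        for w in range(1, min(low_horizon, T - 1 - t) + 1):
            low_level.append((states[t], states[t + w], actions[t]))
        # High-level: a state within the high-level window is the
        # goal; the state one low-level horizon ahead is the subgoal
        # the high-level policy should emit.
        for w in range(low_horizon, min(high_horizon, T - 1 - t) + 1):
            subgoal = states[min(t + low_horizon, T - 1)]
            high_level.append((states[t], states[t + w], subgoal))
    return low_level, high_level
```

Because every reachable future state within the window is relabeled as a goal, a single demonstration yields many goal-conditioned examples, which is what makes unstructured "play"-style demonstrations usable without per-task segmentation.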

Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, Karol Hausman • 2019

Related benchmarks

Task                            | Dataset                             | Result                     | Rank
Offline goal-conditioned RL     | Procgen Maze 1000 (test)            | Normalized Score 14.5      | 5
Offline goal-conditioned RL     | Visual AntMaze navigate             | Normalized Score 21.4      | 5
Offline goal-conditioned RL     | Roboverse unseen manipulation tasks | Normalized Score 26.4      | 5
Offline goal-conditioned RL     | Procgen Maze 500 (train)            | Normalized Score 14.3      | 5
Offline goal-conditioned RL     | Procgen Maze 500 (test)             | Normalized Score 11.2      | 5
Offline goal-conditioned RL     | Procgen Maze 1000 (train)           | Normalized Score 15        | 5
Offline goal-conditioned RL     | Visual AntMaze diverse              | Normalized Score 35.1      | 5
Offline goal-conditioned RL     | Visual AntMaze (play)               | Normalized Score 23.8      | 5
Multi-task Robotic Manipulation | CALVIN (test)                       | Success Rate (1 task) 66.2 | 4
