Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning
About
We present relay policy learning, a method for imitation and reinforcement learning that can solve multi-stage, long-horizon robotic tasks. This general, broadly applicable two-phase approach consists of an imitation learning stage that produces goal-conditioned hierarchical policies, and a reinforcement learning phase that fine-tunes these policies for task performance. Our method, while not necessarily perfect at imitation learning, is very amenable to further improvement via environment interaction, allowing it to scale to challenging long-horizon tasks. We simplify the long-horizon policy learning problem by using a novel data-relabeling algorithm for learning goal-conditioned hierarchical policies, where the low-level policy acts for a fixed number of steps, regardless of the goal achieved. While we rely on demonstration data to bootstrap policy learning, we do not assume access to demonstrations of every specific task being solved; instead, we leverage unstructured and unsegmented demonstrations of semantically meaningful behaviors that are not only less burdensome to provide, but also greatly facilitate further improvement via reinforcement learning. We demonstrate the effectiveness of our method on a number of multi-stage, long-horizon manipulation tasks in a challenging kitchen simulation environment. Videos are available at https://relay-policy-learning.github.io/
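The abstract's data-relabeling idea can be sketched as follows. This is a simplified, hypothetical illustration (function and variable names are ours, not from the authors' code), assuming the relabeling works by pairing each state in an unsegmented demonstration with a goal state reached a fixed window of steps later: the low-level policy is trained on (state, near goal, action) tuples, and the high-level policy on (state, distant goal, subgoal) tuples, where the subgoal is the state the low-level should reach within its fixed horizon.

```python
def relay_relabel(states, actions, low_window=30, high_window=150):
    """Generate goal-conditioned training tuples from one unsegmented demo.

    states:  list of observed states s_0 ... s_T
    actions: list of actions a_0 ... a_{T-1}
    Returns (low_data, high_data):
      low_data:  (state, goal, action) tuples for the low-level policy
      high_data: (state, goal, subgoal) tuples for the high-level policy
    """
    low_data, high_data = [], []
    T = len(states)
    for t in range(T - 1):
        # Low-level: the goal is the state reached low_window steps ahead
        # (clipped at the end of the trajectory).
        g_low = states[min(t + low_window, T - 1)]
        low_data.append((states[t], g_low, actions[t]))
        # High-level: the goal is farther ahead; the high-level "action"
        # is the subgoal the low-level policy should reach.
        g_high = states[min(t + high_window, T - 1)]
        high_data.append((states[t], g_high, g_low))
    return low_data, high_data
```

In the paper's full algorithm the relabeling windows are more flexible than the single fixed offsets used here; this sketch only conveys how unsegmented demonstrations become goal-conditioned supervision for both levels of the hierarchy.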
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Offline goal-conditioned RL | Procgen Maze 1000 (test) | Normalized Score: 14.5 | 5 |
| Offline goal-conditioned RL | Visual AntMaze navigate | Normalized Score: 21.4 | 5 |
| Offline goal-conditioned RL | Roboverse unseen manipulation tasks | Normalized Score: 26.4 | 5 |
| Offline goal-conditioned RL | Procgen Maze 500 (train) | Normalized Score: 14.3 | 5 |
| Offline goal-conditioned RL | Procgen Maze 500 (test) | Normalized Score: 11.2 | 5 |
| Offline goal-conditioned RL | Procgen Maze 1000 (train) | Normalized Score: 15 | 5 |
| Offline goal-conditioned RL | Visual AntMaze diverse | Normalized Score: 35.1 | 5 |
| Offline goal-conditioned RL | Visual AntMaze (play) | Normalized Score: 23.8 | 5 |
| Multi-task Robotic Manipulation | CALVIN (test) | Success Rate (1 task): 66.2 | 4 |