Reinforcement Learning with Prototypical Representations
About
Learning effective representations in image-based environments is crucial for sample-efficient Reinforcement Learning (RL). Unfortunately, in RL, representation learning is confounded with the exploratory experience of the agent: learning a useful representation requires diverse data, while effective exploration is only possible with coherent representations. Furthermore, we would like to learn representations that not only generalize across tasks but also accelerate downstream exploration for efficient task-specific training. To address these challenges, we propose Proto-RL, a self-supervised framework that ties representation learning to exploration through prototypical representations. These prototypes simultaneously serve as a summarization of the agent's exploratory experience and as a basis for representing observations. We pre-train these task-agnostic representations and prototypes on environments without downstream task information. This enables state-of-the-art downstream policy learning on a set of difficult continuous control tasks.
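The idea of prototypes acting as a basis for representing observations can be illustrated with a soft assignment: an encoded observation is compared against a set of learned prototype vectors, and a temperature-scaled softmax over the similarities yields its representation in the prototype basis. The sketch below is illustrative only — the function name, temperature value, and use of cosine similarity are assumptions, not the paper's exact implementation:

```python
import numpy as np

def prototype_probs(z, prototypes, temperature=0.1):
    """Soft-assign an embedding to a set of prototypes.

    Illustrative sketch (not the paper's API):
    z          -- (d,)  embedding of one observation
    prototypes -- (k, d) learned prototype vectors
    Returns a (k,) probability vector over prototypes.
    """
    # L2-normalize so the dot product is a cosine similarity
    z = z / np.linalg.norm(z)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = protos @ z / temperature   # scaled similarities
    logits -= logits.max()              # numerical stability before exp
    p = np.exp(logits)
    return p / p.sum()

# Toy usage with random data (8-dim embedding, 16 prototypes)
rng = np.random.default_rng(0)
probs = prototype_probs(rng.normal(size=8), rng.normal(size=(16, 8)))
```

The resulting distribution summarizes which regions of the learned latent space the observation falls near; in Proto-RL-style training, assignment statistics like these can also drive intrinsic exploration rewards.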
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| State Exploration | Maze2D Square-b | State Coverage Ratio | 40 | 22 |
| Top Right | URLB Jaco 1.0 (test) | Mean Score | 146 | 12 |
| Bottom Left | URLB Jaco 1.0 (test) | Mean Score | 130 | 12 |
| Stand | URLB Quadruped 1.0 (test) | Mean Score | 702 | 12 |
| Top Left | URLB Jaco 1.0 (test) | Mean Score | 134 | 12 |
| Unsupervised Reinforcement Learning | URL Benchmark (Walker) | Flip Score | 171 | 12 |
| Bottom Right | URLB Jaco 1.0 (test) | Mean Score | 131 | 12 |
| Run | URLB Walker 1.0 (test) | Mean Score | 225 | 12 |
| Run | URLB Quadruped 1.0 (test) | Mean Score | 310 | 12 |
| Walk | URLB Quadruped 1.0 (test) | Mean Score | 348 | 12 |