
Reinforcement Learning with Prototypical Representations

About

Learning effective representations in image-based environments is crucial for sample-efficient Reinforcement Learning (RL). Unfortunately, in RL, representation learning is confounded with the exploratory experience of the agent: learning a useful representation requires diverse data, while effective exploration is only possible with coherent representations. Furthermore, we would like to learn representations that not only generalize across tasks but also accelerate downstream exploration for efficient task-specific training. To address these challenges, we propose Proto-RL, a self-supervised framework that ties representation learning to exploration through prototypical representations. These prototypes simultaneously serve as a summarization of the agent's exploratory experience and as a basis for representing observations. We pre-train these task-agnostic representations and prototypes on environments without downstream task information. This enables state-of-the-art downstream policy learning on a set of difficult continuous control tasks.
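The core idea of representing an observation via a set of prototypes can be sketched as a soft assignment of an embedding to learnable prototype vectors. The sketch below is illustrative only and not the authors' implementation; the function name, dimensions, and temperature value are assumptions.

```python
import numpy as np

def prototype_representation(z, prototypes, temperature=0.1):
    """Soft-assign an embedding to a set of prototypes (illustrative sketch).

    z:          (d,) observation embedding
    prototypes: (k, d) learnable prototype vectors
    Returns a probability vector over the k prototypes, which acts as a
    prototype-based representation of the observation.
    """
    # L2-normalize so the dot products below are cosine similarities
    z = z / np.linalg.norm(z)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = protos @ z / temperature   # scaled cosine similarity to each prototype
    logits -= logits.max()              # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()                  # softmax over prototypes

# Example: represent a random 4-D embedding with 8 prototypes
rng = np.random.default_rng(0)
probs = prototype_representation(rng.normal(size=4), rng.normal(size=(8, 4)))
```

During pre-training, the prototypes double as a summary of visited states, which is what lets the same mechanism drive both representation learning and exploration.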

Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto · 2021

Related benchmarks

Task                                 | Dataset                    | Result                   | Rank
-------------------------------------|----------------------------|--------------------------|-----
State Exploration                    | Maze2D Square-b            | State Coverage Ratio: 40 | 22
Top Right                            | URLB Jaco 1.0 (test)       | Mean Score: 146          | 12
Bottom Left                          | URLB Jaco 1.0 (test)       | Mean Score: 130          | 12
Stand                                | URLB Quadruped 1.0 (test)  | Mean Score: 702          | 12
Top Left                             | URLB Jaco 1.0 (test)       | Mean Score: 134          | 12
Unsupervised Reinforcement Learning  | URL Benchmark (Walker)     | Flip Score: 171          | 12
Bottom Right                         | URLB Jaco 1.0 (test)       | Mean Score: 131          | 12
Run                                  | URLB Walker 1.0 (test)     | Mean Score: 225          | 12
Run                                  | URLB Quadruped 1.0 (test)  | Mean Score: 310          | 12
Walk                                 | URLB Quadruped 1.0 (test)  | Mean Score: 348          | 12

(Showing 10 of 21 rows)
