
Curiosity-driven Exploration by Self-supervised Prediction

About

In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. Demo video and code available at https://pathak22.github.io/noreward-rl/

Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell • 2017
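To make the formulation above concrete, here is a minimal sketch of an Intrinsic Curiosity Module (ICM) in PyTorch: an encoder phi learns features via an inverse dynamics model (predicting the action from consecutive states), and the forward model's prediction error in that feature space becomes the intrinsic reward. The MLP layer sizes and the eta/beta coefficients are illustrative stand-ins, not the paper's released architecture or hyperparameters (the paper uses a convolutional encoder on image observations); detaching the encoder from the forward loss is likewise a sketch-level choice that keeps the features shaped only by the inverse model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICM(nn.Module):
    """Sketch of an Intrinsic Curiosity Module (sizes/coefficients illustrative)."""

    def __init__(self, obs_dim, n_actions, feat_dim=32, eta=0.01, beta=0.2):
        super().__init__()
        self.eta, self.beta, self.n_actions = eta, beta, n_actions
        # phi: feature encoder, trained self-supervised via the inverse model
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ELU(), nn.Linear(64, feat_dim))
        # inverse model: predicts a_t from phi(s_t) and phi(s_{t+1})
        self.inverse = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ELU(), nn.Linear(64, n_actions))
        # forward model: predicts phi(s_{t+1}) from phi(s_t) and a_t
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + n_actions, 64), nn.ELU(),
            nn.Linear(64, feat_dim))

    def forward(self, s, s_next, a):
        phi, phi_next = self.encoder(s), self.encoder(s_next)
        a_onehot = F.one_hot(a, self.n_actions).float()
        # forward prediction error in feature space = curiosity signal;
        # encoder is detached here so only the inverse loss shapes phi
        phi_next_hat = self.forward_model(
            torch.cat([phi.detach(), a_onehot], dim=1))
        fwd_err = 0.5 * (phi_next_hat - phi_next.detach()).pow(2).sum(dim=1)
        r_intrinsic = self.eta * fwd_err.detach()  # per-transition reward
        # inverse loss: keeps only features that reflect what the agent
        # can affect, ignoring unpredictable-but-irrelevant parts of the scene
        a_logits = self.inverse(torch.cat([phi, phi_next], dim=1))
        inv_loss = F.cross_entropy(a_logits, a)
        icm_loss = self.beta * fwd_err.mean() + (1 - self.beta) * inv_loss
        return r_intrinsic, icm_loss
```

In use, the returned r_intrinsic would be added to any (possibly zero) extrinsic reward before the policy update, while icm_loss is minimized jointly, e.g.:

```python
icm = ICM(obs_dim=8, n_actions=4)
s, s_next = torch.randn(16, 8), torch.randn(16, 8)
a = torch.randint(0, 4, (16,))
r_i, loss = icm(s, s_next, a)  # r_i feeds the RL algorithm; loss is backpropagated
```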

Related benchmarks

Task                                | Dataset                                   | Result                   | Rank
Reinforcement Learning              | Atari 2600 Montezuma's Revenge            | Score: 2.68              | 45
Reinforcement Learning              | Atari 2600 Montezuma's Revenge ALE (test) | Score: 400               | 24
State Exploration                   | Maze2D Square-b                           | State Coverage Ratio: 57 | 22
Reinforcement Learning              | Atari 2600 Gravitar ALE (test)            | Score: 3.37e+3           | 19
Goal-oriented dialogue              | Movie                                     | Success Rate: 53.11      | 17
Reinforcement Learning              | Atari 2600 Qbert                          | Score: 1.06e+3           | 15
Unsupervised Reinforcement Learning | URL Benchmark Jaco                        | Reach Bottom Left: 9     | 12
Stand                               | URLB Walker 1.0 (test)                    | Mean Score: 868          | 12
Unsupervised Reinforcement Learning | URL Benchmark Quadruped                   | Jump Score: 225          | 12
Bottom Left                         | URLB Jaco 1.0 (test)                      | Mean Score: 112          | 12
Showing 10 of 60 rows
