
Explore and Control with Adversarial Surprise

About

Unsupervised reinforcement learning (RL) studies how to leverage environment statistics to learn useful behaviors without the cost of reward engineering. However, a central challenge in unsupervised RL is to extract behaviors that meaningfully affect the world and cover the range of possible outcomes, without getting distracted by inherently unpredictable, uncontrollable, and stochastic elements in the environment. To this end, we propose an unsupervised RL method designed for high-dimensional, stochastic environments, based on an adversarial game between two policies (which we call Explore and Control) that control a single body and compete over the amount of observation entropy the agent experiences. The Explore agent seeks out states that maximally surprise the Control agent, which in turn aims to minimize surprise by manipulating the environment to return to familiar and predictable states. The competition between these two policies drives them to seek out increasingly surprising parts of the environment while learning to gain mastery over them. We show formally that the resulting algorithm maximizes coverage of the underlying state in block MDPs with stochastic observations, providing theoretical backing to our hypothesis that this procedure avoids uncontrollable and stochastic distractions. Our experiments further demonstrate that Adversarial Surprise leads to the emergence of complex and meaningful skills, and outperforms state-of-the-art unsupervised reinforcement learning methods in terms of both exploration and zero-shot transfer to downstream tasks.
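The adversarial objective can be sketched in a few lines. The following is a hypothetical simplification, not the authors' implementation: observations are discrete, the density model is a simple count-based estimator, and "surprise" is the negative log-likelihood of an observation under that model. Explore is rewarded with the surprise of each observation, Control receives the negative of that reward, and the model is updated on everything visited. All names (`CountDensityModel`, `surprise_rewards`) are made up for illustration.

```python
# Hypothetical sketch of the Adversarial Surprise reward structure.
# Two policies share one body; Explore maximizes per-step surprise,
# Control minimizes it (a zero-sum game over observation entropy).
import math
from collections import defaultdict

class CountDensityModel:
    """Tabular density model with Laplace smoothing over discrete observations."""
    def __init__(self):
        self.counts = defaultdict(int)
        self.total = 0

    def update(self, obs):
        self.counts[obs] += 1
        self.total += 1

    def log_prob(self, obs):
        # Add-one smoothing; support size estimated as observed symbols
        # plus one slot for unseen symbols.
        vocab = len(self.counts) + 1
        return math.log((self.counts[obs] + 1) / (self.total + vocab))

def surprise_rewards(observations, model):
    """Per-step rewards for the two players.

    Explore's reward is the surprise (negative log-likelihood) of each
    observation; Control's reward is its negation, so Control is pushed
    toward familiar, predictable states.
    """
    explore_r, control_r = [], []
    for obs in observations:
        s = -model.log_prob(obs)   # surprise = -log p(obs)
        explore_r.append(s)
        control_r.append(-s)
        model.update(obs)          # density model trained on visited states
    return explore_r, control_r
```

Running this on a trajectory that mostly revisits one state and then hits a novel one shows the intended dynamic: repeated states earn Explore little reward (and cost Control little), while the novel state is highly rewarding for Explore and highly penalizing for Control.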

Arnaud Fickinger, Natasha Jaques, Samyak Parajuli, Michael Chang, Nicholas Rhinehart, Glen Berseth, Stuart Russell, Sergey Levine • 2021

Related benchmarks

Task                                | Dataset                    | Metric            | Result | Rank
Bottom Right                        | URLB Jaco 1.0 (test)       | Mean Score        | 166    | 12
Top Left                            | URLB Jaco 1.0 (test)       | Mean Score        | 143    | 12
Top Right                           | URLB Jaco 1.0 (test)       | Mean Score        | 139    | 12
Flip                                | URLB Walker 1.0 (test)     | Mean Score        | 491    | 12
Bottom Left                         | URLB Jaco 1.0 (test)       | Mean Score        | 116    | 12
Stand                               | URLB Walker 1.0 (test)     | Mean Score        | 917    | 12
Unsupervised Reinforcement Learning | URL Benchmark Jaco         | Reach Bottom Left | 1      | 12
Run                                 | URLB Walker 1.0 (test)     | Mean Score        | 247    | 12
Walk                                | URLB Walker 1.0 (test)     | Mean Score        | 675    | 12
Walk                                | URLB Quadruped 1.0 (test)  | Mean Score        | 353    | 12

(Showing 10 of 15 rows.)
