Explore and Control with Adversarial Surprise
About
Unsupervised reinforcement learning (RL) studies how to leverage environment statistics to learn useful behaviors without the cost of reward engineering. However, a central challenge in unsupervised RL is to extract behaviors that meaningfully affect the world and cover the range of possible outcomes, without getting distracted by inherently unpredictable, uncontrollable, and stochastic elements in the environment. To this end, we propose an unsupervised RL method designed for high-dimensional, stochastic environments based on an adversarial game between two policies (which we call Explore and Control) controlling a single body and competing over the amount of observation entropy the agent experiences. The Explore agent seeks out states that maximally surprise the Control agent, which in turn aims to minimize surprise, and thereby manipulate the environment to return to familiar and predictable states. The competition between these two policies drives them to seek out increasingly surprising parts of the environment while learning to gain mastery over them. We show formally that the resulting algorithm maximizes coverage of the underlying state in block MDPs with stochastic observations, providing theoretical backing to our hypothesis that this procedure avoids uncontrollable and stochastic distractions. Our experiments further demonstrate that Adversarial Surprise leads to the emergence of complex and meaningful skills, and outperforms state-of-the-art unsupervised reinforcement learning methods in terms of both exploration and zero-shot transfer to downstream tasks.
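The adversarial game described above can be illustrated with a minimal sketch. The class and method names below are hypothetical (this is not the authors' implementation), and a simple count-based density model over discretized observations stands in for the learned density model: the Control policy is rewarded with log p(s) for reaching familiar, predictable observations, while the Explore policy receives -log p(s) for reaching surprising ones.

```python
import numpy as np

class AdversarialSurprise:
    """Minimal sketch of the Adversarial Surprise objective.

    Two policies share a single body across alternating phases of an
    episode. Control maintains a density model over the observations it
    experiences and is rewarded for low surprise (high log-probability);
    Explore is rewarded for high surprise (low log-probability).
    """

    def __init__(self, n_bins):
        # Count-based density model over discretized observations,
        # with Laplace smoothing so unseen bins have nonzero probability.
        self.counts = np.ones(n_bins)

    def log_prob(self, obs_bin):
        # log p(s) under the current empirical density model.
        return np.log(self.counts[obs_bin] / self.counts.sum())

    def update(self, obs_bin):
        # Fold the new observation into the density model.
        self.counts[obs_bin] += 1

    def reward(self, obs_bin, phase):
        logp = self.log_prob(obs_bin)
        # Control minimizes surprise; Explore maximizes it.
        return logp if phase == "control" else -logp
```

Because the two rewards are exact negatives of each other, improving one policy's return necessarily worsens the other's, which is what drives Explore toward novel states and Control toward mastering them.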
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Bottom Right | URLB Jaco 1.0 (test) | Mean Score | 166 | 12 |
| Top Left | URLB Jaco 1.0 (test) | Mean Score | 143 | 12 |
| Top Right | URLB Jaco 1.0 (test) | Mean Score | 139 | 12 |
| Flip | URLB Walker 1.0 (test) | Mean Score | 491 | 12 |
| Bottom Left | URLB Jaco 1.0 (test) | Mean Score | 116 | 12 |
| Stand | URLB Walker 1.0 (test) | Mean Score | 917 | 12 |
| Unsupervised Reinforcement Learning | URL Benchmark Jaco | Reach Bottom Left | 1 | 12 |
| Run | URLB Walker 1.0 (test) | Mean Score | 247 | 12 |
| Walk | URLB Walker 1.0 (test) | Mean Score | 675 | 12 |
| Walk | URLB Quadruped 1.0 (test) | Mean Score | 353 | 12 |