
Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design

About

A wide range of reinforcement learning (RL) problems - including robustness, transfer learning, unsupervised RL, and emergent complexity - require specifying a distribution of tasks or environments in which a policy will be trained. However, creating a useful distribution of environments is error-prone and takes a significant amount of developer time and effort. We propose Unsupervised Environment Design (UED) as an alternative paradigm, where developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments. Existing approaches to automatically generating environments suffer from common failure modes: domain randomization cannot generate structure or adapt the difficulty of the environment to the agent's learning progress, and minimax adversarial training leads to worst-case environments that are often unsolvable. To generate structured, solvable environments for our protagonist agent, we introduce a second, antagonist agent that is allied with the environment-generating adversary. The adversary is motivated to generate environments which maximize regret, defined as the difference between the antagonist's and the protagonist's returns. We call our technique Protagonist Antagonist Induced Regret Environment Design (PAIRED). Our experiments demonstrate that PAIRED produces a natural curriculum of increasingly complex environments, and PAIRED agents achieve higher zero-shot transfer performance when tested in highly novel environments.
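To make the regret objective concrete, here is a minimal sketch of how the adversary's training signal could be computed from rollout returns. The function name and structure are hypothetical illustrations, not the authors' released implementation; they only show that regret is the gap between the antagonist's and the protagonist's returns on the generated environments.

```python
def paired_regret(antagonist_returns, protagonist_returns):
    """Adversary's objective in PAIRED (illustrative sketch).

    Regret is estimated as the antagonist's average return minus the
    protagonist's average return on environments produced by the adversary.
    The adversary maximizes this quantity; the protagonist minimizes it by
    improving, while the antagonist's ability to earn return certifies that
    the generated environments remain solvable.
    """
    avg_antagonist = sum(antagonist_returns) / len(antagonist_returns)
    avg_protagonist = sum(protagonist_returns) / len(protagonist_returns)
    return avg_antagonist - avg_protagonist

# Example: the antagonist solves the generated maze (returns 10 and 20),
# the protagonist struggles (returns 5 and 5), so regret is positive and
# the adversary is rewarded for this environment.
print(paired_regret([10, 20], [5, 5]))  # -> 10.0
```

Because the antagonist only earns return on environments it can actually solve, an adversary that generates impossible environments receives zero regret, which is what steers PAIRED away from the unsolvable worst cases produced by plain minimax training.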

Michael Dennis, Natasha Jaques, Eugene Vinitsky, Alexandre Bayen, Stuart Russell, Andrew Critch, Sergey Levine • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Navigation | MiniWorld FourRooms | Success Rate | 51 | 15 |
| Password navigation | MiniWoB (test) | Success Rate | 1.00e+4 | 10 |
| Continuous Control | MuJoCo HalfCheetah Vel (test) | Mean Return | -545 | 9 |
| Meta-Reinforcement Learning | MuJoCo HalfCheetah Velocity variation (test) | CVaR 0.05 Return | -725 | 7 |
| Continuous Control | MuJoCo HalfCheetah Mass (test) | Mean Return | 662 | 7 |
| Continuous Control | MuJoCo HalfCheetah Body (test) | Mean Return | 492 | 7 |
| Continuous Control | MuJoCo HalfCheetah 10D-task (a) | Mean Return | 551 | 7 |
| Continuous Control | MuJoCo HalfCheetah 10D-task (b) | Mean Return | 706 | 7 |
| Continuous Control | MuJoCo HalfCheetah 10D-task (c) | Mean Return | 561 | 7 |
| Meta-Reinforcement Learning | MuJoCo HalfCheetah Mass variation (test) | CVaR 0.05 Return | 438 | 7 |
Showing 10 of 36 rows
