
Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables

About

Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While in principle meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality. Current methods rely heavily on on-policy experience, limiting their sample efficiency. They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse reward problems. In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control. In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration. We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency. Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.
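The core idea in the abstract — maintain a probabilistic belief over a latent task variable z, sharpen it as context transitions arrive, and explore by acting under a sampled hypothesis — can be illustrated with a toy one-dimensional sketch. This is not the paper's actual encoder network; the product-of-Gaussians combination rule and all names below (`posterior_over_z`, the per-transition means/variances) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_over_z(context_mus, context_sigmas, prior_mu=0.0, prior_sigma=1.0):
    """Toy belief update over a scalar latent task variable z.

    Each observed context transition contributes a Gaussian factor
    N(mu_i, sigma_i^2); factors are multiplied with a N(prior_mu,
    prior_sigma^2) prior via precision-weighted averaging.
    """
    precisions = np.concatenate([[1.0 / prior_sigma**2],
                                 1.0 / np.asarray(context_sigmas, dtype=float) ** 2])
    means = np.concatenate([[prior_mu], np.asarray(context_mus, dtype=float)])
    post_precision = precisions.sum()
    post_mu = (precisions * means).sum() / post_precision
    post_sigma = np.sqrt(1.0 / post_precision)
    return post_mu, post_sigma

# With no context collected yet, the posterior equals the prior,
# so sampled task hypotheses are diverse (broad exploration).
mu0, sigma0 = posterior_over_z([], [])

# After a few informative transitions, the posterior narrows and
# sampled hypotheses concentrate on the inferred task.
mu1, sigma1 = posterior_over_z([0.9, 1.1, 1.0], [0.5, 0.5, 0.5])

# Posterior sampling: condition the policy on one sampled z per episode.
z = rng.normal(mu1, sigma1)
```

In the full method the policy is conditioned on z, and because z is sampled rather than trained on-policy, the actor-critic updates can reuse off-policy replay data.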

Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, Sergey Levine • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Meta-Reinforcement Learning | Walker-Rand-Params sampled 10 unseen (test) | Average Return | 284.5 | 10 |
| Offline Meta-Reinforcement Learning | Point-Robot sampled 10 unseen (test) | Average Return | -17 | 10 |
| Offline Meta-Reinforcement Learning | Half-Cheetah-Vel sampled 10 unseen (test) | Average Return | -133.7 | 10 |
| Continuous Control | MuJoCo HalfCheetah Vel (test) | Mean Return | -534 | 9 |
| Actuator Inversion | Cheetah (train) | AER | 428 | 8 |
| Actuator Inversion | Cheetah Ceval-in (eval-in) | AER | 430 | 8 |
| Reinforcement Learning | Ant-gravity v2 | Average Return | 4.07e+3 | 8 |
| Reinforcement Learning | Walker2d gravity v2 | Average Return | 4.28e+3 | 8 |
| Zero-Shot Actuator Inversion | AIB Cheetah environment Ceval-out | AER | 208 | 8 |
| Zero-Shot Actuator Inversion | AIB Reacher E environment Ceval-out | AER | 799 | 8 |
Showing 10 of 72 rows.
