
Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning

About

We study offline meta-reinforcement learning, a practical reinforcement learning paradigm that learns from offline data to adapt to new tasks. The distribution of offline data is determined jointly by the behavior policy and the task. Existing offline meta-reinforcement learning algorithms cannot distinguish these factors, making task representations unstable to changes in behavior policies. To address this problem, we propose a contrastive learning framework for task representations that are robust to the distribution mismatch of behavior policies between training and test. We design a bi-level encoder structure, use mutual information maximization to formalize task representation learning, derive a contrastive learning objective, and introduce several approaches to approximate the true distribution of negative pairs. Experiments on a variety of offline meta-reinforcement learning benchmarks demonstrate the advantages of our method over prior methods, especially on generalization to out-of-distribution behavior policies. The code is available at https://github.com/PKU-AI-Edge/CORRO.
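The contrastive objective derived from mutual information maximization is typically an InfoNCE-style loss: a transition's task embedding should score high against a positive (a transition from the same task) and low against negatives (transitions from other tasks). The sketch below is a minimal illustration of that idea, not the authors' implementation; the encoder outputs, cosine similarity, and toy vectors are placeholder assumptions.

```python
import numpy as np

def infonce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE contrastive loss over task embeddings.

    anchor, positive: (d,) embeddings of transitions from the same task.
    negatives: (K, d) embeddings of transitions from other tasks.
    Returns the cross-entropy of classifying the positive among K+1 candidates.
    """
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Logits over [positive, negative_1, ..., negative_K], temperature-scaled.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability before the softmax
    log_softmax = logits - np.log(np.exp(logits).sum())
    return -log_softmax[0]  # the positive sits at index 0

# Toy check: an aligned positive incurs a lower loss than a misaligned one.
anchor = np.array([1.0, 0.0])
good_pos = np.array([0.9, 0.1])
bad_pos = np.array([0.0, 1.0])
negs = np.array([[0.0, 1.0], [-1.0, 0.0]])
assert infonce_loss(anchor, good_pos, negs) < infonce_loss(anchor, bad_pos, negs)
```

Minimizing this loss pulls same-task transitions together in latent space regardless of which behavior policy generated them, which is what makes the resulting task representation robust to train/test policy mismatch.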

Haoqi Yuan, Zongqing Lu • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Meta-Reinforcement Learning | Point-Robot sampled 10 unseen (test) | Average Return | -7.8 | 10 |
| Offline Meta-Reinforcement Learning | Walker-Rand-Params sampled 10 unseen (test) | Average Return | 312.5 | 10 |
| Offline Meta-Reinforcement Learning | Half-Cheetah-Vel sampled 10 unseen (test) | Average Return | -65.6 | 10 |
| Reinforcement Learning | Ant-Dir Random OOD | Average Return | 0.00e+0 | 8 |
| Reinforcement Learning | Ant-Dir Random IID | Average Return | 1 | 8 |
| Reinforcement Learning | Ant-Dir Medium IID | Average Return | 8 | 8 |
| Reinforcement Learning | Ant-Dir Medium OOD | Average Return | -7 | 8 |
| Reinforcement Learning | Ant-Dir Expert IID | Average Return | -4 | 8 |
| Reinforcement Learning | Ant-Dir Expert OOD | Average Return | -14 | 8 |
