Towards an Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning

About

As a marriage between offline RL and meta-RL, the advent of offline meta-reinforcement learning (OMRL) has shown great promise in enabling RL agents to multi-task and quickly adapt while acquiring knowledge safely. Among these, context-based OMRL (COMRL), a popular paradigm, aims to learn a universal policy conditioned on effective task representations. In this work, by examining several key milestones in the field of COMRL, we propose to integrate these seemingly independent methodologies into a unified framework. Most importantly, we show that the pre-existing COMRL algorithms are essentially optimizing the same mutual information objective between the task variable $M$ and its latent representation $Z$ by implementing various approximate bounds. Such theoretical insight offers ample design freedom for novel algorithms. As demonstrations, we propose a supervised and a self-supervised implementation of $I(Z; M)$, and empirically show that the corresponding optimization algorithms exhibit remarkable generalization across a broad spectrum of RL benchmarks, context shift scenarios, data qualities and deep learning architectures. This work lays the information theoretic foundation for COMRL methods, leading to a better understanding of task representation learning in the context of reinforcement learning. Given its generality, we envision our framework as a promising offline pre-training paradigm of foundation models for decision making.
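The mutual information $I(Z; M)$ between task and representation is intractable in general, so practical COMRL methods optimize tractable bounds on it. As one illustration, the sketch below implements the standard InfoNCE lower bound on $I(Z; M)$ in NumPy, where same-task (context, representation) pairs score higher than cross-task pairs. The function name and the score-matrix convention are illustrative assumptions for this sketch, not the paper's actual supervised or self-supervised objectives.

```python
import numpy as np

def info_nce_lower_bound(scores):
    """InfoNCE lower bound on I(Z; M) (an illustrative sketch).

    scores[i, j] is a critic score for representation z_i paired with
    task context m_j; diagonal entries are the positive (same-task) pairs.
    The bound is I(Z; M) >= log(N) + E[log softmax(scores)_ii],
    which is tight only as the critic and batch size N grow.
    """
    n = scores.shape[0]
    # Row-wise log-softmax: log-probability of picking the positive pair
    # among the n candidate tasks in the batch.
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return np.log(n) + log_probs[np.arange(n), np.arange(n)].mean()
```

With a critic that cleanly separates tasks (a strongly diagonal score matrix), the estimate approaches its ceiling of $\log N$; with an uninformative critic (all scores equal), it collapses to zero, reflecting that no task information was captured by $Z$.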

Lanqing Li, Hai Zhang, Xinyu Zhang, Shatong Zhu, Yang Yu, Junqiao Zhao, Pheng-Ann Heng• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | Ant-Dir Random IID | Average Return | 81 | 8 |
| Reinforcement Learning | Ant-Dir Random OOD | Average Return | 62 | 8 |
| Reinforcement Learning | Ant-Dir Medium IID | Average Return | 220 | 8 |
| Reinforcement Learning | Ant-Dir Medium OOD | Average Return | 243 | 8 |
| Reinforcement Learning | Ant-Dir Expert IID | Average Return | 279 | 8 |
| Reinforcement Learning | Ant-Dir Expert OOD | Average Return | 262 | 8 |
| Offline Meta-Reinforcement Learning | Walker-friction (out-of-distribution) | Average Return | 484.6 | 6 |
| Offline Meta-Reinforcement Learning | MuJoCo Ant-dir In-distribution | Average Return | 812.9 | 6 |
| Offline Meta-Reinforcement Learning | MuJoCo Cheetah-speed In-distribution | Average Return | 586.4 | 6 |
| Offline Meta-Reinforcement Learning | Contextual-DMC Finger-LS In-distribution | Average Return | 885.6 | 6 |

Showing 10 of 34 benchmark rows.
