Towards an Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning
About
As a marriage between offline RL and meta-RL, the advent of offline meta-reinforcement learning (OMRL) has shown great promise in enabling RL agents to perform multiple tasks and adapt quickly while acquiring knowledge safely. Among these methods, context-based OMRL (COMRL), a popular paradigm, aims to learn a universal policy conditioned on effective task representations. In this work, by examining several key milestones in the field of COMRL, we propose to integrate these seemingly independent methodologies into a unified framework. Most importantly, we show that the pre-existing COMRL algorithms are essentially optimizing the same mutual information objective between the task variable $M$ and its latent representation $Z$ by implementing various approximate bounds. Such theoretical insight offers ample design freedom for novel algorithms. As demonstrations, we propose a supervised and a self-supervised implementation of $I(Z; M)$, and empirically show that the corresponding optimization algorithms exhibit remarkable generalization across a broad spectrum of RL benchmarks, context shift scenarios, data qualities and deep learning architectures. This work lays the information theoretic foundation for COMRL methods, leading to a better understanding of task representation learning in the context of reinforcement learning. Given its generality, we envision our framework as a promising offline pre-training paradigm of foundation models for decision making.
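To make the mutual-information objective concrete, below is a minimal, hypothetical sketch of one standard way to lower-bound $I(Z; M)$ in a self-supervised fashion: an InfoNCE-style contrastive estimate, where two context encodings drawn from the same task form a positive pair and encodings of other tasks act as negatives. The encoder, data, and function names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def infonce_bound(z_query, z_keys, temperature=0.1):
    """InfoNCE lower bound on I(Z; M) (illustrative sketch).

    z_query[i] and z_keys[i] are two latent encodings of contexts drawn
    from the same task M_i; all other rows serve as negatives.
    """
    # Cosine similarities between every query/key pair.
    zq = z_query / np.linalg.norm(z_query, axis=1, keepdims=True)
    zk = z_keys / np.linalg.norm(z_keys, axis=1, keepdims=True)
    logits = zq @ zk.T / temperature                     # shape (N, N)
    # Numerically stable log-softmax over keys; diagonal = positives.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = len(z_query)
    # I(Z; M) >= log N + E[log p(positive key | query)]
    return np.log(n) + np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
# Toy setup: 8 tasks, each with a 4-d "true" task embedding; two noisy
# encodings per task stand in for a context encoder's outputs.
tasks = rng.normal(size=(8, 4))
z1 = tasks + 0.05 * rng.normal(size=tasks.shape)
z2 = tasks + 0.05 * rng.normal(size=tasks.shape)
tight = infonce_bound(z1, z2)                            # Z aligned with M
loose = infonce_bound(z1, rng.normal(size=tasks.shape))  # uninformative Z
```

A representation that captures task identity yields a larger bound (`tight > loose`), which is exactly the signal a contrastive implementation of $I(Z; M)$ trains on; the bound also saturates at $\log N$, so larger negative batches permit tighter estimates.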
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | Ant-Dir Random IID | Average Return | 81 | 8 |
| Reinforcement Learning | Ant-Dir Random OOD | Average Return | 62 | 8 |
| Reinforcement Learning | Ant-Dir Medium IID | Average Return | 220 | 8 |
| Reinforcement Learning | Ant-Dir Medium OOD | Average Return | 243 | 8 |
| Reinforcement Learning | Ant-Dir Expert IID | Average Return | 279 | 8 |
| Reinforcement Learning | Ant-Dir Expert OOD | Average Return | 262 | 8 |
| Offline Meta-Reinforcement Learning | Walker-friction (out-of-distribution) | Average Return | 484.6 | 6 |
| Offline Meta-Reinforcement Learning | MuJoCo Ant-dir In-distribution | Average Return | 812.9 | 6 |
| Offline Meta-Reinforcement Learning | MuJoCo Cheetah-speed In-distribution | Average Return | 586.4 | 6 |
| Offline Meta-Reinforcement Learning | Contextual-DMC Finger-LS In-distribution | Average Return | 885.6 | 6 |