Contextual Latent World Models for Offline Meta Reinforcement Learning
About
Offline meta-reinforcement learning seeks to learn policies that generalize across related tasks from fixed datasets. Context-based methods infer a task representation from transition histories, but learning effective representations without supervision remains a challenge. In parallel, latent world models have demonstrated strong self-supervised representation learning through temporal consistency. We introduce contextual latent world models, which condition latent world models on inferred task representations and train them jointly with the context encoder. This enforces task-conditioned temporal consistency, yielding task representations that capture task-dependent dynamics rather than merely discriminating between tasks. Our method learns more expressive task representations and significantly improves generalization to unseen tasks across the MuJoCo, Contextual-DMC (DeepMind Control), and Meta-World benchmarks.
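Below is a minimal PyTorch sketch of this idea, assuming low-dimensional state observations: a permutation-invariant context encoder infers a task embedding `z` from a set of transitions, and a latent dynamics model conditioned on `z` is trained with a latent self-prediction (temporal-consistency) loss whose gradients also flow into the context encoder. All module names, batch keys, and architectural choices here (mean-pooled context encoder, stop-gradient latent target) are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextEncoder(nn.Module):
    """Infers a task embedding z from a set of context transitions (s, a, r, s')."""
    def __init__(self, obs_dim, act_dim, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim + act_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim),
        )

    def forward(self, s, a, r, s_next):
        # s, a, s_next: (batch, n_ctx, dim); r: (batch, n_ctx, 1)
        feats = self.net(torch.cat([s, a, r, s_next], dim=-1))
        return feats.mean(dim=1)  # mean-pool over the context set -> (batch, z_dim)

class ContextualLatentWorldModel(nn.Module):
    """Latent dynamics model whose transition function is conditioned on z."""
    def __init__(self, obs_dim, act_dim, z_dim, latent_dim, hidden=128):
        super().__init__()
        self.obs_encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim))
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim + act_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim))

    def forward(self, s, a, z):
        h = self.obs_encoder(s)
        return self.dynamics(torch.cat([h, a, z], dim=-1))  # predicted next latent

def consistency_loss(model, ctx_encoder, batch):
    """Task-conditioned temporal-consistency loss; backprop updates both the
    world model and the context encoder jointly. Batch keys are hypothetical."""
    s, a = batch["obs"], batch["act"]
    z = ctx_encoder(batch["ctx_obs"], batch["ctx_act"],
                    batch["ctx_rew"], batch["ctx_next_obs"])
    pred_next = model(s, a, z)
    with torch.no_grad():  # stop-gradient target, as in latent self-prediction objectives
        target_next = model.obs_encoder(batch["next_obs"])
    return F.mse_loss(pred_next, target_next)
```

Because the same predicted next latent must be consistent across tasks only once conditioned on `z`, minimizing this loss pressures the context encoder to encode the task-dependent part of the dynamics rather than arbitrary task-discriminative features.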
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Ant-dir | MuJoCo in-distribution | Average Return | 726.7 | 6 |
| Cheetah-LS | Contextual-DMC (in-distribution) | Average Return | 935 | 6 |
| Cheetah-speed | Contextual-DMC (in-distribution) | Average Return | 706.4 | 6 |
| Finger-LS | Contextual-DMC (in-distribution) | Average Return | 972 | 6 |
| Finger-speed | Contextual-DMC (in-distribution) | Average Return | 943.3 | 6 |
| Hopper-mass | MuJoCo in-distribution | Average Return | 566 | 6 |
| Meta-Reinforcement Learning | Meta-World in-distribution v2 (test) | Assembly Success Rate | 0.00 | 6 |
| Offline Meta-Reinforcement Learning | MuJoCo Ant-dir In-distribution | Average Return | 863.1 | 6 |
| Offline Meta-Reinforcement Learning | MuJoCo Cheetah-LS In-distribution | Average Return | 944.8 | 6 |
| Offline Meta-Reinforcement Learning | MuJoCo Cheetah-speed In-distribution | Average Return | 751.2 | 6 |