In-Context Planning with Latent Temporal Abstractions
About
Planning-based reinforcement learning for continuous control is bottlenecked by two practical issues: planning at primitive time scales leads to prohibitive branching and long horizons, while real environments are frequently partially observable and exhibit regime shifts that invalidate stationary, fully observed dynamics assumptions. We introduce I-TAP (In-Context Latent Temporal-Abstraction Planner), an offline RL framework that unifies in-context adaptation with online planning in a learned discrete temporal-abstraction space. From offline trajectories, I-TAP learns an observation-conditioned residual-quantization VAE that compresses each observation-macro-action segment into a coarse-to-fine stack of discrete residual tokens, and a temporal Transformer that autoregressively predicts these token stacks from a short recent history. The resulting sequence model acts simultaneously as a context-conditioned prior over abstract actions and as a latent dynamics model. At test time, I-TAP performs Monte Carlo Tree Search directly in token space, using short histories for implicit adaptation without gradient updates, and decodes the selected token stacks into executable actions. Across deterministic MuJoCo, stochastic MuJoCo with per-episode latent dynamics regimes, and high-dimensional Adroit manipulation, including partially observable variants, I-TAP consistently matches or outperforms strong model-free and model-based offline baselines, demonstrating efficient and robust in-context planning under stochastic dynamics and partial observability.
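The coarse-to-fine residual tokenization at the heart of I-TAP can be illustrated with a minimal sketch. This is not the paper's implementation; it is a generic residual-quantization routine over hypothetical NumPy codebooks, showing how a continuous segment embedding is turned into a stack of discrete tokens (one per level) and decoded back by summing the selected code vectors:

```python
import numpy as np

def residual_quantize(z, codebooks):
    """Quantize vector z into a coarse-to-fine stack of discrete tokens.

    Each level quantizes the residual left over from the previous levels,
    so early tokens capture coarse structure and later tokens refine it.
    """
    tokens, residual = [], z.copy()
    for cb in codebooks:  # one codebook (codes x dim) per residual level
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        tokens.append(idx)
        residual = residual - cb[idx]
    return tokens

def decode(tokens, codebooks):
    """Reconstruct the embedding by summing the chosen code per level."""
    return sum(cb[i] for cb, i in zip(codebooks, tokens))

# Toy setup: 3 residual levels, 16 codes each, embedding dimension 4.
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(16, 4)) for _ in range(3)]
z = rng.normal(size=4)           # stand-in for a segment embedding
toks = residual_quantize(z, codebooks)
z_hat = decode(toks, codebooks)  # approximate reconstruction of z
```

In the actual method, a Transformer predicts such token stacks autoregressively from recent history, and MCTS searches over them before decoding to actions; the sketch above only conveys the encode/decode mechanics of the discrete abstraction space.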
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Offline Reinforcement Learning | Hopper Medium | Normalized Score 74.6 | 52 |
| Offline Reinforcement Learning | Walker2d Medium | Normalized Score 76.56 | 51 |
| Offline Reinforcement Learning | Walker2d Medium-Replay | Normalized Score 75.11 | 50 |
| Offline Reinforcement Learning | Hopper Medium-Replay | Normalized Score 84.43 | 44 |
| Offline Reinforcement Learning | Walker2d Medium-Expert | Normalized Score 98.8 | 31 |
| Offline Reinforcement Learning | Hopper Medium-Expert | Normalized Score 106.9 | 24 |
| Offline Reinforcement Learning | Hopper Medium (Noise 5) | Normalized Return 70.67 | 14 |
| Offline Reinforcement Learning | Hopper Medium (Noise 0) | Normalized Return 86.57 | 14 |
| Offline Reinforcement Learning | Hopper Medium-Expert (Noise 5) | Normalized Return 0.8287 | 7 |
| Offline Reinforcement Learning | Hopper Medium-Expert (Noise 0) | Normalized Return 111.7 | 7 |