
In-Context Planning with Latent Temporal Abstractions

About

Planning-based reinforcement learning for continuous control is bottlenecked by two practical issues: planning at the primitive-action time scale leads to prohibitive branching factors and long horizons, while real environments are frequently partially observable and exhibit regime shifts that invalidate stationary, fully observed dynamics assumptions. We introduce I-TAP (In-Context Latent Temporal-Abstraction Planner), an offline RL framework that unifies in-context adaptation with online planning in a learned discrete temporal-abstraction space. From offline trajectories, I-TAP learns an observation-conditioned residual-quantization VAE that compresses each observation-macro-action segment into a coarse-to-fine stack of discrete residual tokens, and a temporal Transformer that autoregressively predicts these token stacks from a short recent history. The resulting sequence model acts simultaneously as a context-conditioned prior over abstract actions and as a latent dynamics model. At test time, I-TAP performs Monte Carlo Tree Search directly in token space, using short histories for implicit adaptation without gradient updates, and decodes the selected token stacks into executable actions. Across deterministic MuJoCo, stochastic MuJoCo with per-episode latent dynamics regimes, and high-dimensional Adroit manipulation, including partially observable variants, I-TAP consistently matches or outperforms strong model-free and model-based offline baselines, demonstrating efficient and robust in-context planning under stochastic dynamics and partial observability.
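The coarse-to-fine residual tokenization at the heart of the abstract can be sketched as a plain residual vector quantizer: each level quantizes the residual left by the previous levels, so the token stack refines the latent from coarse to fine. This is an illustrative sketch only; the codebook sizes, depth, and nearest-neighbor lookup here are assumptions, not the paper's actual VAE implementation.

```python
import numpy as np

def residual_quantize(z, codebooks):
    """Encode latent z as a coarse-to-fine stack of discrete tokens.

    Each level picks the nearest code to the current residual, then
    subtracts it, so later (finer) levels quantize what earlier
    (coarser) levels left unexplained.
    """
    tokens, residual = [], z.copy()
    for cb in codebooks:                     # cb: (K, d) codebook for this level
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        tokens.append(idx)
        residual = residual - cb[idx]
    return tokens

def decode(tokens, codebooks):
    """Reconstruct the latent by summing the selected codes across levels."""
    return sum(cb[i] for cb, i in zip(codebooks, tokens))

# Toy setup: 3 levels with shrinking code scales (hypothetical hyperparameters).
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(8, 4)) * s for s in (1.0, 0.5, 0.25)]
z = rng.normal(size=4)

toks = residual_quantize(z, codebooks)   # e.g. a stack of 3 token indices
z_hat = decode(toks, codebooks)          # approximate reconstruction of z
```

A planner operating in this token space searches over the discrete stack `toks` rather than raw continuous actions, which is what keeps the branching factor tractable.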

Baiting Luo, Yunuo Zhang, Nathaniel S. Keplinger, Samir Gupta, Abhishek Dubey, Ayan Mukhopadhyay • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Offline Reinforcement Learning | Hopper Medium | Normalized Score: 74.6 | 52 |
| Offline Reinforcement Learning | Walker2d Medium | Normalized Score: 76.56 | 51 |
| Offline Reinforcement Learning | Walker2d Medium-Replay | Normalized Score: 75.11 | 50 |
| Offline Reinforcement Learning | Hopper Medium-Replay | Normalized Score: 84.43 | 44 |
| Offline Reinforcement Learning | Walker2d Medium-Expert | Normalized Score: 98.8 | 31 |
| Offline Reinforcement Learning | Hopper Medium-Expert | Normalized Score: 106.9 | 24 |
| Offline Reinforcement Learning | Hopper Medium (Noise 5) | Normalized Return: 70.67 | 14 |
| Offline Reinforcement Learning | Hopper Medium (Noise 0) | Normalized Return: 86.57 | 14 |
| Offline Reinforcement Learning | Hopper Medium-Expert (Noise 5) | Normalized Return: 0.8287 | 7 |
| Offline Reinforcement Learning | Hopper Medium-Expert (Noise 0) | Normalized Return: 111.7 | 7 |

Showing 10 of 27 rows.
