IPD: Boosting Sequential Policy with Imaginary Planning Distillation in Offline Reinforcement Learning
About
Decision-transformer-based sequential policies have emerged as a powerful paradigm in offline reinforcement learning (RL), yet their efficacy remains constrained by the quality of static datasets and inherent architectural limitations. Specifically, these models often struggle to effectively integrate suboptimal experiences and fail to explicitly plan for an optimal policy. To bridge this gap, we propose **Imaginary Planning Distillation (IPD)**, a novel framework that seamlessly incorporates offline planning into data generation, supervised training, and online inference. Our framework first learns a world model equipped with uncertainty measures and a quasi-optimal value function from the offline data. These components are used to identify suboptimal trajectories and augment them with reliable, imagined optimal rollouts generated via Model Predictive Control (MPC). A Transformer-based sequential policy is then trained on this enriched dataset, complemented by a value-guided objective that promotes distillation of the optimal policy. By replacing the conventional, manually tuned return-to-go with the learned quasi-optimal value function, IPD improves both decision-making stability and performance during inference. Empirical evaluations on the D4RL benchmark demonstrate that IPD significantly outperforms several state-of-the-art value-based and transformer-based offline RL methods across diverse tasks.
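The imagined-rollout step described above can be sketched as a simple random-shooting MPC loop: sample candidate action sequences, roll each through the learned world model, and keep the sequence that scores highest under the quasi-optimal value function. The `world_model` and `value_fn` below are hypothetical toy stand-ins for the learned components, not the paper's actual models; this is a minimal illustration of the planning idea, not IPD's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for IPD's learned components: a deterministic
# world model f(s, a) -> s' and a quasi-optimal value function V(s).
# Both are toy linear/quadratic functions so the sketch runs end to end.
def world_model(state, action):
    return 0.9 * state + 0.1 * action

def value_fn(state):
    # Toy objective: higher value the closer the state is to the origin.
    return -np.sum(state ** 2)

def mpc_rollout(state, horizon=5, n_candidates=64, action_dim=2):
    """Random-shooting MPC: sample candidate action sequences, simulate
    each with the world model, and return the sequence whose terminal
    state scores highest under the value function."""
    candidates = rng.normal(size=(n_candidates, horizon, action_dim))
    best_score, best_seq = -np.inf, None
    for seq in candidates:
        s = state
        for a in seq:
            s = world_model(s, a)
        score = value_fn(s)
        if score > best_score:
            best_score, best_seq = score, seq
    return best_seq, best_score

actions, score = mpc_rollout(np.ones(2))
print(actions.shape)
```

In IPD this loop would additionally gate candidates by the world model's uncertainty estimate, so only reliable imagined rollouts are added to the training data.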
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | D4RL Gym walker2d (medium-replay) | Normalized Return | 96.2 | 68 |
| Offline Reinforcement Learning | D4RL Gym halfcheetah-medium | Normalized Return | 51.2 | 60 |
| Offline Reinforcement Learning | D4RL Gym walker2d medium | Normalized Return | 89.5 | 58 |
| Offline Reinforcement Learning | D4RL Gym hopper (medium-replay) | Normalized Return | 103.2 | 44 |
| Offline Reinforcement Learning | D4RL Gym halfcheetah-medium-replay | Normalized Average Return | 49.9 | 43 |
| Offline Reinforcement Learning | D4RL Gym hopper-medium | Normalized Return | 81.6 | 41 |
| Offline Reinforcement Learning | D4RL Kitchen-Partial | Normalized Performance | 74.3 | 19 |
| Offline Reinforcement Learning | D4RL Kitchen (kitchen-complete) | Normalized Score | 78.4 | 9 |
| Offline Reinforcement Learning | D4RL Adroit pen-cloned v1 | Normalized Score | 92.8 | 9 |
| Offline Reinforcement Learning | D4RL Adroit hammer-human v1 | Normalized Score | 2.29e+3 | 9 |