Amortizing Trajectory Diffusion with Keyed Drift Fields
About
Diffusion-based trajectory planners can synthesize rich, multimodal action sequences for offline reinforcement learning, but their iterative denoising incurs substantial inference-time cost, making closed-loop planning slow under tight compute budgets. We study the problem of achieving diffusion-like trajectory planning behavior with one-step inference, while retaining the ability to sample diverse candidate plans and condition on the current state in a receding-horizon control loop. Our key observation is that conditional trajectory generation fails under naïve distribution-matching objectives when the similarity measure used to align generated trajectories with the dataset is dominated by unconstrained future dimensions. In practice, this causes attraction toward average trajectories, collapses action diversity, and yields near-static behavior. Our key insight is that conditional generative planning requires a conditioning-aware notion of neighborhood: trajectory updates should be computed using distances in a compact key space that reflects the condition, while still applying updates in the full trajectory space. Building on this, we introduce Keyed Drifting Policies (KDP), a one-step trajectory generator trained with a drift-field objective that attracts generated trajectories toward condition-matched dataset windows and repels them from nearby generated samples, using a stop-gradient drifted target to amortize iterative refinement into training. At inference, the resulting policy produces a full trajectory window in a single forward pass. Across standard RL benchmarks and real-time hardware deployments, KDP achieves strong performance with one-step inference and substantially lower planning latency than diffusion sampling. Project website, code and videos: https://keyed-drifting.github.io/
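The keyed-neighborhood idea can be made concrete with a minimal NumPy sketch: neighborhoods are found via distances in a compact key space, while the attraction and repulsion updates are applied in the full trajectory space. The `key_fn` projection, the nearest-neighbor pairing, and the step sizes `alpha`/`beta` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def key_fn(traj, k_dims):
    # Illustrative key map: project each trajectory window onto a few
    # condition-relevant dimensions of its first timestep. KDP's key
    # space is a compact, condition-aware representation; this fixed
    # projection is an assumption for the sketch.
    return traj[:, 0, :k_dims]

def keyed_drift_targets(gen, data, k_dims=2, alpha=0.5, beta=0.1):
    """Compute drifted targets for a batch of generated trajectories.

    gen  : (B, H, D) generated trajectory windows
    data : (N, H, D) dataset trajectory windows
    Distances are measured in key space; updates are full-space.
    """
    kg = key_fn(gen, k_dims)           # (B, k) keys of generated samples
    kd = key_fn(data, k_dims)          # (N, k) keys of dataset windows

    # Attraction: move each generated sample toward its key-space
    # nearest condition-matched dataset window (full-space difference).
    d_gd = np.linalg.norm(kg[:, None, :] - kd[None, :, :], axis=-1)  # (B, N)
    nn = np.argmin(d_gd, axis=1)                                     # (B,)
    attract = data[nn] - gen                                         # (B, H, D)

    # Repulsion: push away from the key-space nearest *other* generated
    # sample, preserving diversity among candidate plans.
    d_gg = np.linalg.norm(kg[:, None, :] - kg[None, :, :], axis=-1)  # (B, B)
    np.fill_diagonal(d_gg, np.inf)
    rival = np.argmin(d_gg, axis=1)
    repel = gen - gen[rival]                                         # (B, H, D)

    # Drifted target; in training this would be treated as a
    # stop-gradient regression target for the one-step generator.
    return gen + alpha * attract + beta * repel
```

In training, the one-step generator would be regressed onto these drifted targets (held fixed via stop-gradient), so that at inference a single forward pass amortizes the iterative attract/repel refinement.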
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score | 92.5 | 155 |
| Offline Reinforcement Learning | D4RL hopper-medium-expert | Normalized Score | 103.6 | 153 |
| Offline Reinforcement Learning | D4RL walker2d-medium-expert | Normalized Score | 108.5 | 124 |
| Offline Reinforcement Learning | D4RL Medium HalfCheetah | Normalized Score | 62.1 | 97 |
| Offline Reinforcement Learning | D4RL Medium Walker2d | Normalized Score | 87.2 | 96 |
| Offline Reinforcement Learning | D4RL Medium Hopper | Normalized Score | 90.3 | 64 |
| Hand Manipulation | Adroit door-cloned | -- | -- | 23 |
| Relocate | Adroit Relocate Cloned v0 | Normalized Score | 62.8 | 21 |
| Offline Reinforcement Learning | D4RL Locomotion Suite | Average Normalized Score | 90.7 | 19 |
| Goal-Conditioned Trajectory Planning | Maze2D U-Maze | Success Score | 122.3 | 8 |