
Amortizing Trajectory Diffusion with Keyed Drift Fields

About

Diffusion-based trajectory planners can synthesize rich, multimodal action sequences for offline reinforcement learning, but their iterative denoising incurs substantial inference-time cost, making closed-loop planning slow under tight compute budgets. We study the problem of achieving diffusion-like trajectory planning behavior with one-step inference, while retaining the ability to sample diverse candidate plans and condition on the current state in a receding-horizon control loop. Our key observation is that conditional trajectory generation fails under naïve distribution-matching objectives when the similarity measure used to align generated trajectories with the dataset is dominated by unconstrained future dimensions. In practice, this causes attraction toward average trajectories, collapses action diversity, and yields near-static behavior. Our key insight is that conditional generative planning requires a conditioning-aware notion of neighborhood: trajectory updates should be computed using distances in a compact key space that reflects the condition, while still applying updates in the full trajectory space. Building on this, we introduce Keyed Drifting Policies (KDP), a one-step trajectory generator trained with a drift-field objective that attracts generated trajectories toward condition-matched dataset windows and repels them from nearby generated samples, using a stop-gradient drifted target to amortize iterative refinement into training. At inference, the resulting policy produces a full trajectory window in a single forward pass. Across standard RL benchmarks and real-time hardware deployments, KDP achieves strong performance with one-step inference and substantially lower planning latency than diffusion sampling. Project website, code, and videos: https://keyed-drifting.github.io/
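The keyed drift update described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: the key function, dimensions, Gaussian weighting, and step sizes are all assumptions chosen for clarity. It shows the core idea — compute attraction/repulsion weights from distances in a compact key space that reflects the condition, but apply the resulting updates in the full trajectory space — producing a drifted target that a one-step generator could regress onto (with a stop-gradient on the target).

```python
import numpy as np

# Illustrative dimensions (not from the paper).
STATE_DIM, HORIZON, ACT_DIM = 4, 8, 2
TRAJ_DIM = HORIZON * ACT_DIM  # flattened action window

def key(traj, cond):
    """Hypothetical key map: first action step concatenated with the
    condition, so distances reflect the conditioned near-term behavior
    rather than unconstrained far-future dimensions."""
    return np.concatenate([traj[:ACT_DIM], cond])

def drift_update(gen_batch, data_batch, cond, step=0.1, sigma=1.0):
    """One keyed drift step on a batch of generated trajectories:
    attract each sample toward condition-matched dataset windows
    (weighted by key-space similarity) and repel it from nearby
    generated samples. Updates are applied in full trajectory space."""
    keys_g = np.stack([key(t, cond) for t in gen_batch])
    keys_d = np.stack([key(t, cond) for t in data_batch])
    drifted = np.empty_like(gen_batch)
    for i, (t, k) in enumerate(zip(gen_batch, keys_g)):
        # Attraction weights from key-space distances to dataset windows.
        w_att = np.exp(-np.sum((keys_d - k) ** 2, axis=1) / sigma)
        w_att /= w_att.sum()
        attract = (w_att[:, None] * (data_batch - t)).sum(axis=0)
        # Repulsion from other generated samples, also keyed.
        w_rep = np.exp(-np.sum((keys_g - k) ** 2, axis=1) / sigma)
        w_rep[i] = 0.0  # no self-repulsion
        repel = -(w_rep[:, None] * (gen_batch - t)).sum(axis=0) / max(w_rep.sum(), 1e-8)
        drifted[i] = t + step * (attract + repel)
    # In training, this drifted batch would serve as a stop-gradient
    # regression target for the one-step generator.
    return drifted
```

Iterating this update amortizes the refinement into training; at deployment only the one-step generator runs, which is where the latency savings over iterative diffusion sampling come from.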

Gokul Puthumanaillam, Melkior Ornik • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score | 92.5 | 155 |
| Offline Reinforcement Learning | D4RL hopper-medium-expert | Normalized Score | 103.6 | 153 |
| Offline Reinforcement Learning | D4RL walker2d-medium-expert | Normalized Score | 108.5 | 124 |
| Offline Reinforcement Learning | D4RL Medium HalfCheetah | Normalized Score | 62.1 | 97 |
| Offline Reinforcement Learning | D4RL Medium Walker2d | Normalized Score | 87.2 | 96 |
| Offline Reinforcement Learning | D4RL Medium Hopper | Normalized Score | 90.3 | 64 |
| Hand Manipulation | Adroit door-cloned | — | — | 23 |
| Relocate | Adroit Relocate Cloned v0 | Normalized Score | 62.8 | 21 |
| Offline Reinforcement Learning | D4RL Locomotion Suite | Average Normalized Score | 90.7 | 19 |
| Goal-Conditioned Trajectory Planning | Maze2D U-Maze | Success Score | 122.3 | 8 |

Showing 10 of 17 rows.
