
Learning Long-Context Diffusion Policies via Past-Token Prediction

About

Reasoning over long sequences of observations and actions is essential for many robotic tasks. Yet, learning effective long-context policies from demonstrations remains challenging. As context length increases, training becomes increasingly expensive due to rising memory demands, and policy performance often degrades as a result of spurious correlations. Recent methods typically sidestep these issues by truncating context length, discarding historical information that may be critical for subsequent decisions. In this paper, we propose an alternative approach that explicitly regularizes the retention of past information. We first revisit the copycat problem in imitation learning and identify an opposite challenge in recent diffusion policies: rather than over-relying on prior actions, they often fail to capture essential dependencies between past and future actions. To address this, we introduce Past-Token Prediction (PTP), an auxiliary task in which the policy learns to predict past action tokens alongside future ones. This regularization significantly improves temporal modeling in the policy head, with minimal reliance on visual representations. Building on this observation, we further introduce a multistage training strategy: pre-train the visual encoder with short contexts, and fine-tune the policy head using cached long-context embeddings. This strategy preserves the benefits of PTP while greatly reducing memory and computational overhead. Finally, we extend PTP into a self-verification mechanism at test time, enabling the policy to score and select candidates consistent with past actions during inference. Experiments across four real-world and six simulated tasks demonstrate that our proposed method improves the performance of long-context diffusion policies by 3x and accelerates policy training by more than 10x.
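The abstract describes Past-Token Prediction (supervising the denoiser on past action tokens alongside future ones) and a test-time self-verification step (scoring sampled candidates by their consistency with already-executed actions). The sketch below illustrates both ideas in a toy form; all names, shapes, and the loss weighting are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes (assumptions, not from the paper):
H_PAST, H_FUTURE, ACT_DIM = 4, 8, 2  # past horizon, future horizon, action dim

def ptp_diffusion_loss(pred_eps, true_eps, past_weight=1.0):
    """Toy PTP objective: the diffusion head predicts noise for BOTH past
    and future action tokens. Supervising the past segment regularizes the
    policy to retain dependencies on historical actions."""
    past_err = np.mean((pred_eps[:H_PAST] - true_eps[:H_PAST]) ** 2)
    fut_err = np.mean((pred_eps[H_PAST:] - true_eps[H_PAST:]) ** 2)
    return fut_err + past_weight * past_err

def self_verify(candidate_past_preds, executed_past):
    """Toy self-verification: among sampled action candidates, select the
    one whose reconstructed past tokens best match the actions the robot
    actually executed."""
    errs = [np.mean((p - executed_past) ** 2) for p in candidate_past_preds]
    return int(np.argmin(errs))

# Usage sketch: a near-perfect noise prediction yields a small PTP loss,
# and self-verification picks the candidate matching the executed history.
eps = rng.normal(size=(H_PAST + H_FUTURE, ACT_DIM))
loss = ptp_diffusion_loss(eps + 0.01 * rng.normal(size=eps.shape), eps)
executed = rng.normal(size=(H_PAST, ACT_DIM))
candidates = [executed + rng.normal(size=executed.shape), executed.copy()]
best = self_verify(candidates, executed)
```

The multistage training strategy in the abstract would slot in around this loss: the visual encoder is trained once with short contexts, and the long-context embeddings it produces are cached, so only the (much smaller) policy head sees the full history during fine-tuning.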

Marcel Torne, Andy Tang, Yuejiang Liu, Chelsea Finn • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mug Replacement | Real Robot | Success Rate | 4.00e+3 | 4 |
| Stacking Puzzle | Real Robot | Success Rate | 52 | 4 |
| Buttons in Sequence | Real-world robotic manipulation tasks | Success Rate | 51 | 4 |
| Exchange Objects | Real-world robotic manipulation tasks | Success Rate | 62 | 4 |
| Hold the Pot Lid | Real-world robotic manipulation tasks | Success Rate | 55 | 4 |
| Marshmallows | Real Robot | Success Rate | 35 | 4 |
| Overall Average Performance | Real-world robotic manipulation tasks | Avg Success Rate | 0.61 | 4 |
| Sponge and Square | Real-world robotic manipulation tasks | Success Rate | 77 | 4 |
| Wipe the Table Once | Real-world robotic manipulation tasks | Success Rate | 70 | 4 |
| Wipe the Table Twice | Real-world robotic manipulation tasks | Success Rate | 56 | 4 |

(10 of 12 rows shown)
