
Draft-and-Target Sampling for Video Generation Policy

About

Video generation models have been used as robot policies that predict the future states of a task execution, conditioned on a task description and an observation. However, prior work overlooks their high computational cost and long inference time. To address this challenge, we propose Draft-and-Target Sampling, a novel training-free diffusion inference paradigm for video generation policies that improves inference efficiency. We introduce a self-play denoising approach that exploits two complementary denoising trajectories within a single model: draft sampling takes large steps to quickly generate a global trajectory, and target sampling takes small steps to verify it. To further speed up generation, we introduce token chunking and a progressive acceptance strategy that reduce redundant computation. Experiments on three benchmarks show that our method achieves up to a 2.1x speedup over current state-of-the-art methods with minimal compromise in success rate. Our code is available.
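The draft/target interplay described above resembles speculative decoding applied to diffusion denoising. The toy sketch below illustrates the idea under stated assumptions: `denoise_step` is a hypothetical stand-in for one denoiser call (the real method would invoke the video diffusion model), and the acceptance test, step counts, and tolerance are illustrative, not the paper's actual progressive acceptance strategy.

```python
import numpy as np

def denoise_step(x, t, dt):
    # Hypothetical stand-in for one denoiser call: a toy linear
    # contraction toward zero. In the real method this would be a
    # forward pass of the video diffusion policy model.
    return x * (1.0 - dt)

def draft_and_target_sample(x_T, n_draft_steps=5, substeps=4, tol=0.1):
    """Toy sketch of draft-and-target sampling (hypothetical interface).

    Draft sampling takes one large denoising step; target sampling
    re-runs the same interval as several small steps. If the two
    results agree within a relative tolerance, the cheap draft result
    is accepted; otherwise we fall back to the target result.
    """
    x = x_T
    dt = 1.0 / n_draft_steps
    accepted = 0
    for step in range(n_draft_steps):
        t = 1.0 - step * dt
        x_draft = denoise_step(x, t, dt)  # one large (fast) step
        x_target = x
        for k in range(substeps):  # small verification steps
            x_target = denoise_step(x_target, t - k * dt / substeps, dt / substeps)
        if np.linalg.norm(x_draft - x_target) <= tol * np.linalg.norm(x_target) + 1e-8:
            x = x_draft  # draft verified: keep the fast result
            accepted += 1
        else:
            x = x_target  # draft rejected: use the careful result
    return x, accepted
```

With the toy contraction denoiser, every draft step stays close to its multi-step verification and is accepted, so the loop runs at draft cost; a real model would reject steps where the coarse trajectory drifts.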

Qikang Zhang, Yingjie Lei, Wei Liu, Daochang Liu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Embodied Visual Navigation | iTHOR | Success Rate (Kitchen) | 30 | 8 |
| Robot Manipulation | Meta-World | Door Open Success Rate | 69.3 | 6 |
| Goal-conditioned Robot Manipulation | LIBERO (test) | Success Rate: Put Red Mug Left | 30 | 6 |
