
Envision: Embodied Visual Planning via Goal-Imagery Video Diffusion

About

Embodied visual planning aims to enable manipulation tasks by imagining how a scene evolves toward a desired goal and using the imagined trajectories to guide actions. Video diffusion models, through their image-to-video generation capability, provide a promising foundation for such visual imagination. However, existing approaches are largely forward-predictive, generating trajectories conditioned only on the initial observation without explicit goal modeling, which often leads to spatial drift and goal misalignment. To address these challenges, we propose Envision, a diffusion-based framework that performs visual planning for embodied agents. By explicitly constraining generation with a goal image, our method enforces physical plausibility and goal consistency throughout the generated trajectory. Specifically, Envision operates in two stages. First, a Goal Imagery Model identifies task-relevant regions, performs region-aware cross-attention between the scene and the instruction, and synthesizes a coherent goal image that captures the desired outcome. Then, an Env-Goal Video Model, built upon a first-and-last-frame-conditioned video diffusion model (FL2V), interpolates between the initial observation and the goal image, producing smooth and physically plausible video trajectories that connect the start and goal states. Experiments on object manipulation and image editing benchmarks demonstrate that Envision achieves superior goal alignment, spatial consistency, and object preservation compared to baselines. The resulting visual plans can directly support downstream robotic planning and control, providing reliable guidance for embodied agents.
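The two-stage pipeline above can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: the function names, shapes, and both model bodies are placeholders (Stage 1 is stubbed with a deterministic perturbation instead of region-aware cross-attention, and Stage 2 uses linear interpolation in place of FL2V diffusion sampling), shown only to make the interface between the stages concrete.

```python
import numpy as np

H, W, C = 64, 64, 3  # toy image resolution

def goal_imagery_model(obs: np.ndarray, instruction: str) -> np.ndarray:
    """Stand-in for the Goal Imagery Model: observation + instruction -> goal image.

    A real implementation would locate task-relevant regions and apply
    region-aware cross-attention between scene features and the instruction;
    here we just apply a seeded perturbation as a placeholder.
    """
    rng = np.random.default_rng(abs(hash(instruction)) % (2**32))
    return np.clip(obs + 0.1 * rng.standard_normal(obs.shape), 0.0, 1.0)

def env_goal_video_model(first: np.ndarray, last: np.ndarray,
                         num_frames: int = 16) -> np.ndarray:
    """Stand-in for the Env-Goal Video Model (FL2V-style).

    A video diffusion model would synthesize physically plausible
    in-between frames; this placeholder linearly interpolates, which at
    least pins the trajectory to the given start and goal states.
    """
    ts = np.linspace(0.0, 1.0, num_frames)[:, None, None, None]
    return (1.0 - ts) * first[None] + ts * last[None]

# Stage 1: imagine the goal state; Stage 2: connect start and goal.
obs = np.zeros((H, W, C))  # initial observation
goal = goal_imagery_model(obs, "stack the red block on the blue block")
plan = env_goal_video_model(obs, goal, num_frames=16)
print(plan.shape)  # (16, 64, 64, 3)
```

The key design point carried over from the paper is that the generated trajectory is conditioned on both endpoints, so the first frame matches the observation and the last frame matches the imagined goal by construction.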

Yuming Gu, Yizhi Wang, Yining Hong, Yipeng Gao, Hao Jiang, Angtian Wang, Bo Liu, Nathaniel S. Dennler, Zhengfei Kuang, Hao Li, Gordon Wetzstein, Chongyang Ma • 2025

Related benchmarks

Task | Dataset | Result | Rank
Goal-image generation | Taste-Rob | LPIPS 0.09 | 5
Goal-image generation | RT-1 | LPIPS 0.2 | 5
Planning video generation | Taste-Rob (random 200 examples) | FVD 8.21 | 3
Planning video generation | RT-1 (first 100 videos) | FVD 9.95 | 3
Robot execution performance | Mixed dataset (IsaacGym and robomimic, test split) | Block Sorting 100 | 3
