
LaMP: Learning Vision-Language-Action Policies with 3D Scene Flow as Latent Motion Prior

About

We introduce LaMP, a dual-expert Vision-Language-Action framework that embeds dense 3D scene flow as a latent motion prior for robotic manipulation. Existing VLA models regress actions directly from 2D semantic visual features, forcing them to learn complex 3D physical interactions implicitly; this implicit learning strategy degrades under unfamiliar spatial dynamics. LaMP addresses this limitation by aligning a flow-matching Motion Expert with a policy-predicting Action Expert through gated cross-attention. Specifically, the Motion Expert generates a one-step partially denoised 3D scene flow, and its hidden states condition the Action Expert without full multi-step reconstruction. We evaluate LaMP on the LIBERO, LIBERO-Plus, and SimplerEnv-WidowX simulation benchmarks as well as in real-world experiments. LaMP consistently outperforms the evaluated VLA baselines across all three benchmarks, achieving the highest reported average success rates under the same training budgets. On LIBERO-Plus out-of-distribution perturbations, LaMP shows improved robustness, with an average 9.7% gain over the strongest prior baseline. Our project page is available at https://summerwxk.github.io/lamp-project-page/.

Xinkai Wang, Chenyi Wang, Yifu Xu, Mingzhe Ye, Fu-Cheng Zhang, Jialin Tian, Xinyu Zhan, Lifeng Zhu, Cewu Lu, Lixin Yang• 2026
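The gated cross-attention conditioning described in the abstract can be sketched as follows: action-expert tokens attend to the Motion Expert's hidden states, and a learnable gate scales the injected motion signal. This is a minimal illustrative sketch in PyTorch; the class name, dimensions, and zero-initialized tanh gate are assumptions of ours, not details from the paper.

```python
# Hedged sketch of gated cross-attention between a Motion Expert and an
# Action Expert. All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class GatedCrossAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Zero-initialized gate: at initialization the layer is an identity
        # mapping, so motion conditioning is blended in gradually.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, action_tokens: torch.Tensor,
                motion_hidden: torch.Tensor) -> torch.Tensor:
        # action_tokens: (B, T_a, dim) from the Action Expert.
        # motion_hidden: (B, T_m, dim) hidden states of the Motion Expert's
        # one-step partially denoised scene flow.
        attended, _ = self.attn(self.norm(action_tokens),
                                motion_hidden, motion_hidden)
        return action_tokens + torch.tanh(self.gate) * attended

B, dim = 2, 64
layer = GatedCrossAttention(dim)
actions = torch.randn(B, 8, dim)
motion = torch.randn(B, 16, dim)
out = layer(actions, motion)
print(out.shape)  # torch.Size([2, 8, 64])
```

Because the gate starts at zero, the layer initially passes action tokens through unchanged, which is a common stabilization choice when injecting a new conditioning stream into a pretrained policy backbone.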

Related benchmarks

Task                 | Dataset                         | Result                                   | Rank
Robot Manipulation   | SimplerEnv WidowX               | Success Rate (Put Spoon on Towel): 79.1  | 58
Robotic Manipulation | LIBERO Spatial Object Goal Long | Overall Success Rate (Long): 96.7        | 31
Robot Manipulation   | LIBERO-Plus Zero-shot           | Camera Score: 64.5                       | 28
