
Think Proprioceptively: Embodied Visual Reasoning for VLA Manipulation

About

Vision-language-action (VLA) models typically inject proprioception only as a late conditioning signal, which prevents robot state from shaping instruction understanding and from influencing which visual tokens are attended to throughout the policy. We introduce ThinkProprio, which converts proprioception into a sequence of text tokens in the VLM embedding space and fuses them with the task instruction at the input. This early fusion lets embodied state participate in subsequent visual reasoning and token selection, biasing computation toward action-critical evidence while suppressing redundant visual tokens. In a systematic ablation over proprioception encoding, state entry point, and action-head conditioning, we find that text tokenization is more effective than learned projectors, and that retaining roughly 15% of visual tokens can match the performance of using the full token set. Across CALVIN, LIBERO, and real-world manipulation, ThinkProprio matches or improves on strong baselines while reducing end-to-end inference latency by more than 50%.
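As a rough illustration of the mechanism the abstract describes, the sketch below renders robot state as plain text so a VLM's own tokenizer can embed it, fuses those state tokens with the instruction before the visual tokens, and keeps only the top ~15% of visual tokens by a relevance score. All names (format_proprio, early_fuse, prune_visual_tokens), shapes, and the scoring scheme are hypothetical stand-ins under these assumptions, not the paper's actual implementation.

```python
import torch

TOP_FRAC = 0.15  # abstract: retaining ~15% of visual tokens can match the full set


def format_proprio(joint_pos, gripper):
    """Render robot state as plain text so the VLM's own tokenizer embeds it
    (the 'text tokenization' route, as opposed to a learned projector)."""
    joints = " ".join(f"{q:.2f}" for q in joint_pos)
    return f"joints: {joints} gripper: {gripper:.2f}"


def prune_visual_tokens(visual_emb, scores, frac=TOP_FRAC):
    """Keep the top-`frac` visual tokens ranked by `scores` (a stand-in for
    attention mass from the fused text tokens); drop the redundant rest."""
    k = max(1, int(frac * visual_emb.shape[1]))
    idx = scores.topk(k, dim=1).indices                        # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, visual_emb.shape[-1])
    return torch.gather(visual_emb, 1, idx)                    # (B, k, d)


def early_fuse(proprio_emb, instr_emb, visual_emb):
    """Concatenate state tokens with the instruction *at the input*, before
    visual reasoning, so embodied state can shape token selection."""
    return torch.cat([proprio_emb, instr_emb, visual_emb], dim=1)


# Toy example: batch 1, 6 proprio tokens, 16 instruction tokens,
# 256 visual patches, embedding dim 64.
d = 64
state_text = format_proprio([0.12, -0.48, 1.57, 0.00, 0.33, -0.90, 0.05], 0.80)
# In a real pipeline, state_text would pass through the VLM tokenizer and
# embedding table; here we fake the resulting embeddings with random tensors.
prop = torch.randn(1, 6, d)
instr = torch.randn(1, 16, d)
vis = torch.randn(1, 256, d)
scores = torch.randn(1, 256)  # stand-in for text-to-visual relevance scores

fused = early_fuse(prop, instr, prune_visual_tokens(vis, scores))
print(state_text)
print(fused.shape)  # torch.Size([1, 60, 64]): 6 + 16 + 38 tokens
```

In this toy setup the transformer backbone would run on 60 tokens instead of 278, which is where a latency reduction of the kind the abstract reports would come from.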

Fangyuan Wang, Peng Zhou, Jiaming Qi, Shipeng Lyu, David Navarro-Alarcon, Guodong Guo • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Robot manipulation | LIBERO | Goal Achievement: 98 | 494 |
| Long-horizon robot manipulation | CALVIN ABCD→D | Task 1 Completion Rate: 97.7 | 96 |
| Robot manipulation | 8 held-out robot manipulation tasks (test) | Success Rate: 91.3 | 12 |
| Long-horizon task success | CALVIN D→D long-horizon | Success Rate (LH-1): 99.5 | 11 |
| Language-conditioned imitation learning | LIBERO (test) | Spatial Score: 97.6 | 8 |
