ReViP: Reducing False Completion in Vision-Language-Action Models with Vision-Proprioception Rebalance
About
Vision-Language-Action (VLA) models have advanced robotic manipulation by combining vision, language, and proprioception to predict actions. However, previous methods fuse proprioceptive signals directly with VLM-encoded vision-language features, producing a state-dominant bias: the policy reports task completion despite visible execution failures (false completion). We attribute this to modality imbalance, where policies over-rely on internal state while underusing visual evidence. To address this, we present ReViP, a novel VLA framework with Vision-Proprioception Rebalance that strengthens visual grounding and robustness under perturbations. The key insight is to introduce auxiliary task-aware environment priors that adaptively modulate the coupling between semantic perception and proprioceptive dynamics. Specifically, we use an external VLM as a task-stage observer that extracts real-time, task-centric cues from visual observations; these cues drive a Vision-Proprioception Feature-wise Linear Modulation (FiLM) to enhance environmental awareness and reduce state-driven errors. Moreover, to evaluate false completion, we propose the first False-Completion Benchmark Suite, built on LIBERO with controlled perturbation settings such as Object-Drop. Extensive experiments show that ReViP reduces false-completion rates and improves success rates over strong VLA baselines on our suite, with gains extending to LIBERO, RoboTwin 2.0, and real-world evaluations.
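To make the rebalancing mechanism concrete, below is a minimal PyTorch sketch of a vision-proprioception FiLM block. The module name `VisionProprioFiLM`, the tensor dimensions, and the fusion-by-concatenation at the end are illustrative assumptions, not the authors' released implementation; the only element taken from the abstract is that VLM-derived task cues predict per-channel scale and shift terms that modulate proprioceptive features before they join the vision-language stream.

```python
import torch
import torch.nn as nn


class VisionProprioFiLM(nn.Module):
    """Hypothetical Vision-Proprioception FiLM block (sketch, not the paper's code).

    A task-cue embedding (e.g. from an external VLM acting as a task-stage
    observer) predicts FiLM parameters gamma and beta. These modulate the
    proprioceptive features channel-wise before fusion, so the task-aware
    environment prior can down-weight internal state when visual evidence
    should dominate.
    """

    def __init__(self, cue_dim: int, proprio_dim: int, fused_dim: int):
        super().__init__()
        # Project the VLM cue embedding to per-channel (gamma, beta) pairs.
        self.film = nn.Linear(cue_dim, 2 * proprio_dim)
        # Lift the modulated state into the vision-language token width.
        self.fuse = nn.Linear(proprio_dim, fused_dim)

    def forward(
        self,
        vl_tokens: torch.Tensor,   # (B, T, fused_dim) vision-language tokens
        proprio: torch.Tensor,     # (B, proprio_dim) robot state
        task_cue: torch.Tensor,    # (B, cue_dim) VLM task-stage embedding
    ) -> torch.Tensor:
        gamma, beta = self.film(task_cue).chunk(2, dim=-1)
        modulated = gamma * proprio + beta            # FiLM: y = gamma * x + beta
        state_token = self.fuse(modulated).unsqueeze(1)  # (B, 1, fused_dim)
        # Append the rebalanced state token to the vision-language sequence.
        return torch.cat([vl_tokens, state_token], dim=1)


# Usage with made-up sizes: 64 VL tokens of width 768, 14-D proprioception,
# and a 512-D task cue.
block = VisionProprioFiLM(cue_dim=512, proprio_dim=14, fused_dim=768)
fused = block(torch.randn(2, 64, 768), torch.randn(2, 14), torch.randn(2, 512))
print(fused.shape)  # torch.Size([2, 65, 768])
```

One design note on this sketch: routing the cue through multiplicative gamma terms (rather than simple concatenation) lets the prior scale proprioceptive channels toward zero, which is one plausible way a model could suppress state-driven false completions.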
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Robot Manipulation | LIBERO | Goal Achievement | 96.6 | 494 |
| Robotic Manipulation | Real-world Robotic Manipulation (test) | Success Rate | 60 | 7 |
| Robot Manipulation | False-Completion Benchmark Suite | Object-Drop: Butter SR | 50 | 6 |
| Robotic Manipulation | Dual-Arm RoboTwin Hard mode 2.0 | SR (Place Object Stand) | 20 | 4 |
| Robot Manipulation | Extended Real-World Evaluation Aggregate | Average Success Rate (SR) | 73 | 3 |