Any3D-VLA: Enhancing VLA Robustness via Diverse Point Clouds

About

Existing Vision-Language-Action (VLA) models typically take 2D images as visual input, which limits their spatial understanding in complex scenes. How can we incorporate 3D information to enhance VLA capabilities? We conduct a pilot study across different observation spaces and visual representations. The results show that explicitly lifting visual input into point clouds yields 3D representations that better complement the corresponding 2D representations. To address the challenges of (1) scarce 3D data and (2) the domain gap induced by cross-environment differences and depth-scale biases, we propose Any3D-VLA. It unifies simulator-generated, sensor-captured, and model-estimated point clouds within a single training pipeline, constructs diverse inputs from them, and learns domain-agnostic 3D representations that are fused with the corresponding 2D representations. Simulation and real-world experiments demonstrate Any3D-VLA's advantages in improving performance and mitigating the domain gap. Our project homepage is available at https://xianzhefan.github.io/Any3D-VLA.github.io.
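
The lifting step described above is, in the common case, standard pinhole back-projection: given a depth map from any of the three sources (a simulator render, a depth sensor, or a monocular depth estimator) plus the camera intrinsics, each pixel maps to a 3D point. The sketch below is a minimal illustration of that geometry, not code from the Any3D-VLA release; the function name and the example intrinsics are hypothetical.

```python
import numpy as np

def depth_to_point_cloud(depth, K):
    """Back-project a depth map of shape (H, W) into an (N, 3) point cloud
    using pinhole intrinsics K (3x3). Standard camera geometry; illustrative
    only, not the Any3D-VLA implementation."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids, each (H, W)
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]  # X = (u - cx) * Z / fx
    y = (v - K[1, 2]) * z / K[1, 1]  # Y = (v - cy) * Z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with missing/zero depth

# Example: lift one 480x640 depth frame with hypothetical intrinsics.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
depth = np.random.uniform(0.3, 2.0, size=(480, 640)).astype(np.float32)
points = depth_to_point_cloud(depth, K)
print(points.shape)  # (N, 3)
```

Point clouds produced this way from different sources differ in scale and noise (e.g., a monocular estimator's depths are only correct up to scale), which is exactly the depth-scale bias and cross-source variation the pipeline's domain-agnostic 3D representation is meant to absorb.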

Xianzhe Fan, Shengliang Deng, Xiaoyang Wu, Yuxiang Lu, Zhuoling Li, Mi Yan, Yujia Zhang, Zhizheng Zhang, He Wang, Hengshuang Zhao • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Robot Manipulation | LIBERO (test) | Average Success Rate | 68.5 | 142 |
| Long-horizon Robot Manipulation | CALVIN | Task Completion Rate (1) | 72.7 | 15 |
| Robot Manipulation | Real-world post-training dataset, Task 1: Move pink tulip to vase 1.0 (test) | Success Rate | 93.3 | 7 |
| Robot Manipulation | Real-world post-training dataset, Task 2: Move condiment cup into slot 1.0 (test) | Success Rate | 86.7 | 7 |
