
Robo3R: Enhancing Robotic Manipulation with Accurate Feed-Forward 3D Reconstruction

About

3D spatial perception is fundamental to generalizable robotic manipulation, yet obtaining reliable, high-quality 3D geometry remains challenging. Depth sensors suffer from noise and material sensitivity, while existing reconstruction models lack the precision and metric consistency required for physical interaction. We introduce Robo3R, a feed-forward, manipulation-ready 3D reconstruction model that predicts accurate, metric-scale scene geometry directly from RGB images and robot states in real time. Robo3R jointly infers scale-invariant local geometry and relative camera poses, which are unified into the scene representation in the canonical robot frame via a learned global similarity transformation. To meet the precision demands of manipulation, Robo3R employs a masked point head for sharp, fine-grained point clouds, and a keypoint-based Perspective-n-Point (PnP) formulation to refine camera extrinsics and global alignment. Trained on Robo3R-4M, a curated large-scale synthetic dataset with four million high-fidelity annotated frames, Robo3R consistently outperforms state-of-the-art reconstruction methods and depth sensors. Across downstream tasks including imitation learning, sim-to-real transfer, grasp synthesis, and collision-free motion planning, Robo3R delivers consistent performance gains, suggesting its promise as an alternative 3D sensing module for robotic manipulation.

Sizhe Yang, Linning Xu, Hao Li, Juncheng Mu, Jia Zeng, Dahua Lin, Jiangmiao Pang • 2026
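Two of the geometric operations named in the abstract, lifting scale-invariant local geometry into the canonical robot frame via a global similarity transform, and refining camera extrinsics from 2D-3D keypoint correspondences via PnP, are standard and easy to sketch. The snippet below is a minimal illustration using NumPy and OpenCV's solvePnP, not the authors' implementation; the function names (apply_similarity, refine_pose_pnp), array shapes, and all numeric values are hypothetical stand-ins.

```python
import numpy as np
import cv2

def apply_similarity(points, s, R, t):
    """Map an (N, 3) point map through a similarity transform: x' = s * R @ x + t."""
    return s * points @ R.T + t

def refine_pose_pnp(points_3d, points_2d, K):
    """Recover a world-to-camera pose from 3D-2D keypoint matches via PnP."""
    ok, rvec, tvec = cv2.solvePnP(
        points_3d.astype(np.float64),
        points_2d.astype(np.float64),
        K.astype(np.float64),
        None,                        # assume undistorted keypoints
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        raise RuntimeError("PnP did not converge")
    R, _ = cv2.Rodrigues(rvec)       # axis-angle -> rotation matrix
    return R, tvec.reshape(3)

# --- Toy demonstration with synthetic data (all values hypothetical) ---
rng = np.random.default_rng(0)

# Stand-in for a predicted scale-invariant point map (N, 3) in a local frame.
local_points = rng.uniform(-0.5, 0.5, size=(32, 3))

# Stand-in for the learned global similarity (scale, rotation, translation)
# that maps local geometry into the canonical robot frame.
s, R_g, t_g = 1.7, np.eye(3), np.array([0.1, 0.0, 0.4])
canonical_points = apply_similarity(local_points, s, R_g, t_g)

# Simulate keypoint observations: project the canonical points with a known
# ground-truth camera pose, then recover that pose with PnP.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R_true = cv2.Rodrigues(np.array([0.1, -0.2, 0.05]))[0]
t_true = np.array([0.0, 0.0, 2.5])
cam = canonical_points @ R_true.T + t_true   # world -> camera coordinates
pix = cam @ K.T
pix = pix[:, :2] / pix[:, 2:3]               # perspective projection to pixels

R_est, t_est = refine_pose_pnp(canonical_points, pix, K)
assert np.allclose(R_est, R_true, atol=1e-4)
assert np.allclose(t_est, t_true, atol=1e-4)
```

Here the similarity parameters and keypoints are hard-coded; in Robo3R both are predicted by the network, and the PnP step serves to tighten the camera extrinsics and global alignment against the predicted geometry.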

Related benchmarks

Task                            | Dataset                                   | Result             | Rank
Point Map Estimation            | Robo3R Real-world (Monocular) 1.0 (test)  | Point Error: 0.006 | 5
Point Map Estimation            | Robo3R Real-world (Binocular) 1.0 (test)  | Point Error: 0.005 | 5
Relative camera pose prediction | Robo3R real-world benchmark               | RTE: 0.014         | 5
