SLAM3R: Real-Time Dense Scene Reconstruction from Monocular RGB Videos
About
In this paper, we introduce SLAM3R, a novel and effective system for real-time, high-quality, dense 3D reconstruction using RGB videos. SLAM3R provides an end-to-end solution by seamlessly integrating local 3D reconstruction and global coordinate registration through feed-forward neural networks. Given an input video, the system first converts it into overlapping clips using a sliding window mechanism. Unlike traditional pose optimization-based methods, SLAM3R directly regresses 3D pointmaps from RGB images in each window and progressively aligns and deforms these local pointmaps to create a globally consistent scene reconstruction - all without explicitly solving any camera parameters. Experiments across datasets consistently show that SLAM3R achieves state-of-the-art reconstruction accuracy and completeness while maintaining real-time performance at 20+ FPS. Code available at: https://github.com/PKU-VCL-3DV/SLAM3R.
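The sliding-window step described above can be sketched in a few lines. This is a hypothetical illustration only: the function name, window size, and stride are placeholder values, not SLAM3R's actual configuration (see the linked repository for the real implementation).

```python
def sliding_window_clips(frames, window_size=11, stride=5):
    """Split a frame sequence into overlapping clips.

    Consecutive clips share window_size - stride frames, which gives
    the per-window local reconstructions common content to align on
    during global registration.
    """
    clips = [frames[s:s + window_size]
             for s in range(0, len(frames) - window_size + 1, stride)]
    # Cover any trailing frames with one final full-size window.
    if clips and len(frames) > (len(clips) - 1) * stride + window_size:
        clips.append(frames[-window_size:])
    return clips


# Example: a 30-frame video with 11-frame windows and stride 5
# yields overlapping clips starting at frames 0, 5, 10, 15, plus
# a final window covering the tail of the sequence.
clips = sliding_window_clips(list(range(30)))
```

Each clip is then reconstructed independently as local pointmaps and registered into the global scene, so the overlap between adjacent windows is what makes the progressive alignment possible.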
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Scene Reconstruction | 7-Scenes (test) | Accuracy | 2.4 | 27 |
| Camera pose estimation | 7-Scenes (test) | Chess Error | 6.2 | 16 |
| Structure-from-Motion | Tanks&Temples | Registration Score | 1 | 15 |
| 3D Reconstruction | Replica (test) | Avg Acc | 3.76 | 9 |
| Camera pose estimation | Replica 54 (full video) | Average Error | 6.61 | 9 |
| Camera pose estimation | Replica | ATE RMSE (cm) | 6.61 | 9 |
| Relative Pose Estimation | 7-Scenes | ATE RMSE (cm) | 8.41 | 7 |
| 3D Reconstruction | Tanks and Temples (Sampled Scenes) | Accuracy (cm) | 6.97 | 3 |
| 3D Reconstruction | ETH3D (Sampled Scenes) | Accuracy (cm) | 2.41 | 3 |
| 3D Reconstruction | ScanNet (Sampled Scenes) | Surface Distance Error (cm) | 5.37 | 3 |