SLAM3R: Real-Time Dense Scene Reconstruction from Monocular RGB Videos
About
In this paper, we introduce SLAM3R, a novel and effective system for real-time, high-quality, dense 3D reconstruction from monocular RGB videos. SLAM3R provides an end-to-end solution by seamlessly integrating local 3D reconstruction and global coordinate registration through feed-forward neural networks. Given an input video, the system first converts it into overlapping clips using a sliding-window mechanism. Unlike traditional pose-optimization-based methods, SLAM3R directly regresses 3D pointmaps from the RGB images in each window, then progressively aligns and deforms these local pointmaps to create a globally consistent scene reconstruction, all without explicitly solving any camera parameters. Experiments across datasets consistently show that SLAM3R achieves state-of-the-art reconstruction accuracy and completeness while maintaining real-time performance at 20+ FPS. Code available at: https://github.com/PKU-VCL-3DV/SLAM3R.
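The sliding-window step described above can be sketched in a few lines. This is a minimal illustration, not the official implementation: the function name, window size, and stride below are hypothetical placeholders (the paper's actual settings may differ), and only the frame-index bookkeeping is shown.

```python
def make_clips(num_frames, window=11, stride=5):
    """Split frame indices 0..num_frames-1 into overlapping clips.

    Each clip would be fed to the local reconstruction network, and
    the overlap between consecutive clips is what allows the local
    pointmaps to be aligned into one global scene.
    NOTE: window/stride values here are illustrative assumptions.
    """
    clips = []
    start = 0
    while start + window <= num_frames:
        clips.append(list(range(start, start + window)))
        start += stride
    # Cover the tail so the final frames are not dropped.
    if not clips or clips[-1][-1] != num_frames - 1:
        clips.append(list(range(max(0, num_frames - window), num_frames)))
    return clips
```

Because `stride < window`, consecutive clips share frames, which is the property the global registration stage relies on when stitching local pointmaps together.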
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Pose Estimation | ETH3D | AUC @ Threshold 30 | 0.0146 | 41 |
| 3D Scene Reconstruction | 7-Scenes (test) | Accuracy | 2.4 | 27 |
| Camera pose estimation | Oxford Spires (sparse setting) | AUC@15 | 1.67 | 18 |
| Camera pose estimation | 7Scenes (test) | Chess Error | 6.2 | 16 |
| Structure-from-Motion | Tanks&Temples | Registration Score | 1 | 15 |
| Camera pose estimation | Replica | ATE RMSE (cm) | 6.61 | 15 |
| Relative Pose Estimation | 7 Scenes | -- | -- | 12 |
| 3D Reconstruction | Replica (test) | Avg Acc | 3.76 | 9 |
| Pose and trajectory estimation | 7 Scenes | AUC | 34.79 | 9 |
| Camera pose estimation | Replica 54 (full video) | Average Error | 6.61 | 9 |