GlORIE-SLAM: Globally Optimized RGB-only Implicit Encoding Point Cloud SLAM
About
Recent advancements in RGB-only dense Simultaneous Localization and Mapping (SLAM) have predominantly utilized grid-based neural implicit encodings and/or struggle to efficiently realize global map and pose consistency. To this end, we propose an efficient RGB-only dense SLAM system that uses a flexible neural point cloud scene representation which adapts to keyframe pose and depth updates without requiring costly backpropagation. Another critical challenge of RGB-only SLAM is the lack of geometric priors. To alleviate this issue, we introduce, with the aid of a monocular depth estimator, a novel DSPO (Disparity, Scale and Pose Optimization) layer for bundle adjustment which optimizes the poses and depths of keyframes together with the scale of the monocular depth. Finally, our system benefits from loop closure and online global bundle adjustment, and performs better than or competitively with existing dense neural RGB SLAM methods in tracking, mapping, and rendering accuracy on the Replica, TUM-RGBD, and ScanNet datasets. The source code is available at https://github.com/zhangganlin/GlORIE-SLAM.
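
As context for the DSPO layer mentioned in the abstract: one of its ingredients is aligning the monocular depth prior to the depths recovered by bundle adjustment via a per-keyframe scale (and shift). The PyTorch sketch below illustrates only that alignment sub-step under simplifying assumptions; the function name `fit_scale_shift`, the closed-form least-squares formulation, and the toy data are illustrative assumptions, not the authors' implementation, which interleaves this alignment with pose and disparity optimization inside the bundle adjustment factor graph.

```python
import torch

def fit_scale_shift(depth_mono: torch.Tensor,
                    depth_est: torch.Tensor,
                    valid: torch.Tensor):
    """Least-squares fit of (s, b) such that s * depth_mono + b ~= depth_est,
    using only well-constrained (valid) pixels."""
    m = depth_mono[valid].flatten()
    d = depth_est[valid].flatten()
    # Design matrix for the linear model s * m + b = d.
    A = torch.stack([m, torch.ones_like(m)], dim=1)  # [N, 2]
    sol = torch.linalg.lstsq(A, d.unsqueeze(1)).solution.squeeze()
    return sol[0], sol[1]

# Toy check: a synthetic depth map whose monocular prior is off by
# scale 0.5 and shift 0.3; the fit should recover those values.
torch.manual_seed(0)
depth_est = 1.0 + 3.0 * torch.rand(64, 64)
depth_mono = (depth_est - 0.3) / 0.5
valid = depth_est > 0
s, b = fit_scale_shift(depth_mono, depth_est, valid)
print(f"scale ~ {s.item():.3f}, shift ~ {b.item():.3f}")  # ~0.500, ~0.300
```

A closed-form fit is shown here for brevity; in a full system the scale would instead enter the bundle adjustment objective and be re-estimated jointly with poses and depths as the map is refined.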
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Appearance reconstruction | Waymo (8 scenes) | PSNR | 27.35 | 54 |
| Visual Odometry | TUM-RGBD freiburg1/desk2 | Error | 8.6 | 37 |
| Monocular Visual Odometry | VIVID (mean over sequences) | ATE RMSE | 0.37 | 20 |
| Monocular Visual Odometry | VIVID in_rob_local | ATE RMSE | 0.06 | 18 |
| Novel View Synthesis | SeaThru-NeRF Panama | PSNR | 18.79 | 18 |
| Novel View Synthesis | SeaThru-NeRF (J.G.-RedSea) | PSNR | 16.11 | 18 |
| Monocular Visual Odometry | VIVID in_rob_global | ATE RMSE | 0.08 | 17 |
| Monocular Visual Odometry | VIVID in_unst_local | ATE RMSE | 0.04 | 17 |
| Novel View Synthesis | SeaThru-NeRF Curasao | PSNR | 21.79 | 17 |
| Monocular Visual Odometry | VIVID in_rob_dark | ATE RMSE | 0.07 | 16 |