PE3R: Perception-Efficient 3D Reconstruction
About
Recent advances in 2D-to-3D perception have enabled the recovery of 3D scene semantics from unposed images. However, prevailing methods often suffer from limited generalization, reliance on per-scene optimization, and semantic inconsistencies across viewpoints. To address these limitations, we introduce PE3R, a tuning-free framework for efficient and generalizable 3D semantic reconstruction. By integrating multi-view geometry with 2D semantic priors in a feed-forward pipeline, PE3R achieves zero-shot generalization across diverse scenes and object categories without any scene-specific fine-tuning. Extensive evaluations on open-vocabulary segmentation and multi-view depth estimation show that PE3R not only delivers up to 9$\times$ faster inference but also sets a new state of the art on both semantic and geometric metrics. Our approach paves the way for scalable, language-driven 3D scene understanding. Code is available at github.com/hujiecpp/PE3R.
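To make the "language-driven" part of the pipeline concrete, the sketch below shows the generic open-vocabulary labeling step that such feed-forward systems typically end with: per-point semantic features (lifted from 2D priors into the fused point map) are matched against text-query embeddings by cosine similarity. This is a minimal illustration with synthetic arrays, not the PE3R API; all names, shapes, and the random stand-in features are assumptions.

```python
import numpy as np

# Hypothetical shapes: N fused 3D points, D-dim semantic features, K text queries.
N, D, K = 10_000, 512, 3
rng = np.random.default_rng(0)

# Stand-ins for what a feed-forward pipeline would produce from unposed images:
# fused point positions and per-point semantic features lifted from 2D priors.
points = rng.normal(size=(N, 3)).astype(np.float32)       # fused point map (unused here)
point_feats = rng.normal(size=(N, D)).astype(np.float32)  # per-point semantic features
text_feats = rng.normal(size=(K, D)).astype(np.float32)   # e.g. "chair", "table", "floor"

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

# Open-vocabulary assignment: cosine similarity between point and text features,
# then argmax over the text queries to get a per-point class index.
sim = l2_normalize(point_feats) @ l2_normalize(text_feats).T  # (N, K)
labels = sim.argmax(axis=1)
print("points per query:", np.bincount(labels, minlength=K))
```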
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-view Depth Estimation | ETH3D | Relative Error (rel) | 1.7 | 21 |
| Instance Segmentation | ScanNet | mAP@0.5 | 32.6 | 20 |
| Multi-view Depth Estimation | ScanNet | Relative Error (rel) | 4.3 | 13 |
| Multi-view Depth Estimation | Tanks and Temples (T&T) | Relative Error (rel) | 2 | 13 |
| Multi-view Depth Estimation | DTU | Relative Error (rel) | 1.1 | 13 |
| 3D Semantic Segmentation | ScanNet (val) | mIoU | 10.7 | 11 |
| Novel View Synthesis | Selected reconstruction scenes | PSNR | 14.68 | 10 |
| 3D Semantic Segmentation | ScanNet200 (val) | mIoU | 2.5 | 9 |
| 2D-to-3D Open-Vocabulary Segmentation | Mip-NeRF360 | mIoU | 89.51 | 8 |
| 2D-to-3D Open-Vocabulary Segmentation | Replica | mIoU | 65.31 | 8 |
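For reference, the two metrics that dominate the table are the mean absolute relative depth error (rel) and mean intersection-over-union (mIoU). The snippet below is a generic sketch of how these are typically computed; the exact masking and class-averaging conventions are defined by each benchmark's evaluation protocol, and the tiny example uses synthetic data, not benchmark results.

```python
import numpy as np

def relative_depth_error(pred, gt, valid=None):
    """Mean absolute relative error over valid pixels, reported in percent."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    mask = (gt > 0) if valid is None else (np.asarray(valid, bool) & (gt > 0))
    return 100.0 * np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask])

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Mean IoU over classes present in the ground truth (one common convention)."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    valid = gt != ignore_index
    pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        gt_c, pred_c = gt == c, pred == c
        if gt_c.sum() == 0:
            continue  # skip classes absent from the ground truth
        inter = np.logical_and(gt_c, pred_c).sum()
        union = np.logical_or(gt_c, pred_c).sum()
        ious.append(inter / union)
    return 100.0 * float(np.mean(ious))

# Toy check: a uniform 2% depth over-estimate gives rel ≈ 2.0 (%).
gt_depth = np.full((4, 4), 2.0)
pred_depth = gt_depth * 1.02
print(round(relative_depth_error(pred_depth, gt_depth), 2))
```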