PE3R: Perception-Efficient 3D Reconstruction

About

Recent advances in 2D-to-3D perception have enabled the recovery of 3D scene semantics from unposed images. However, prevailing methods often suffer from limited generalization, reliance on per-scene optimization, and semantic inconsistencies across viewpoints. To address these limitations, we introduce PE3R, a tuning-free framework for efficient and generalizable 3D semantic reconstruction. By integrating multi-view geometry with 2D semantic priors in a feed-forward pipeline, PE3R achieves zero-shot generalization across diverse scenes and object categories without any scene-specific fine-tuning. Extensive evaluations on open-vocabulary segmentation and multi-view depth estimation show that PE3R not only achieves up to 9× faster inference but also sets new state-of-the-art accuracy in both semantic and geometric metrics. Our approach paves the way for scalable, language-driven 3D scene understanding. Code is available at github.com/hujiecpp/PE3R.
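The abstract describes open-vocabulary segmentation driven by 2D semantic priors. A minimal sketch of the core query step such a pipeline typically uses — matching per-point semantic features against text embeddings by cosine similarity — might look as follows. This is an illustration of the general technique, not PE3R's actual code; the function name, shapes, and the assumption that features are CLIP-style and L2-normalized are ours.

```python
import numpy as np

def open_vocab_segment(point_feats, text_embeds, labels):
    """Assign each 3D point the label whose text embedding it matches best.

    point_feats: (N, D) per-point semantic features, L2-normalized.
    text_embeds: (C, D) embeddings of the label prompts, L2-normalized.
    labels:      list of C label strings.
    """
    sims = point_feats @ text_embeds.T  # (N, C) cosine similarities
    idx = sims.argmax(axis=1)           # best-matching label per point
    return [labels[i] for i in idx]
```

Because the matching is just a dot product against arbitrary text prompts, the vocabulary is open: new categories require only new text embeddings, no retraining.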

Jie Hu, Shizun Wang, Xinchao Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multi-view Depth Estimation | ETH3D | Relative Error (rel) | 1.7 | 21 |
| Instance Segmentation | ScanNet | mAP@0.5 | 32.6 | 20 |
| Multi-view Depth Estimation | ScanNet | Relative Error (rel) | 4.3 | 13 |
| Multi-view Depth Estimation | Tanks and Temples (T&T) | Relative Error (rel) | 2 | 13 |
| Multi-view Depth Estimation | DTU | Relative Error (rel) | 1.1 | 13 |
| 3D Semantic Segmentation | ScanNet (val) | mIoU | 10.7 | 11 |
| Novel View Synthesis | Selected reconstruction scenes | PSNR | 14.68 | 10 |
| 3D Semantic Segmentation | ScanNet200 (val) | mIoU | 2.5 | 9 |
| 2D-to-3D Open-Vocabulary Segmentation | Mip-NeRF360 | mIoU | 89.51 | 8 |
| 2D-to-3D Open-Vocabulary Segmentation | Replica | mIoU | 65.31 | 8 |

Showing 10 of 15 rows.
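Several rows above report the Relative Error (rel) metric for multi-view depth estimation. As a point of reference, this is conventionally the mean absolute relative depth error over valid pixels; a sketch under that standard definition (the percent scaling matches how values like 1.7 are typically reported, though the benchmark's exact masking conventions are an assumption):

```python
import numpy as np

def relative_error(pred, gt):
    """Mean absolute relative depth error, in percent.

    rel = 100 * mean(|pred - gt| / gt), averaged over valid pixels (gt > 0).
    """
    pred, gt = np.asarray(pred, dtype=float), np.asarray(gt, dtype=float)
    mask = gt > 0  # ignore pixels without ground-truth depth
    return 100.0 * np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask])
```

For example, predicting 1.1 m where the ground truth is 1.0 m gives a rel of 10%.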
