PAGCNet: A Pose-Aware and Geometry Constrained Framework for Panoramic Depth Estimation
About
Explicitly modeling room background depth as a geometric constraint has proven effective for panoramic depth estimation. However, reconstructing this background depth for regular enclosed regions of a complex indoor scene without external measurements remains an open challenge. To address this, we propose a pose-aware, geometry-constrained framework for panoramic depth estimation. The framework first employs multiple task-specific decoders to jointly estimate room layout, camera pose, depth, and region segmentation from an input panoramic image. A pose-aware background depth resolving (PA-BDR) component then resolves the camera pose from the task decoders' predictions, uses it to compute the background depth of regular enclosed regions, and treats this background depth as a strong geometric prior. From the output of the region segmentation decoder, a fusion mask generation (FMG) component produces a fusion weight map that guides where, and to what extent, the geometry-constrained background depth should correct the depth decoder's prediction. Finally, an adaptive fusion component integrates the refined background depth with the initial depth prediction, guided by the fusion weight map. Extensive experiments on the Matterport3D, Structured3D, and Replica datasets demonstrate that our method significantly outperforms current open-source methods. Code is available at https://github.com/emiyaning/PAGCNet.
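To make the pipeline above concrete, the sketch below illustrates two of its steps under simplifying assumptions: (1) the geometric background depth of a flat floor seen from an equirectangular panorama at a known camera height (the full PA-BDR component additionally handles walls, the ceiling, and the estimated camera pose), and (2) the per-pixel adaptive fusion of that background prior with the depth decoder's prediction using the FMG weight map. All function names and shapes here are illustrative, not the repository's API.

```python
import numpy as np

def floor_background_depth(height, width, camera_height):
    """Hedged sketch: radial depth of a flat floor in an equirectangular
    panorama, given the camera's height above the floor.

    For a ray with downward elevation angle phi, the floor is hit at
    radial distance camera_height / sin(phi); upward rays never hit it.
    """
    # Elevation angle per image row: +pi/2 at the top, -pi/2 at the bottom.
    v = (np.arange(height) + 0.5) / height        # 0..1, top to bottom
    elevation = (0.5 - v) * np.pi                 # +pi/2 .. -pi/2
    depth_col = np.full(height, np.inf)           # inf where floor not visible
    below = elevation < 0                         # rays pointing downward
    depth_col[below] = camera_height / np.sin(-elevation[below])
    # Depth depends only on the row, so replicate the column across width.
    return np.tile(depth_col[:, None], (1, width))

def adaptive_fusion(depth_pred, depth_bg, weight):
    """Per-pixel blend: weight=1 trusts the geometric background depth,
    weight=0 keeps the depth decoder's prediction unchanged."""
    return weight * depth_bg + (1.0 - weight) * depth_pred

if __name__ == "__main__":
    bg = floor_background_depth(height=64, width=128, camera_height=1.6)
    pred = np.full((64, 128), 3.0)                # dummy decoder output
    w = np.where(np.isinf(bg), 0.0, 0.5)          # only fuse where floor is seen
    fused = adaptive_fusion(pred, bg, w)
```

In the actual framework the fusion weight map comes from the FMG component (driven by the region segmentation), not from a hand-written mask as in this toy usage.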
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Depth Estimation | Matterport3D | δ1 Accuracy 93.99 | 35 |
| Depth Estimation | Structured3D (val) | δ1 Accuracy 96.79 | 9 |