Elite360D: Towards Efficient 360 Depth Estimation via Semantic- and Distance-Aware Bi-Projection Fusion
About
360 depth estimation has recently received great attention for 3D reconstruction owing to its omnidirectional field of view (FoV). Recent approaches predominantly focus on cross-projection fusion with geometry-based re-projection: they fuse 360 images in equirectangular projection (ERP) with another projection type, e.g., cubemap projection, to estimate depth in the ERP format. However, these methods suffer from 1) limited local receptive fields, making it hard to capture large-FoV scenes, and 2) prohibitive computational cost caused by complex cross-projection fusion module designs. In this paper, we propose Elite360D, a novel framework that takes as input the ERP image and an icosahedron projection (ICOSAP) point set, which is undistorted and spatially continuous. Elite360D excels at learning a representation from a local-with-global perspective. It comprises a flexible ERP image encoder, an ICOSAP point encoder, and a Bi-projection Bi-attention Fusion (B2F) module (~1M parameters in total). Specifically, the ERP image encoder can adopt various perspective-image-trained backbones (e.g., ResNet, Transformer) to extract local features, while the point encoder extracts global features from the ICOSAP. The B2F module then captures the semantic- and distance-aware dependencies between each pixel of the ERP feature map and the entire ICOSAP feature set. Without a task-specific backbone design or a noticeable increase in computational cost, Elite360D outperforms the prior arts on several benchmark datasets.
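To make the B2F idea concrete, below is a minimal NumPy sketch of a bi-attention fusion step. It is an illustrative reconstruction, not the paper's implementation: the function name `b2f_sketch`, the additive fusion of the two branches, and the negative-Euclidean-distance attention score are all assumptions. Each ERP pixel (query) attends over the whole ICOSAP point set twice — once by feature similarity (semantic-aware) and once by spatial proximity on the unit sphere (distance-aware).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def b2f_sketch(erp_feat, icosap_feat, erp_xyz, icosap_xyz):
    """Hypothetical sketch of bi-projection bi-attention fusion.

    erp_feat:    (P, C) per-pixel ERP features (P = H*W pixels)
    icosap_feat: (N, C) per-point ICOSAP features
    erp_xyz:     (P, 3) unit-sphere coordinates of ERP pixels
    icosap_xyz:  (N, 3) unit-sphere coordinates of ICOSAP points
    """
    C = erp_feat.shape[1]

    # Semantic-aware branch: scaled dot-product attention from each
    # ERP pixel to all ICOSAP points, weighted by feature similarity.
    sem_attn = softmax(erp_feat @ icosap_feat.T / np.sqrt(C))
    sem_out = sem_attn @ icosap_feat                      # (P, C)

    # Distance-aware branch: attention favours ICOSAP points that lie
    # spatially close to the pixel's direction on the unit sphere
    # (assumed score: negative Euclidean distance).
    dist = np.linalg.norm(
        erp_xyz[:, None, :] - icosap_xyz[None, :, :], axis=-1)  # (P, N)
    dist_out = softmax(-dist) @ icosap_feat               # (P, C)

    # Fuse both branches back into the ERP representation.
    return erp_feat + sem_out + dist_out
```

Because every pixel attends to the full (small) ICOSAP point set rather than to another dense image grid, this kind of fusion stays cheap, which is consistent with the ~1M-parameter budget mentioned above.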
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Monocular Depth Estimation | Stanford2D3D (test) | δ1 Accuracy | 88.72 | 71 |
| Monocular Depth Estimation | Matterport3D (test) | δ1 Accuracy (< 1.25) | 88.15 | 48 |
| Monocular 360 Depth Estimation | Matterport3D official (test) | δ1 Accuracy (< 1.25) | 89.9 | 20 |
| Depth Estimation | Structure3D (test) | AbsRel | 0.148 | 18 |
| Depth Estimation | Stanford2D3D sphere rank 7 256x512 (test) | MAE | 0.169 | 7 |
| Semantic segmentation | Stanford2D3D sphere rank 7 256x512 (test) | Accuracy | 87.4 | 7 |
| Depth Estimation | Structured3D sphere rank 7 256x512 (test) | MAE | 0.147 | 5 |
| Semantic segmentation | Structured3D sphere rank 7 256x512 (test) | Accuracy | 95.3 | 5 |
| Depth Estimation | Stanford2D3D 512x1024 (rank 8) | MAE | 0.181 | 3 |