NAIMA: Semantics Aware RGB Guided Depth Super-Resolution
About
Guided depth super-resolution (GDSR) is a multi-modal approach to depth map super-resolution that relies on a low-resolution depth map and a high-resolution RGB image to restore finer structural details. However, color and texture cues in the RGB image that falsely suggest depth discontinuities often introduce artifacts and blurred depth boundaries in the generated depth map. We propose a solution that introduces global contextual semantic priors generated from pretrained vision transformer token embeddings. Our approach to distilling semantic knowledge from pretrained token embeddings is motivated by their demonstrated effectiveness in the related task of monocular depth estimation. We introduce a Guided Token Attention (GTA) module, which iteratively aligns encoded RGB spatial features with depth encodings, using cross-attention to selectively inject global semantic context extracted from different layers of a pretrained vision transformer. Additionally, we present an architecture called Neural Attention for Implicit Multi-token Alignment (NAIMA), which integrates DINOv2 with GTA blocks for semantics-aware GDSR. Our proposed architecture, with its ability to distill semantic knowledge, achieves significant improvements over existing methods across multiple scaling factors and datasets.
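The core idea of the GTA module, as described above, is cross-attention in which depth features act as queries over semantic token embeddings from a pretrained vision transformer. The following NumPy sketch is purely illustrative, assuming a single attention step with randomly initialized projections (all shapes, names, and the residual injection are assumptions, not the paper's actual implementation):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Standard scaled dot-product attention.
    q: (Nq, d), k: (Nk, d), v: (Nk, dv) -> context (Nq, dv), weights (Nq, Nk)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

def guided_token_attention_step(depth_feats, vit_tokens, d_model=64, seed=0):
    """One hypothetical GTA-style step: flattened depth features query
    ViT token embeddings (e.g. DINOv2), and the attended semantic context
    is injected back into the depth stream via a residual connection.
    depth_feats: (H*W, C), vit_tokens: (T, D)."""
    rng = np.random.default_rng(seed)
    c, d = depth_feats.shape[-1], vit_tokens.shape[-1]
    # Randomly initialized projections stand in for learned ones.
    w_q = rng.standard_normal((c, d_model)) / np.sqrt(c)
    w_k = rng.standard_normal((d, d_model)) / np.sqrt(d)
    w_v = rng.standard_normal((d, c)) / np.sqrt(d)
    ctx, attn = scaled_dot_product_attention(
        depth_feats @ w_q, vit_tokens @ w_k, vit_tokens @ w_v
    )
    return depth_feats + ctx, attn  # residual injection of semantic context
```

In the full architecture this step would be repeated across GTA blocks, drawing tokens from different transformer layers; here a single step just shows how global token context can be selectively mixed into per-pixel depth features.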
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Guided Depth Super-resolution | NYU V2 | RMSE (x4) | 1.1 | 12 |
| Guided Depth Super-resolution | RGBDD | RMSE (x4) | 1.07 | 11 |
| Guided Depth Super-resolution | Middlebury | RMSE (x4) | 1.03 | 11 |
| Guided Depth Super-resolution | Lu | RMSE (x4) | 0.84 | 11 |
| Guided Depth Super-resolution | TOFDSR 2024 (val) | RMSE | 2.09 | 9 |