NAIMA: Semantics Aware RGB Guided Depth Super-Resolution

About

Guided depth super-resolution (GDSR) is a multi-modal approach to depth map super-resolution that uses a low-resolution depth map together with a high-resolution RGB image to restore fine structural detail. However, color and texture cues in the RGB image can falsely suggest depth discontinuities, leading to artifacts and blurred depth boundaries in the generated depth map. We propose a solution that introduces global contextual semantic priors generated from pretrained vision transformer token embeddings. Our approach to distilling semantic knowledge from pretrained token embeddings is motivated by their demonstrated effectiveness in the related task of monocular depth estimation. We introduce a Guided Token Attention (GTA) module, which iteratively aligns encoded RGB spatial features with depth encodings, using cross-attention to selectively inject global semantic context extracted from different layers of a pretrained vision transformer. We further present Neural Attention for Implicit Multi-token Alignment (NAIMA), an architecture that integrates DINOv2 with GTA blocks for semantics-aware GDSR. With its ability to distill semantic knowledge, the proposed architecture achieves significant improvements over existing methods across multiple scaling factors and datasets.
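The abstract does not include an implementation, but the core mechanism it describes (depth features as queries cross-attending over vision-transformer token embeddings, with the result injected residually) can be illustrated with a minimal NumPy sketch. All shapes, names, and the single-head, unprojected formulation here are illustrative assumptions, not the paper's actual GTA module:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention.

    queries: (Nq, d) e.g. flattened depth feature map
    keys/values: (Nt, d) e.g. ViT token embeddings from one layer
    Returns: (Nq, d) semantic context gathered per depth location.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)      # (Nq, Nt) affinities
    weights = softmax(scores, axis=-1)          # rows sum to 1
    return weights @ values                     # (Nq, d)

# Toy example (all sizes hypothetical): an 8x8 depth feature map
# attends over 257 tokens (CLS + 16x16 patches) of width 32.
rng = np.random.default_rng(0)
depth_feats = rng.standard_normal((64, 32))
vit_tokens = rng.standard_normal((257, 32))

# Residual injection of global semantic context into the depth stream.
fused = depth_feats + cross_attention(depth_feats, vit_tokens, vit_tokens)
```

In a full model this step would be repeated per GTA block, pulling tokens from different pretrained transformer layers and using learned query/key/value projections rather than raw features.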

Tayyab Nasir, Daochang Liu, Ajmal Mian • 2026

Related benchmarks

Task                           Dataset            Metric     Result  Rank
Guided Depth Super-resolution  NYU V2             RMSE (x4)  1.1     12
Guided Depth Super-resolution  RGBDD              RMSE (x4)  1.07    11
Guided Depth Super-resolution  Middlebury         RMSE (x4)  1.03    11
Guided Depth Super-resolution  Lu                 RMSE (x4)  0.84    11
Guided Depth Super-resolution  TOFDSR 2024 (val)  RMSE       2.09    9
