
Efficient RGB-D Scene Understanding via Multi-task Adaptive Learning and Cross-dimensional Feature Guidance

About

Scene understanding plays a critical role in enabling intelligence and autonomy in robotic systems. Traditional approaches often face challenges, including occlusions, ambiguous boundaries, and the inability to adapt attention based on task-specific requirements and sample variations. To address these limitations, this paper presents an efficient RGB-D scene understanding model that performs a range of tasks, including semantic segmentation, instance segmentation, orientation estimation, panoptic segmentation, and scene classification. The proposed model incorporates an enhanced fusion encoder, which effectively leverages redundant information from both RGB and depth inputs. For semantic segmentation, we introduce normalized focus channel layers and a context feature interaction layer, designed to mitigate issues such as shallow feature misguidance and insufficient local-global feature representation. The instance segmentation task benefits from a non-bottleneck 1D structure, which achieves superior contour representation with fewer parameters. Additionally, we propose a multi-task adaptive loss function that dynamically adjusts the learning strategy for different tasks based on scene variations. Extensive experiments on the NYUv2, SUN RGB-D, and Cityscapes datasets demonstrate that our approach outperforms existing methods in both segmentation accuracy and processing speed.
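The abstract describes a multi-task adaptive loss that dynamically re-weights the individual task losses. The paper's exact formulation is not given here; a common, generic way to realize such adaptive weighting is homoscedastic-uncertainty weighting, where each task carries a learnable log-variance `s_i` that scales its loss by `exp(-s_i)` and adds `s_i` as a regularizer. The sketch below illustrates only that general idea, with hypothetical function and argument names, and is not the authors' implementation:

```python
import math

def adaptive_multitask_loss(task_losses, log_vars):
    """Combine per-task losses with adaptive uncertainty weights (generic sketch).

    Each task i is weighted by exp(-log_vars[i]); the additive log_vars[i]
    term keeps the weights from collapsing to zero. In training, the
    log_vars would be learnable parameters updated alongside the network,
    so noisier or harder tasks are automatically down-weighted.
    """
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        total += math.exp(-s) * loss + s
    return total

# Example: two tasks with equal (zero) log-variance reduce to a plain sum.
combined = adaptive_multitask_loss([1.0, 2.0], [0.0, 0.0])  # -> 3.0
```

Raising a task's log-variance shrinks its effective weight, which is one way a training loop can shift effort between, say, semantic and instance segmentation as scene difficulty varies.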

Guodong Sun, Junjie Liu, Gaoyang Zhang, Bo Wu, Yang Zhang • 2026

Related benchmarks

Task                  | Dataset       | Result               | Rank
Semantic segmentation | Cityscapes    | mIoU 65.11           | 658
Semantic segmentation | NYU v2 (test) | mIoU 49.82           | 282
Semantic segmentation | SUN RGB-D     | mIoU 45.56           | 65
Instance segmentation | NYU v2        | Instance PQ 59.9     | 5
Semantic segmentation | NYU v2        | Semantic mIoU 49.82  | 5
