
SHED Light on Segmentation for Dense Prediction

About

Dense prediction infers per-pixel values from a single image and is fundamental to 3D perception and robotics. Although real-world scenes exhibit strong structure, existing methods treat the task as independent pixel-wise predictions, often resulting in structural inconsistencies. We propose SHED, a novel encoder-decoder architecture that enforces geometric priors explicitly by incorporating segmentation into dense prediction. Through bidirectional hierarchical reasoning, segment tokens are hierarchically pooled in the encoder and unpooled in the decoder, reversing the hierarchy. The model is supervised only at the final output, allowing the segment hierarchy to emerge without explicit segmentation supervision. SHED improves depth boundary sharpness and segment coherence, while demonstrating strong cross-domain generalization from synthetic to real-world environments. Its hierarchy-aware decoder better captures global 3D scene layouts, leading to improved semantic segmentation performance. Moreover, SHED enhances 3D reconstruction quality and reveals interpretable part-level structures that are often missed by conventional pixel-wise methods.
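The pool/unpool idea above can be illustrated with a minimal sketch: per-pixel features are averaged into segment tokens (one encoder level) and then broadcast back to pixels (the matching decoder level). This is an illustrative toy, not the paper's implementation; the hard segment assignment, the function names, and the single-level structure are all assumptions, whereas SHED's segment hierarchy is learned and multi-level.

```python
import numpy as np

def pool_to_segments(features, assignment, num_segments):
    """Average-pool per-pixel features into segment tokens (one encoder level)."""
    # features: (N, D) pixel features; assignment: (N,) segment id per pixel.
    tokens = np.zeros((num_segments, features.shape[1]))
    counts = np.zeros(num_segments)
    np.add.at(tokens, assignment, features)  # sum features per segment
    np.add.at(counts, assignment, 1)         # count pixels per segment
    return tokens / np.maximum(counts, 1)[:, None]

def unpool_to_pixels(tokens, assignment):
    """Broadcast segment tokens back to their pixels (matching decoder level)."""
    return tokens[assignment]

# Toy example: 6 pixels with 2-D features, grouped into 2 segments.
feats = np.array([[1., 0.], [1., 2.], [1., 4.],
                  [5., 0.], [5., 2.], [5., 4.]])
assign = np.array([0, 0, 0, 1, 1, 1])
tokens = pool_to_segments(feats, assign, num_segments=2)  # (2, 2) segment tokens
pixels = unpool_to_pixels(tokens, assign)                 # (6, 2) per-pixel output
```

Averaging within a segment is what makes the output piecewise-coherent: every pixel in a segment receives the same token, which is the property the abstract credits for sharper boundaries and segment coherence.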

Seung Hyun Lee, Sangwoo Mo, Stella X. Yu • 2026

Related benchmarks

Task                        Dataset        Metric    Result  Rank
Semantic Segmentation       ADE20K         mIoU      44.5    936
Monocular Depth Estimation  NYU v2 (test)  Abs Rel   0.123   257
Depth Estimation            NYU Depth V2   --        --      177
Depth Estimation            KITTI          Abs Rel   0.272   92
Depth Estimation            SUN-RGBD       Abs Rel   7.16    2
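For reference, the Abs Rel (absolute relative error) values in the depth rows are the mean of |prediction − ground truth| / ground truth over valid pixels. A minimal sketch follows; masking out zero/invalid depths is a common convention and an assumption here, not something this page specifies.

```python
import numpy as np

def abs_rel(pred, gt, eps=1e-8):
    """Mean absolute relative depth error: mean(|pred - gt| / gt) over valid pixels."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    valid = gt > eps  # assumption: ignore pixels with no ground-truth depth
    return float(np.mean(np.abs(pred[valid] - gt[valid]) / gt[valid]))

# Toy example: predicted vs. ground-truth depths in meters.
print(abs_rel([2.0, 4.5], [2.0, 5.0]))  # 0.05
```

Lower is better, which is why the SUN-RGBD rank of 2 coexists with a numerically larger value only because that benchmark's scale differs from the others.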
