
Pri3D: Can 3D Priors Help 2D Representation Learning?

About

Recent advances in 3D perception have shown impressive progress in understanding geometric structures of 3D shapes and even scenes. Inspired by these advances in geometric understanding, we aim to imbue image-based perception with representations learned under geometric constraints. We introduce an approach to learn view-invariant, geometry-aware representations for network pre-training, based on multi-view RGB-D data, that can then be effectively transferred to downstream 2D tasks. We propose to employ contrastive learning under both multi-view image constraints and image-geometry constraints to encode 3D priors into learned 2D representations. This results not only in improvement over 2D-only representation learning on the image-based tasks of semantic segmentation, instance segmentation, and object detection on real-world indoor datasets, but moreover provides significant improvement in the low-data regime. We show a significant improvement of 6.0% on semantic segmentation on full data as well as 11.9% on 20% data against baselines on ScanNet.
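The core of the pre-training described above is a contrastive objective over matched feature pairs: two observations of the same physical surface point (from two views, or from an image and its geometry) should have similar features, while all other points in the batch act as negatives. As a rough illustration only, here is a minimal NumPy sketch of a generic InfoNCE-style loss over such pairs; the function name, shapes, and temperature value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.07):
    """Generic InfoNCE loss over matched feature pairs.

    anchors[i] and positives[i] are features of the same surface point
    seen under two constraints (e.g. two views); every positives[j],
    j != i, serves as a negative for anchors[i].
    """
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the matching pair for each anchor sits on the diagonal
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 32))
loss_matched = info_nce(feats, feats)               # perfectly aligned pairs
loss_random = info_nce(feats, rng.normal(size=(8, 32)))  # unrelated pairs
```

When the paired features agree, the diagonal dominates the softmax and the loss is small; for unrelated pairs it approaches the log of the batch size.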

Ji Hou, Saining Xie, Benjamin Graham, Angela Dai, Matthias Nießner • 2021

Related benchmarks

Task                      Dataset             Metric       Result  Rank
Semantic segmentation     Cityscapes (test)   mIoU         55.1    1145
Semantic segmentation     Cityscapes          mIoU         56.3    578
Semantic segmentation     NYU V2              mIoU         54.2    74
Semantic segmentation     ScanNet             mIoU         61.7    59
Instance segmentation     ScanNetV2 (val)     mAP@0.5      34.3    58
Object detection          NYUD v2 (test)      Mean AP (b)  18.9    24
Semantic segmentation     NYU V2              mIoU         54.8    14
2D semantic segmentation  ScanNet (val)       mIoU         60.2    10
2D instance segmentation  NYU V2              AP@0.5       28.1    6
Object detection          ScanNet 2D (val)    AP@0.5       43.7    6
