
Mask3D: Pre-training 2D Vision Transformers by Learning Masked 3D Priors

About

Current popular backbones in computer vision, such as Vision Transformers (ViT) and ResNets, are trained to perceive the world from 2D images. To more effectively embed 3D structural priors into 2D backbones, we propose Mask3D, which leverages existing large-scale RGB-D data in self-supervised pre-training to instill these 3D priors into 2D learned feature representations. In contrast to traditional 3D contrastive learning paradigms that require 3D reconstructions or multi-view correspondences, our approach is simple: we formulate a pretext reconstruction task by masking RGB and depth patches in individual RGB-D frames. We demonstrate that Mask3D is particularly effective at embedding 3D priors into the powerful 2D ViT backbone, enabling improved representation learning for various scene understanding tasks, such as semantic segmentation, instance segmentation, and object detection. Experiments show that Mask3D notably outperforms existing self-supervised 3D pre-training approaches on ScanNet, NYUv2, and Cityscapes image understanding tasks, with an improvement of +6.5% mIoU over the state-of-the-art Pri3D on ScanNet image semantic segmentation.
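The pretext task above boils down to splitting an RGB-D frame into patch tokens, hiding a random subset, and training the network to reconstruct the masked RGB and depth content. The following is a minimal sketch of just the masking step, with illustrative patch size and mask ratio (not the paper's exact settings, and not the authors' implementation):

```python
import numpy as np

def mask_rgbd_patches(rgb, depth, patch=16, mask_ratio=0.75, seed=0):
    """Split an RGB-D frame into non-overlapping patch tokens and mask
    a random subset. Patch size, mask ratio, and the return format are
    illustrative choices for this sketch."""
    h, w, _ = rgb.shape
    gh, gw = h // patch, w // patch
    n = gh * gw  # number of patch tokens

    # Choose which patches to mask.
    rng = np.random.default_rng(seed)
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, size=int(n * mask_ratio), replace=False)] = True

    def to_patches(x):
        # (H, W, C) -> (n, patch*patch*C) flattened patch tokens.
        c = x.shape[2]
        return (x.reshape(gh, patch, gw, patch, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(n, -1))

    rgb_tok = to_patches(rgb)
    depth_tok = to_patches(depth[..., None])

    # The encoder would see only the visible RGB patches; a decoder is
    # then trained to reconstruct the masked RGB and depth patches.
    visible = rgb_tok[~mask]
    targets = {"rgb": rgb_tok[mask], "depth": depth_tok[mask]}
    return visible, targets, mask

rgb = np.random.rand(224, 224, 3).astype(np.float32)
depth = np.random.rand(224, 224).astype(np.float32)
visible, targets, mask = mask_rgbd_patches(rgb, depth)
print(visible.shape, targets["depth"].shape, int(mask.sum()))
```

With a 224×224 frame and 16×16 patches there are 196 tokens; at a 0.75 mask ratio, 147 are masked and 49 visible RGB tokens reach the encoder, while the masked RGB and depth tokens become reconstruction targets.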

Ji Hou, Xiaoliang Dai, Zijian He, Angela Dai, Matthias Nießner • 2023

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Semantic segmentation | ADE20K (val) | mIoU | 47.7 | 2731 |
| Semantic segmentation | Cityscapes (test) | mIoU | 66.4 | 1145 |
| Semantic segmentation | NYU V2 | mIoU | 56.9 | 74 |
| Instance segmentation | ScanNetV2 (val) | mAP@0.5 | 41.2 | 58 |
| Object detection | NYUD v2 (test) | Mean AP (b) | 25.9 | 24 |
| 2D semantic segmentation | ScanNet (val) | mIoU | 66.7 | 10 |
| Depth estimation | DV | Delta 1 accuracy | 70.9 | 8 |
| Depth estimation | SCARED | Delta 1 | 61.8 | 8 |
| Segmentation | CholecInst | mIoU | 67.5 | 8 |
| Segmentation | EndoVis 18 | mIoU | 40.6 | 8 |

Showing 10 of 16 rows

Other info

Code
