
Learning from 2D: Contrastive Pixel-to-Point Knowledge Transfer for 3D Pretraining

About

Most 3D neural networks are trained from scratch owing to the lack of large-scale labeled 3D datasets. In this paper, we present a novel 3D pretraining method that leverages 2D networks learned from rich 2D datasets. We propose contrastive pixel-to-point knowledge transfer to effectively utilize 2D information by mapping pixel-level and point-level features into the same embedding space. Because of the heterogeneity between 2D and 3D networks, we introduce a back-projection function that aligns 2D and 3D features to make the transfer possible. Additionally, we devise an upsampling feature projection layer to increase the spatial resolution of high-level 2D feature maps, which enables learning fine-grained 3D representations. Given a pretrained 2D network, the proposed pretraining process requires no additional 2D or 3D labeled data, further alleviating the expensive cost of 3D data annotation. To the best of our knowledge, we are the first to exploit existing 2D trained weights to pretrain 3D deep neural networks. Our extensive experiments show that 3D models pretrained with 2D knowledge improve the performance of 3D networks across various real-world 3D downstream tasks.
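The core idea of pixel-to-point transfer can be sketched as a standard InfoNCE-style contrastive loss over paired features: each 3D point is back-projected into the image to fetch its corresponding pixel feature, and the matched (pixel, point) pair is treated as a positive while all other points in the batch serve as negatives. The sketch below is illustrative only, assuming pre-extracted, already-paired feature arrays; the function name, temperature value, and numpy implementation are our own choices, not the authors' code.

```python
import numpy as np

def pixel_to_point_infonce(pixel_feats, point_feats, temperature=0.07):
    """InfoNCE loss over back-projected (pixel, point) feature pairs.

    pixel_feats: (N, D) 2D features sampled at the projected point locations
    point_feats: (N, D) 3D features for the same N points
    Row i of each array is a positive pair; all other rows are negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    p2 = pixel_feats / np.linalg.norm(pixel_feats, axis=1, keepdims=True)
    p3 = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    logits = p2 @ p3.T / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # softmax cross-entropy with positives on the diagonal
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

In this formulation, minimizing the loss pulls each point feature toward the pixel feature of its back-projected location and pushes it away from the pixel features of other points, which is what distills 2D knowledge into the 3D encoder without any labels.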

Yueh-Cheng Liu, Yu-Kai Huang, Hung-Yueh Chiang, Hung-Ting Su, Zhe-Yu Liu, Chin-Tang Chen, Ching-Yu Tseng, Winston H. Hsu • 2021

Related benchmarks

Task                       Dataset                                       Metric  Result  Rank
3D Semantic Segmentation   ScanNet (val)                                 mIoU    64.2    100
3D Semantic Segmentation   SemanticKITTI (val)                           mIoU    53.1    54
Object Detection           nuScenes (val)                                mAP     48.8    41
3D Semantic Segmentation   nuScenes (val)                                mIoU    70.1    37
3D Dense Captioning        ScanRefer Oracle DC                           CIDEr   77.82   7
3D Object Detection        nuScenes 20% labeled frames v1.0 (trainval)   NDS     49.2    6
