
Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders

About

Pre-training on large-scale image data has become the de-facto approach for learning robust 2D representations. In contrast, due to expensive data acquisition and annotation, the paucity of large-scale 3D datasets severely hinders the learning of high-quality 3D features. In this paper, we propose an alternative that obtains superior 3D representations from 2D pre-trained models via Image-to-Point Masked Autoencoders, named I2P-MAE. Through self-supervised pre-training, we leverage the well-learned 2D knowledge to guide 3D masked autoencoding, which reconstructs the masked point tokens with an encoder-decoder architecture. Specifically, we first utilize off-the-shelf 2D models to extract multi-view visual features of the input point cloud, and then conduct two types of image-to-point learning schemes on top. First, we introduce a 2D-guided masking strategy that keeps semantically important point tokens visible to the encoder. Compared with random masking, the network can thus concentrate on significant 3D structures and recover the masked tokens from key spatial cues. Second, we enforce these visible tokens to reconstruct the corresponding multi-view 2D features after the decoder. This enables the network to effectively inherit high-level 2D semantics learned from rich image data for discriminative 3D modeling. Aided by our image-to-point pre-training, the frozen I2P-MAE, without any fine-tuning, achieves 93.4% accuracy with a linear SVM on ModelNet40, competitive with the fully trained results of existing methods. By further fine-tuning on ScanObjectNN's hardest split, I2P-MAE attains state-of-the-art 90.11% accuracy, +3.68% over the second best, demonstrating superior transferable capacity. Code is available at https://github.com/ZrrSkywalker/I2P-MAE.
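The 2D-guided masking idea described above can be illustrated with a minimal sketch: per-token saliency scores from several projected views are averaged, and only the highest-scoring tokens remain visible to the encoder. This is an assumed interface for illustration, not the authors' implementation; the function name, array shapes, and keep ratio are all hypothetical.

```python
import numpy as np

def saliency_guided_masking(point_tokens, view_saliency, keep_ratio=0.4):
    """Sketch of 2D-guided masking (hypothetical interface).

    point_tokens : (N, C) array of embedded point tokens
    view_saliency: (V, N) array of per-view token saliency, e.g. from
                   back-projecting 2D feature-map activations onto points
    Returns the visible tokens plus the visible/masked index sets.
    """
    # Aggregate multi-view saliency into a single score per token.
    scores = view_saliency.mean(axis=0)                  # shape (N,)

    n = point_tokens.shape[0]
    num_keep = max(1, int(n * keep_ratio))

    # High-saliency tokens stay visible to the encoder; the rest are masked
    # and must be reconstructed by the decoder.
    order = np.argsort(-scores)
    visible_idx, masked_idx = order[:num_keep], order[num_keep:]
    return point_tokens[visible_idx], visible_idx, masked_idx
```

In contrast to uniform random masking, this keeps the semantically important structures (as judged by the 2D models) in the encoder's input, matching the paper's motivation of recovering masked tokens from key spatial cues.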

Renrui Zhang, Liuhui Wang, Yu Qiao, Peng Gao, Hongsheng Li• 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Object Classification | ModelNet40 (test) | Accuracy | 93.4 | 302 |
| 3D Point Cloud Classification | ModelNet40 (test) | OA | 94.1 | 297 |
| Object Classification | ScanObjectNN OBJ_BG | Accuracy | 94.15 | 215 |
| Object Classification | ScanObjectNN PB_T50_RS | Accuracy | 90.11 | 195 |
| Object Classification | ScanObjectNN OBJ_ONLY | Overall Accuracy | 91.57 | 166 |
| Shape Part Segmentation | ShapeNet (test) | Mean IoU | 22.6 | 95 |
| Few-shot classification | ModelNet40 5-way 10-shot | Accuracy | 97 | 79 |
| Few-shot classification | ModelNet40 10-way 20-shot | Accuracy | 95.5 | 79 |
| Few-shot classification | ModelNet40 5-way 20-shot | Accuracy | 98.3 | 79 |
| Few-shot classification | ModelNet40 10-way 10-shot | Accuracy | 92.6 | 79 |

Showing 10 of 67 rows.
