
Regress Before Construct: Regress Autoencoder for Point Cloud Self-supervised Learning

About

Masked Autoencoders (MAE) have demonstrated promising performance in self-supervised learning for both 2D and 3D computer vision. Nevertheless, existing MAE-based methods still have certain drawbacks. First, the functional decoupling between the encoder and decoder is incomplete, which limits the encoder's representation learning ability. Second, downstream tasks use only the encoder, failing to fully leverage the knowledge acquired through the encoder-decoder architecture in the pretext task. In this paper, we propose Point Regress AutoEncoder (Point-RAE), a new regressive autoencoder scheme for point cloud self-supervised learning. The proposed method decouples the functions of the decoder and the encoder by introducing a mask regressor, which predicts the masked patch representations from the visible patch representations encoded by the encoder; the decoder then reconstructs the target from the predicted masked patch representations. By doing so, we minimize the impact of decoder updates on the representation space of the encoder. Moreover, we introduce an alignment constraint to ensure that the representations for masked patches, predicted from the encoded representations of visible patches, are aligned with the masked patch representations computed by the encoder. To make full use of the knowledge learned in the pre-training stage, we design a new fine-tuning mode for the proposed Point-RAE. Extensive experiments demonstrate that our approach is efficient during pre-training and generalizes well on various downstream tasks. Specifically, our pre-trained models achieve a high accuracy of 90.28% on the ScanObjectNN hardest split and 94.1% accuracy on ModelNet40, surpassing all other self-supervised learning methods. Our code and pre-trained models are publicly available at: https://github.com/liuyyy111/Point-RAE.
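The pipeline described above (encode visible patches, regress masked-patch latents, decode only from the predicted latents, and align predictions with encoder-computed targets) can be illustrated with a minimal NumPy sketch. All dimensions, the single linear layers, and the mean-pooled context are illustrative simplifications assumed here for brevity; they stand in for the paper's Transformer blocks and are not the actual Point-RAE architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not the paper's actual sizes)
num_patches, patch_dim, latent_dim = 64, 32, 16
mask_ratio = 0.6

def linear(d_in, d_out):
    """Random linear map as a stand-in for a learned network."""
    return rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)

W_enc = linear(patch_dim, latent_dim)   # stand-in for the encoder
W_reg = linear(latent_dim, latent_dim)  # stand-in for the mask regressor
W_dec = linear(latent_dim, patch_dim)   # stand-in for the decoder

patches = rng.standard_normal((num_patches, patch_dim))
n_masked = int(mask_ratio * num_patches)
perm = rng.permutation(num_patches)
masked_idx, visible_idx = perm[:n_masked], perm[n_masked:]

# 1. Encode only the visible patches.
z_visible = patches[visible_idx] @ W_enc

# 2. The mask regressor predicts masked-patch latents from the visible
#    latents (here: a mean-pooled context, simplifying cross-attention).
context = z_visible.mean(axis=0, keepdims=True)
z_pred = np.repeat(context @ W_reg, n_masked, axis=0)

# 3. The decoder reconstructs masked patches from predicted latents only,
#    so decoder updates do not directly reshape the encoder's outputs.
recon = z_pred @ W_dec
recon_loss = np.mean((recon - patches[masked_idx]) ** 2)

# 4. Alignment constraint: predicted masked latents should match the
#    latents the encoder itself produces for the masked patches
#    (treated as stop-gradient targets during training).
z_target = patches[masked_idx] @ W_enc
align_loss = np.mean((z_pred - z_target) ** 2)

total_loss = recon_loss + align_loss
print(recon.shape, n_masked)
```

The key structural point the sketch captures is that the reconstruction path runs through the regressor's predictions rather than through encoder features directly, which is how the method decouples decoder training from the encoder's representation space.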

Yang Liu, Chen Chen, Can Wang, Xulin King, Mengyuan Liu • 2023

Related benchmarks

Task                      Dataset                          Metric    Result  Rank
3D Object Classification  ModelNet40 (test)                -         -       302
Shape classification      ModelNet40 (test)                OA        94.1    255
Few-shot classification   ModelNet40 10-way 20-shot        Accuracy  95.8    79
Few-shot classification   ModelNet40 5-way 10-shot         Accuracy  97.3    79
Few-shot classification   ModelNet40 10-way 10-shot        Accuracy  93.3    79
Few-shot classification   ModelNet40 5-way 20-shot         Accuracy  98.7    79
Shape classification      ScanObjectNN PB_T50_RS           OA        90.3    72
3D Classification         ScanObjectNN PB-T50-RS official  Accuracy  90.28   42
3D Classification         ScanObjectNN OBJ-BG official     Accuracy  95.53   13
3D Classification         ScanObjectNN OBJ-ONLY official   Accuracy  93.63   13

Showing 10 of 11 rows

Other info

Code

https://github.com/liuyyy111/Point-RAE