PCP-MAE: Learning to Predict Centers for Point Masked Autoencoders
About
Masked autoencoders have been widely explored in point cloud self-supervised learning, where the point cloud is generally divided into visible and masked parts. These methods typically include an encoder that accepts the visible patches (normalized) and their patch centers (positions) as input, and a decoder that accepts the encoder's output together with the centers (positions) of the masked patches to reconstruct each point in those patches. The pre-trained encoder is then used for downstream tasks. In this paper, we present a motivating empirical result: when the centers of the masked patches are fed directly to the decoder without any information from the encoder, the decoder still reconstructs well. In other words, the patch centers are important, and the reconstruction objective does not necessarily rely on the encoder's representations, which prevents the encoder from learning semantic representations. Based on this key observation, we propose a simple yet effective method, learning to Predict Centers for Point Masked AutoEncoders (PCP-MAE), which guides the model to predict these significant centers and to use the predicted centers in place of the directly provided ones. Specifically, we propose a Predicting Center Module (PCM) that shares parameters with the original encoder and adds cross-attention to predict the centers. Our method offers high pre-training efficiency compared to alternatives and achieves a large improvement over Point-MAE, surpassing it by 5.50% on OBJ-BG, 6.03% on OBJ-ONLY, and 5.17% on PB-T50-RS for 3D object classification on the ScanObjectNN dataset. The code is available at https://github.com/aHapBean/PCP-MAE.
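To make the core idea concrete, here is a minimal numpy sketch of center prediction via cross-attention: queries for the masked patches attend over features of the visible patches (as produced by a shared encoder), and a linear head maps the attended features to predicted 3D centers. All dimensions, names, and the single-head formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # scaled dot-product cross-attention: queries attend to keys/values
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

# Toy setup with hypothetical sizes (not from the paper).
rng = np.random.default_rng(0)
dim, n_vis, n_mask = 32, 10, 4
vis_feats = rng.normal(size=(n_vis, dim))      # visible-patch features from the shared encoder
mask_queries = rng.normal(size=(n_mask, dim))  # queries for the masked patches

W_out = rng.normal(size=(dim, 3))              # hypothetical linear head -> 3D coordinates
attended = cross_attention(mask_queries, vis_feats, vis_feats)
pred_centers = attended @ W_out                # (n_mask, 3) predicted patch centers
print(pred_centers.shape)
```

During pre-training, these predicted centers would replace the ground-truth centers handed to the decoder, so the encoder must produce representations informative enough to localize the masked patches.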
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic segmentation | S3DIS (Area 5) | mIoU | 61.3 | 799 |
| Part segmentation | ShapeNetPart (test) | mIoU (Inst.) | 86.1 | 312 |
| Few-shot classification | ModelNet40 5-way 20-shot | Accuracy | 99.1 | 79 |
| Few-shot classification | ModelNet40 5-way 10-shot | Accuracy | 97.4 | 79 |
| Few-shot classification | ModelNet40 10-way 20-shot | Accuracy | 95.9 | 79 |
| Few-shot classification | ModelNet40 10-way 10-shot | Accuracy | 93.5 | 79 |
| 3D object classification | ModelNet40 | Accuracy | 0.94 | 62 |
| Object classification | ScanObjectNN OBJ_BG v1.0 | Accuracy | 95.52 | 29 |
| Object classification | ScanObjectNN OBJ_ONLY v1.0 | Accuracy | 94.32 | 29 |
| 3D object classification | ModelNet40 v1.0 (test) | Accuracy | 94.2 | 27 |