Unsupervised Point Cloud Pre-Training via Contrasting and Clustering
About
Annotating large-scale point clouds is highly time-consuming and often infeasible for many complex real-world tasks. Point cloud pre-training has therefore become a promising strategy for learning discriminative representations without labeled data. In this paper, we propose a general unsupervised pre-training framework, termed ConClu, which jointly integrates contrasting and clustering. The contrasting objective maximizes the similarity between feature representations extracted from two augmented views of the same point cloud, while the clustering objective simultaneously partitions the data and enforces consistency between cluster assignments across augmentations. Experimental results on multiple downstream tasks show that our method outperforms state-of-the-art approaches, demonstrating the effectiveness of the proposed framework. Code is available at https://github.com/gfmei/conclu.
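The two objectives described above can be sketched in code. The snippet below is a minimal, illustrative PyTorch-style sketch under common design assumptions (an NT-Xent contrastive term between the two views and a SwAV-style swapped-prediction clustering term with Sinkhorn-balanced assignments); all function names, hyperparameters, and loss details are assumptions for illustration and are not the released ConClu implementation.

```python
# Illustrative sketch of a joint contrasting + clustering objective (hypothetical code,
# not the released ConClu implementation).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """Contrasting term: pull together the two augmented views of each point cloud."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2B, D)
    sim = z @ z.t() / temperature                        # (2B, 2B) cosine similarities
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))           # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

@torch.no_grad()
def sinkhorn(scores, n_iters=3, eps=0.05):
    """Balanced soft cluster assignments via Sinkhorn-Knopp normalization."""
    q = torch.exp(scores / eps).t()                      # (K, B)
    q /= q.sum()
    K, B = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True)                  # normalize over the batch
        q /= K
        q /= q.sum(dim=0, keepdim=True)                  # normalize over the clusters
        q /= B
    return (q * B).t()                                    # (B, K)

def cluster_consistency(z1, z2, prototypes, temperature=0.1):
    """Clustering term: each view predicts the other view's cluster assignment."""
    p = F.normalize(prototypes, dim=1)                    # (K, D) learnable cluster centers
    s1 = F.normalize(z1, dim=1) @ p.t()                   # (B, K) similarity to prototypes
    s2 = F.normalize(z2, dim=1) @ p.t()
    q1, q2 = sinkhorn(s1), sinkhorn(s2)                   # soft assignments, no gradient
    loss = -0.5 * ((q2 * F.log_softmax(s1 / temperature, dim=1)).sum(1).mean()
                   + (q1 * F.log_softmax(s2 / temperature, dim=1)).sum(1).mean())
    return loss

def joint_loss(encoder, view1, view2, prototypes, alpha=1.0):
    """view1 / view2: two augmentations of the same batch of point clouds, shape (B, N, 3)."""
    z1, z2 = encoder(view1), encoder(view2)               # (B, D) global features
    return nt_xent(z1, z2) + alpha * cluster_consistency(z1, z2, prototypes)
```

In this sketch the contrastive term discriminates instances, while the Sinkhorn-balanced swapped prediction keeps the cluster assignments of the two views consistent; the weighting `alpha` between the two terms is an assumed hyperparameter.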
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Part Segmentation | ShapeNetPart | -- | 246 |
| Object Classification | ModelNet10 (test) | Accuracy: 95 | 60 |
| Object Classification | ModelNet40 1.0 (test) | Accuracy: 91.6 | 19 |