
Positive Unlabeled Contrastive Learning

About

Self-supervised pretraining on unlabeled data followed by supervised fine-tuning on labeled data is a popular paradigm for learning from limited labeled examples. We extend this paradigm to the classical positive unlabeled (PU) setting, where the task is to learn a binary classifier given only a few labeled positive samples and (often) a large amount of unlabeled samples (which could be positive or negative). We first propose a simple extension of the standard InfoNCE family of contrastive losses to the PU setting, and show that it learns superior representations compared to existing unsupervised and supervised approaches. We then develop a simple methodology to pseudo-label the unlabeled samples using a new PU-specific clustering scheme; these pseudo-labels can then be used to train the final (positive vs. negative) classifier. Our method handily outperforms state-of-the-art PU methods on several standard PU benchmark datasets, while not requiring a priori knowledge of any class prior (a common assumption in other PU methods). We also provide a simple theoretical analysis that motivates our methods.
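The abstract does not spell out the loss, but the idea of extending InfoNCE to the PU setting can be sketched as follows: labeled positives, which are known to share a class, treat every other labeled-positive view as a positive pair, while unlabeled samples fall back to standard instance-discrimination InfoNCE (only their own augmented view is a positive). The function name `punce_loss` and all implementation details below are hypothetical illustrations, not the paper's exact formulation:

```python
import numpy as np

def punce_loss(z, is_labeled_pos, temperature=0.5):
    """Hypothetical sketch of a PU-adapted InfoNCE loss.

    z: (2N, d) L2-normalized embeddings; rows i and i+N are the two
       augmented views of sample i.
    is_labeled_pos: (N,) bool; True if sample i is a labeled positive.
    """
    n2 = z.shape[0]
    n = n2 // 2
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)             # exclude self-similarity
    log_den = np.log(np.exp(sim).sum(axis=1))  # InfoNCE denominator over all other views

    lab = np.concatenate([is_labeled_pos, is_labeled_pos])  # label per view
    loss = 0.0
    for i in range(n2):
        if lab[i]:
            # Labeled positive: every other labeled-positive view is a positive pair,
            # since all labeled positives are known to share the positive class.
            pos = np.where(lab)[0]
            pos = pos[pos != i]
        else:
            # Unlabeled: standard self-supervised treatment; only the other
            # augmented view of the same image is a positive.
            pos = np.array([(i + n) % n2])
        loss += np.mean(log_den[i] - sim[i, pos])  # mean -log p(pos | anchor)
    return loss / n2
```

Treating all labeled positives as mutual positives (as in supervised contrastive learning) while leaving unlabeled samples fully self-supervised is one natural way to use the partial labels without assuming anything about the class prior, consistent with the claim above.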

Anish Acharya, Sujay Sanghavi, Li Jing, Bhargav Bhushanam, Dhruv Choudhary, Michael Rabbat, Inderjit Dhillon • 2022

Related benchmarks

Task                               Dataset          Metric    Result   Rank
Positive-Unlabeled Classification  CIFAR-10 (test)  Accuracy  95.32    19
Positive-Unlabeled Learning        SVHN (test)      Accuracy  0.9534   15
Positive-Unlabeled Learning        STL-10 (test)    Accuracy  95.13    14
