
Beyond Myopia: Learning from Positive and Unlabeled Data through Holistic Predictive Trends

About

Learning binary classifiers from positive and unlabeled data (PUL) is vital in many real-world applications, especially when verifying negative examples is difficult. Despite the impressive empirical performance of recent PUL methods, challenges like accumulated errors and increased estimation bias persist due to the absence of negative labels. In this paper, we unveil an intriguing yet long-overlooked observation in PUL: resampling the positive data in each training iteration to ensure a balanced distribution between positive and unlabeled examples yields strong early-stage performance, and the predictive trends for the positive and negative classes display distinctly different patterns. Specifically, the scores (output probabilities) of unlabeled negative examples consistently decrease, while those of unlabeled positive examples remain largely chaotic. Instead of focusing on classification within individual time frames, we adopt a holistic approach, interpreting the score sequence of each example as a temporal point process (TPP). This reformulates the core problem of PUL as recognizing trends in these scores. We then propose a novel TPP-inspired measure for trend detection and prove its asymptotic unbiasedness in predicting changes. Notably, our method accomplishes PUL without additional parameter tuning or prior assumptions, offering an alternative perspective on this problem. Extensive experiments verify the superiority of our method, particularly in a highly imbalanced real-world setting, where it achieves improvements of up to 11.3% in key metrics. The code is available at https://github.com/wxr99/HolisticPU.
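The key observation above — negative trajectories trend downward while positive ones fluctuate — can be illustrated with a toy trend statistic. This is a simplified sketch, not the paper's exact TPP-inspired measure: `trend_score` below simply counts the fraction of epochs in which an example's predicted score decreased, and the trajectories are made-up numbers for illustration.

```python
import numpy as np

def trend_score(scores):
    """Fraction of successive epochs in which the score decreased.

    A simplified stand-in for the paper's TPP-inspired trend measure
    (the exact statistic is defined in the paper). Values near 1.0
    indicate a consistently decreasing trajectory, which the abstract
    associates with unlabeled *negative* examples; chaotic positive
    trajectories land near 0.5.
    """
    diffs = np.diff(np.asarray(scores, dtype=float))
    return float(np.mean(diffs < 0))

# Toy score trajectories over 10 training epochs (illustrative only):
neg_like = [0.9, 0.8, 0.7, 0.6, 0.5, 0.45, 0.4, 0.3, 0.25, 0.2]  # steady decline
pos_like = [0.6, 0.7, 0.5, 0.8, 0.6, 0.9, 0.7, 0.8, 0.75, 0.85]  # chaotic

print(trend_score(neg_like))  # 1.0: every step decreased
print(trend_score(pos_like))  # well below 1.0: no consistent decline
```

In this framing, an unlabeled example whose trajectory scores near 1.0 would be treated as negative; the paper's actual measure additionally comes with an asymptotic unbiasedness guarantee.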

Xinrui Wang, Wenhai Wan, Chuanxin Geng, Shaoyuan LI, Songcan Chen • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | CIFAR-10 (test) | Accuracy | 91.1 | 3381
Image Classification | STL-10 (test) | Accuracy | 85.3 | 357
Image Classification | F-MNIST (test) | Accuracy | 96 | 64
Fraud Detection | Credit Card Fraud dataset | F1 Score | 0.991 | 12
Positive-Unlabeled Classification | Alzheimer dataset | F1 Score | 74.5 | 11
Classification | F-MNIST unlabeled 1 (train) | Accuracy | 95.41 | 5
Classification | F-MNIST unlabeled 2 (train) | Accuracy | 96 | 5
Classification | CIFAR10 unlabeled 1 (train) | Accuracy | 91.42 | 5
Classification | CIFAR10 unlabeled 2 (train) | Accuracy | 91.17 | 5
Classification | Credit Card unlabeled (train) | Recall | 0.989 | 5

Showing 10 of 19 rows.

Other info

Code

https://github.com/wxr99/HolisticPU