
Binary Classification with Confidence Difference

About

Recently, learning with soft labels has been shown to achieve better performance than learning with hard labels in terms of model generalization, calibration, and robustness. However, collecting pointwise labeling confidence for all training examples can be challenging and time-consuming in real-world scenarios. This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification. Instead of pointwise labeling confidence, we are given only unlabeled data pairs, each annotated with a confidence difference that specifies the difference in the two examples' probabilities of being positive. We propose a risk-consistent approach to tackle this problem and show that its estimation error bound achieves the optimal convergence rate. We also introduce a risk correction approach to mitigate overfitting, whose consistency and convergence rate are also proven. Extensive experiments on benchmark data sets and a real-world recommender system data set validate the effectiveness of our proposed approaches in exploiting the supervision information of the confidence difference.
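To make the supervision signal concrete, here is a small illustrative sketch of how ConfDiff data could arise. The mixture distribution, its parameters, and all function names are assumptions chosen for illustration; only the final quantity c = p(y=+1 | x') − p(y=+1 | x) for an unlabeled pair (x, x') comes from the problem setting described above. This is not the paper's estimator, just the data-generation side.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): a 1-D two-Gaussian
# mixture with class prior pi_plus for the positive class.
pi_plus = 0.5
mu_pos, mu_neg, sigma = +1.0, -1.0, 1.0

def gauss(x, mu, s):
    # Gaussian density N(mu, s^2) evaluated at x.
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def posterior_pos(x):
    """p(y=+1 | x), available in closed form for this synthetic mixture."""
    num = pi_plus * gauss(x, mu_pos, sigma)
    den = num + (1 - pi_plus) * gauss(x, mu_neg, sigma)
    return num / den

# Draw n unlabeled pairs (x, x') from the marginal distribution.
n = 5
y = rng.random(n * 2) < pi_plus
x_all = np.where(y,
                 rng.normal(mu_pos, sigma, n * 2),
                 rng.normal(mu_neg, sigma, n * 2))
x, x_prime = x_all[:n], x_all[n:]

# ConfDiff supervision: the learner observes only the difference of the
# positive-class probabilities, never the pointwise confidences themselves.
c = posterior_pos(x_prime) - posterior_pos(x)
for xi, xpi, ci in zip(x, x_prime, c):
    print(f"pair ({xi:+.2f}, {xpi:+.2f})  confidence difference c = {ci:+.3f}")
```

Note that c always lies in [−1, 1], and a single value of c is consistent with many different pairs of pointwise confidences, which is what makes this supervision strictly weaker than pointwise soft labels.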

Wei Wang, Lei Feng, Yuchen Jiang, Gang Niu, Min-Ling Zhang, Masashi Sugiyama • 2023

Related benchmarks

Task                   Dataset                          Result          Rank
Image Classification   MNIST (test)                     Accuracy: 98.3  882
Classification         Fashion (test)                   Accuracy: 97.5  51
Binary Classification  CIFAR-10 (test)                  Accuracy: 87.4  27
Binary Classification  Kuzushiji (test)                 Accuracy: 91.5  27
Classification         Optdigits (pi+=0.2), UCI (test)  Accuracy: 96.3  9
Classification         USPS (pi+=0.2), UCI (test)       Accuracy: 96.0  9
Classification         Pendigits (pi+=0.2), UCI (test)  Accuracy: 98.8  9
Classification         Letter (pi+=0.2), UCI (test)     Accuracy: 94.2  9
Classification         Optdigits (pi+=0.5), UCI (test)  Accuracy: 96.2  9
Classification         USPS (pi+=0.5), UCI (test)       Accuracy: 95.9  9

Showing 10 of 17 rows.
