
Calibratable Disambiguation Loss for Multi-Instance Partial-Label Learning

About

Multi-instance partial-label learning (MIPL) is a weakly supervised framework that extends the principles of multi-instance learning (MIL) and partial-label learning (PLL) to address the challenges of inexact supervision in both instance and label spaces. However, existing MIPL approaches often suffer from poor calibration, undermining classifier reliability. In this work, we propose a plug-and-play calibratable disambiguation loss (CDL) that simultaneously improves classification accuracy and calibration performance. The loss has two instantiations: the first one calibrates predictions based on probabilities from the candidate label set, while the second one integrates probabilities from both candidate and non-candidate label sets. The proposed CDL can be seamlessly incorporated into existing MIPL and PLL frameworks. We provide a theoretical analysis that establishes the lower bound and regularization properties of CDL, demonstrating its superiority over conventional disambiguation losses. Experimental results on benchmark and real-world datasets confirm that our CDL significantly enhances both classification and calibration performance.
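To make the two instantiations concrete, here is a minimal, illustrative sketch of a candidate-set disambiguation loss in plain Python. This is not the paper's exact CDL formulation: the function name, the log-likelihood form for variant 1, and the per-label penalty on non-candidate probabilities used for variant 2 are all assumptions chosen to mirror the description above (variant 1 uses only candidate-set probability mass; variant 2 additionally uses the non-candidate set).

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def disambiguation_loss(logits, candidate_mask, use_non_candidates=False):
    """Illustrative candidate-set disambiguation loss (hypothetical form,
    not the paper's exact CDL).

    Variant 1 (use_non_candidates=False): negative log of the total
    probability mass placed on the candidate label set.
    Variant 2 (use_non_candidates=True): additionally penalizes probability
    placed on each non-candidate label, mirroring the idea of integrating
    both candidate and non-candidate label sets.
    """
    p = softmax(logits)
    cand_mass = sum(pi for pi, c in zip(p, candidate_mask) if c)
    loss = -math.log(max(cand_mass, 1e-12))
    if use_non_candidates:
        # Extra term: push down every non-candidate label's probability.
        loss += sum(-math.log(max(1.0 - pi, 1e-12))
                    for pi, c in zip(p, candidate_mask) if not c)
    return loss
```

Note that when every label is a candidate, variant 1 is exactly zero, which is one way to see the lower-bound property the paper analyzes; the non-candidate term in variant 2 adds a strictly positive regularization-like penalty whenever non-candidate labels receive probability mass.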

Wei Tang, Yin-Fang Yang, Weijia Zhang, Min-Ling Zhang · 2025

Related benchmarks

Task            Dataset                    Metric    Result  Rank
Classification  C-Row                      Accuracy  49.06   12
Classification  C-SBN                      Accuracy  57.91   12
Classification  C-KMeans                   Accuracy  64.78   12
Classification  MNIST-MIPL r=1 (test)      Accuracy  99.93   12
Classification  FMNIST-MIPL r=1 (test)     Accuracy  93.2    12
Classification  Birdsong-MIPL r=1 (test)   Accuracy  80.38   12
Classification  MNIST-MIPL r=2 (test)      Accuracy  99.87   12
Classification  SIVAL-MIPL r=2 (test)      Accuracy  71.93   12
Classification  MNIST-MIPL r=3 (test)      Accuracy  98      12
Classification  Birdsong-MIPL r=3 (test)   Accuracy  77.82   12
(Showing 10 of 38 benchmark rows.)
