
Leveraging Label Proportion Prior for Class-Imbalanced Semi-Supervised Learning

About

Semi-supervised learning (SSL) often suffers under class imbalance, where pseudo-labeling amplifies majority bias and suppresses minority performance. We address this issue with a lightweight framework that, to our knowledge, is the first to introduce Proportion Loss from learning from label proportions (LLP) into SSL as a regularization term. Proportion Loss aligns model predictions with the global class distribution, mitigating bias across both majority and minority classes. To further stabilize training, we formulate a stochastic variant that accounts for fluctuations in mini-batch composition. Experiments on the Long-tailed CIFAR-10 benchmark show that integrating Proportion Loss into FixMatch and ReMixMatch consistently improves performance over the baselines across imbalance severities and label ratios, and achieves competitive or superior results compared to existing CISSL methods, particularly under scarce-label conditions.
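To make the core idea concrete, here is a minimal sketch of a Proportion-Loss-style regularizer: the batch-averaged predicted class distribution is compared against the known global label proportions via cross-entropy. The function name `proportion_loss`, the NumPy formulation, and the epsilon smoothing are illustrative assumptions, not the paper's exact implementation (which operates on mini-batches during SSL training and includes a stochastic variant for batch-composition noise).

```python
import numpy as np

def proportion_loss(probs: np.ndarray, prior: np.ndarray) -> float:
    """Cross-entropy between the mean predicted class distribution of a
    batch and the global class proportions (the label-proportion prior).

    probs: (N, C) array of per-sample predicted class probabilities.
    prior: (C,) array of global class proportions, summing to 1.
    Note: this is an illustrative sketch, not the authors' code.
    """
    batch_mean = probs.mean(axis=0)  # estimated class proportions in the batch
    eps = 1e-12                      # avoid log(0) for empty classes
    return float(-np.sum(prior * np.log(batch_mean + eps)))

# A batch whose average prediction matches the prior incurs a lower
# loss than one collapsed onto the majority class.
prior = np.array([0.7, 0.3])
matched = np.array([[1.0, 0.0]] * 7 + [[0.0, 1.0]] * 3)  # mean = [0.7, 0.3]
collapsed = np.array([[1.0, 0.0]] * 10)                  # mean = [1.0, 0.0]
print(proportion_loss(matched, prior) < proportion_loss(collapsed, prior))
```

The loss is minimized when the batch-mean prediction equals the prior, which is how it counteracts the majority bias that pseudo-labeling otherwise amplifies.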

Kohki Akiba, Shinnosuke Matsuo, Shota Harada, Ryoma Bise • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Long-Tailed Image Classification | CIFAR-10-LT (gamma=100, beta=20%), test | Overall Accuracy: 77.1 | 21 |
| Image Classification | CIFAR-10-LT (gamma=10, beta=2%), test | Accuracy: 88.1 | 8 |
| Image Classification | CIFAR-10-LT (gamma=20, beta=4%), test | Accuracy (%): 85.6 | 8 |
| Image Classification | CIFAR-10-LT (gamma=50, beta=10%), test | Accuracy: 81.2 | 8 |
