
Label-Imbalanced and Group-Sensitive Classification under Overparameterization

About

The goal in label-imbalanced and group-sensitive classification is to optimize relevant metrics such as balanced error and equal opportunity. Classical methods, such as weighted cross-entropy, fail when training deep nets to the terminal phase of training (TPT), that is, training beyond zero training error. This observation has motivated a recent flurry of activity in developing heuristic alternatives following the intuitive mechanism of promoting a larger margin for minorities. In contrast to previous heuristics, we follow a principled analysis explaining how different loss adjustments affect margins. First, we prove that for all linear classifiers trained in the TPT, it is necessary to introduce multiplicative, rather than additive, logit adjustments so that the interclass margins change appropriately. To show this, we discover a connection between the multiplicative CE modification and cost-sensitive support-vector machines. Perhaps counterintuitively, we also find that, at the start of training, the same multiplicative weights can actually harm the minority classes. Thus, while additive adjustments are ineffective in the TPT, we show that they can speed up convergence by countering the initial negative effect of the multiplicative weights. Motivated by these findings, we formulate the vector-scaling (VS) loss, which captures existing techniques as special cases. Moreover, we introduce a natural extension of the VS-loss to group-sensitive classification, thus treating the two common types of imbalance (label/group) in a unifying way. Importantly, our experiments on state-of-the-art datasets are fully consistent with our theoretical insights and confirm the superior performance of our algorithms. Finally, for imbalanced Gaussian-mixture data, we perform a generalization analysis, revealing tradeoffs between balanced/standard error and equal opportunity.
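The VS-loss described above is a cross-entropy on per-class adjusted logits, combining multiplicative weights (which reshape interclass margins in the TPT) with additive offsets (which aid early convergence). The sketch below is a minimal NumPy illustration, not the authors' reference implementation; the parameterization of `delta` and `iota` from the class priors, and the hyperparameters `gamma` and `tau`, are assumptions for illustration.

```python
import numpy as np

def vs_loss(logits, labels, delta, iota):
    """Vector-scaling (VS) loss: cross-entropy computed on the
    adjusted logits delta[c] * logit_c + iota[c].

    delta : multiplicative per-class weights (margin shaping in TPT)
    iota  : additive per-class offsets (faster early convergence)
    """
    adj = logits * delta + iota                       # per-class adjustment
    adj = adj - adj.max(axis=1, keepdims=True)        # numerical stability
    log_probs = adj - np.log(np.exp(adj).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Hypothetical 3-class setup with class priors pi. One natural choice
# (assumed here) scales delta by relative class frequency and sets the
# additive term from log-priors:
pi = np.array([0.7, 0.2, 0.1])
gamma, tau = 0.3, 1.0
delta = (pi / pi.max()) ** gamma   # minorities get smaller multiplicative weight
iota = tau * np.log(pi)            # minorities get a larger negative offset

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 1.5,  0.3]])
labels = np.array([0, 1])
loss = vs_loss(logits, labels, delta, iota)
```

Note that with `delta = 1` and `iota = 0` the expression reduces to the standard cross-entropy, consistent with the claim that the VS-loss captures existing techniques as special cases.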

Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, Christos Thrampoulidis • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Long-Tailed Image Classification | CIFAR100-LT Imbalance Ratio 10 | Accuracy 85.1 | 32 |
| Long-tail Image Classification | CIFAR100-LT imbalance ratio 100 (test) | Accuracy 79.1 | 32 |
| Image Classification | ImageNet LT | Top-1 Acc (Forward-LT, IR=50) 60.27 | 23 |
| Image Classification | CIFAR-100 long-tailed (rho=100) (test) | Accuracy 41.7 | 22 |
| Image Classification | CIFAR-10 rho=100 long-tailed (test) | Accuracy 78.6 | 20 |
| Image Classification | CIFAR-10-LT Ratio 100 (test) | Balanced Accuracy 0.816 | 17 |
| Image Classification | CIFAR-10-LT Ratio 10 (test) | Balanced Accuracy 89.1 | 14 |
| Image Classification | CIFAR-100-LT Ratio 100 (test) | Balanced Accuracy 46.3 | 14 |
| Test-Agnostic Long-tail Recognition | CIFAR-100-LT SADE Setting (test) | Accuracy Forward-LT (100) 58.8 | 12 |
| Long-tail recognition | CIFAR-10-LT Backward-LT | Accuracy (Metric 1) 82.1 | 9 |

Showing 10 of 21 rows.
