M2m: Imbalanced Classification via Major-to-minor Translation
About
In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks trained on them generalize poorly under a balanced testing criterion. In this paper, we explore a novel yet simple way to alleviate this issue by augmenting less-frequent classes via translating samples (e.g., images) from more-frequent classes. This simple approach enables a classifier to learn more generalizable features of minority classes by transferring and leveraging the diversity of the majority information. Our experimental results on a variety of class-imbalanced datasets show that the proposed method significantly improves generalization on minority classes compared to existing re-sampling or re-weighting methods, and even surpasses previous state-of-the-art methods for imbalanced classification.
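The core idea can be illustrated with a minimal sketch (not the paper's implementation): starting from a majority-class sample, take gradient steps on the input so that a pretrained classifier assigns it to a target minority class; the translated sample then augments that class. The linear softmax "classifier" and all weights below are hypothetical toy stand-ins.

```python
import numpy as np

# Hypothetical pretrained classifier: 3 classes, 5 input features.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def translate(x0, target, steps=200, lr=0.1):
    """Gradient ascent on log p(target | x) with respect to the input x."""
    x = x0.copy()
    for _ in range(steps):
        p = softmax(W @ x + b)
        # d/dx log p_target = w_target - sum_k p_k * w_k
        x += lr * (W[target] - p @ W)
    return x

x_major = rng.normal(size=5)            # a sample from a majority class
x_minor = translate(x_major, target=2)  # translated toward minority class 2
print(softmax(W @ x_minor + b)[2])      # target-class confidence after translation
```

In the actual method, the translation is driven by a classifier trained on the imbalanced data (with a regularization term on the perturbation), and the resulting samples are used only as synthetic minority-class training data.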
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | CIFAR-10 long-tailed (test) | Top-1 Acc: 87.5 | 201 |
| Image Classification | CIFAR100 long-tailed (test) | Accuracy: 57.6 | 155 |
| Image Classification | CIFAR-100 Long-Tailed (test) | Top-1 Accuracy: 57.6 | 149 |
| Image Classification | CIFAR-100-LT IF 100 (test) | Top-1 Acc: 43.5 | 77 |
| Image Classification | CIFAR-100 LT (val) | -- | 69 |
| Image Classification | CIFAR10 LT (test) | -- | 68 |
| Image Classification | CIFAR-10-LT (val) | -- | 65 |
| Image Classification | CIFAR-100-LT Imbalance Ratio 100 (test) | Accuracy: 42.9 | 62 |
| Image Classification | CIFAR-100 LT Imbalance Ratio 10 (test) | Accuracy: 58.2 | 59 |
| Image Classification | CIFAR-10 Long Tailed Imbalance Ratio 50 (test) | Top-1 Accuracy: 85.5 | 57 |