
Asymmetric Loss For Multi-Label Classification

About

In a typical multi-label setting, a picture contains, on average, a few positive labels and many negative ones. This positive-negative imbalance dominates the optimization process and can lead to under-emphasizing gradients from positive labels during training, resulting in poor accuracy. In this paper, we introduce a novel asymmetric loss ("ASL"), which operates differently on positive and negative samples. The loss dynamically down-weights and hard-thresholds easy negative samples, while also discarding possibly mislabeled samples. We demonstrate how ASL can balance the probabilities of different samples, and how this balancing translates into better mAP scores. With ASL, we reach state-of-the-art results on multiple popular multi-label datasets: MS-COCO, Pascal-VOC, NUS-WIDE and Open Images. We also demonstrate ASL's applicability to other tasks, such as single-label classification and object detection. ASL is effective, easy to implement, and does not increase training time or complexity. Implementation is available at: https://github.com/Alibaba-MIIL/ASL.
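The asymmetric behavior described above can be sketched for a single label as follows. This is a minimal illustrative implementation, not the code from the linked repository; the function name and the defaults (focusing parameters γ+ = 0, γ− = 4, probability margin m = 0.05) follow values reported in the paper, but the structure here is a simplification.

```python
import math

def asymmetric_loss(p, y, gamma_pos=0.0, gamma_neg=4.0, margin=0.05):
    """Per-sample asymmetric loss (ASL) for one label.

    p: predicted probability in [0, 1]; y: ground-truth label (0 or 1).
    Illustrative sketch only; gamma_pos, gamma_neg, and margin use the
    paper's reported defaults.
    """
    eps = 1e-8  # numerical floor inside the logs
    if y == 1:
        # Positive part: focal-style down-weighting with exponent gamma_pos
        # (gamma_pos = 0 reduces this to plain cross-entropy on positives).
        return -((1.0 - p) ** gamma_pos) * math.log(max(p, eps))
    # Negative part: shift the probability down by the margin (hard
    # thresholding), so easy negatives with p <= margin contribute
    # exactly zero loss; gamma_neg down-weights the remaining easy ones.
    p_m = max(p - margin, 0.0)
    return -(p_m ** gamma_neg) * math.log(max(1.0 - p_m, eps))
```

Note how the two asymmetries interact: the margin removes very easy negatives entirely, while the large γ− further suppresses moderately easy negatives, leaving the gradient budget to positives and hard negatives.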

Emanuel Ben-Baruch, Tal Ridnik, Nadav Zamir, Asaf Noy, Itamar Friedman, Matan Protter, Lihi Zelnik-Manor • 2020

Related benchmarks

Task | Dataset | Result | Rank
Multi-Label Classification | PASCAL VOC 2007 (test) | mAP 95.8 | 125
Multi-Label Classification | NUS-WIDE (test) | mAP 65.2 | 112
Multi-label Scene Classification | UCMerced | mAP (macro) 88.88 | 105
Multi-label Scene Classification | AID-ML (test) | mAP (macro) 66.21 | 105
Multi-Label Classification | MS-COCO 2014 (test) | mAP 86.6 | 81
Multi-label image recognition | VOC 2007 (test) | mAP 95.8 | 61
Multi-label image recognition | MS-COCO 2014 (val) | mAP 88.4 | 51
Multi-Label Classification | MS-COCO (val) | mAP 86.6 | 47
Multi-Label Classification | UCMerced | mAP (macro) 88.51 | 35
Multi-Label Classification | COCO 2014 (test) | mAP 66.6 | 31

(Showing 10 of 40 rows)
