
Derivative Manipulation for General Example Weighting

About

Real-world large-scale datasets usually contain noisy labels and are imbalanced. We therefore propose derivative manipulation (DM), a novel and general example weighting approach for training robust deep models under these adverse conditions. DM has two main merits. First, loss functions and example weighting are both common techniques in the literature; DM reveals their connection (a loss function performs example weighting through its derivative) and can serve as a replacement for either. Second, although a loss function defines an example weighting scheme through its derivative, loss design is constrained by the requirement of differentiability. DM is more flexible: by modifying the derivative directly, it admits weighting schemes whose corresponding loss need not have an elementary closed form. Technically, DM defines an emphasis density function from a derivative magnitude function. DM is generic in that diverse weighting schemes can be derived from it. Extensive experiments on both vision and language tasks demonstrate DM's effectiveness.
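To make the mechanism concrete, here is a minimal numpy sketch of the idea: instead of back-propagating the raw cross-entropy derivative for each example, rescale it by an emphasis weight computed from the derivative's magnitude, normalised over the mini-batch so the weights form an emphasis density. The exponential weighting form and the `beta` parameter below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def dm_gradient(logits, labels, beta=2.0):
    """Sketch of derivative manipulation (DM).

    The standard cross-entropy derivative w.r.t. the logits is (p - y).
    DM replaces this with w_i * (p - y), where w_i is an emphasis weight
    derived from the example's derivative magnitude (1 - p_true) and
    normalised across the batch into an emphasis density.
    """
    n = logits.shape[0]
    p = softmax(logits)
    y = np.zeros_like(p)
    y[np.arange(n), labels] = 1.0
    grad = p - y                       # standard CE derivative per example
    p_true = p[np.arange(n), labels]
    mag = 1.0 - p_true                 # derivative magnitude per example
    w = np.exp(beta * mag)             # assumed exponential emphasis form
    w = w / w.sum() * n                # normalise into an emphasis density
    return w[:, None] * grad           # manipulated derivative
```

In a training loop this manipulated derivative would be fed to the backward pass in place of the usual loss gradient; no explicit loss function is ever written down, which is what allows non-elementary weighting schemes.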

Xinshao Wang, Elyor Kodirov, Yang Hua, Neil M. Robertson • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | Clothing1M (test) | Accuracy | 73.3 | 546
Image Classification | CIFAR-10 v1 (test) | Accuracy | 86 | 98
Image Classification | Clothing1M | Accuracy | 72.5 | 37
Video-to-Video Person Re-identification | MARS (test) | Top-1 Accuracy | 84.3 | 22
Image Classification | CIFAR-100 | Test Accuracy (Clean) | 70.1 | 17
Sentiment Classification | IMDB Label Noise r=0.0 (test) | Accuracy | 89.1 | 9
Sentiment Classification | IMDB Label Noise r=0.2 (test) | Accuracy | 88.7 | 9
Sentiment Classification | IMDB Label Noise r=0.4 (test) | Accuracy | 86.4 | 9
Sentiment Classification | IMDB Sample Imbalance 10:1 (test) | Accuracy | 80.6 | 9
Sentiment Classification | IMDB Sample Imbalance 50:1 (test) | Accuracy | 65 | 9

Other info

Code
