
mixup: Beyond Empirical Risk Minimization

About

Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
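The core idea described above — training on convex combinations of example pairs and their labels — can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' reference implementation; the function name `mixup_batch` and the default `alpha` are illustrative, with the mixing coefficient drawn from a Beta(α, α) distribution as in the paper.

```python
import numpy as np

def mixup_batch(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two batches of inputs and one-hot labels (mixup).

    x1, x2: input batches of identical shape.
    y1, y2: one-hot label batches of identical shape.
    alpha:  Beta distribution parameter controlling interpolation strength.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Mixing coefficient lam ~ Beta(alpha, alpha), shared by inputs and labels.
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y
```

In practice a single minibatch is typically mixed with a shuffled copy of itself, so no second data loader is needed; the same scalar `lam` is applied to both the inputs and the labels, which is what encourages the linear in-between behavior the abstract refers to.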

Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz • 2017

Related benchmarks

Task                              Dataset                   Result                  Rank
Image Classification              CIFAR-100 (test)          Accuracy 77.57          3518
Image Classification              CIFAR-10 (test)           Accuracy 95.6           3381
Image Classification              ImageNet-1k (val)         Top-1 Accuracy 79.9     1453
Image Classification              ImageNet (val)            Top-1 Accuracy 77.9     1206
Object Hallucination Evaluation   POPE                      --                      935
Image Classification              CIFAR-10 (test)           --                      906
Image Classification              ImageNet-1k (val)         Top-1 Accuracy 77.78    840
Object Detection                  PASCAL VOC 2007 (test)    mAP 73.9                821
Image Classification              CIFAR-100 (val)           Accuracy 81.64          661
Image Classification              CIFAR-100                 Top-1 Accuracy 76.35    622
Showing 10 of 487 rows
...
