
A Closer Look at Memorization in Deep Networks

About

We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that with appropriately tuned explicit regularization (e.g., dropout) we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that notions of effective capacity that are dataset-independent are unlikely to explain the generalization performance of deep networks trained with gradient-based methods, because the training data itself plays an important role in determining the degree of memorization.

Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien • 2017

Related benchmarks

Task                 | Dataset                    | Result                             | Rank
Image Classification | CIFAR-100 (val)            | Accuracy: 61                       | 661
Image Classification | ImageNet ILSVRC-2012 (val) | Top-1 Accuracy: 62.8               | 405
Image Classification | CIFAR-10 (val)             | Top-1 Accuracy: 78                 | 329
Image Classification | ImageNet 2012 (val)        | Top-1 Accuracy: 59                 | 202
Image Classification | CIFAR-10 v1 (test)         | Accuracy: 76                       | 98
Image Classification | WebVision 1.0 (val)        | Top-1 Accuracy: 66.6               | 59
Image Classification | CIFAR-10 (test)            | Accuracy (Sym. Noise Rate 0.2): 86 | 15
Image Classification | WebVision 2017 (val)       | Top-1 Accuracy: 66.6               | 7
