
An Empirical Study of Example Forgetting during Deep Neural Network Learning

About

Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks. Our goal is to understand whether a related phenomenon occurs when data does not undergo a clear distributional shift. We define a `forgetting event' to have occurred when an individual training example transitions from being classified correctly to incorrectly over the course of learning. Across several benchmark data sets, we find that: (i) certain examples are forgotten with high frequency, and some not at all; (ii) a data set's (un)forgettable examples generalize across neural architectures; and (iii) based on forgetting dynamics, a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance.

Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, Geoffrey J. Gordon • 2018
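The abstract's definition of a `forgetting event' can be sketched in a few lines: given a per-epoch record of whether an example was classified correctly, count the correct-to-incorrect transitions. This is a minimal illustration, not the paper's implementation; the boolean-history input format is an assumption for clarity.

```python
def count_forgetting_events(acc_history):
    """Count forgetting events for a single training example.

    acc_history: per-epoch booleans, True if the example was classified
    correctly at that epoch (an assumed input format for this sketch).
    A forgetting event is a transition from correct to incorrect.
    """
    return sum(
        1 for prev, curr in zip(acc_history, acc_history[1:])
        if prev and not curr
    )

# Example: learned, forgotten at epoch 2, relearned, forgotten again.
print(count_forgetting_events([True, True, False, True, False]))  # -> 2

# An "unforgettable" example: once learned, never forgotten.
print(count_forgetting_events([False, True, True, True]))  # -> 0
```

Examples with zero such events after first being learned are the paper's "unforgettable" examples; those with many events are the candidates the study finds hardest to retain.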

Related benchmarks

Task                         Dataset                 Metric           Result   Rank
Image Classification         CIFAR-100 (test)        Accuracy         77.38    3518
Image Classification         CIFAR-10 (test)         Accuracy         95.45    3381
Image Classification         ImageNet-1K 1.0 (val)   Top-1 Accuracy   78.7     1866
Graph Classification         MUTAG                   Accuracy         88.4     697
Image Classification         Fashion MNIST (test)    Accuracy         55       568
Image Classification         CIFAR-10                Accuracy         95.36    507
Graph Classification         ogbg-molpcba (test)     AP               27.9     206
Traffic Forecasting          PeMS08                  RMSE             28.08    166
Image Classification         TinyImageNet            Accuracy         15       108
Spatio-temporal Forecasting  PEMS08 (test)           MAPE             11.29    96

(Showing 10 of 27 rows)
