
Online Continual Learning with Maximally Interfered Retrieval

About

Continual learning, the setting where a learning agent is faced with a never-ending stream of data, remains a major challenge for modern machine learning systems. In particular, the online or "single pass through the data" setting has recently gained attention as a natural setting that is difficult to tackle. Methods based on replay, either generative or from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art on a number of standard benchmarks. These approaches typically rely on randomly selecting samples to replay from the memory or from a generative model, which is suboptimal. In this work, we consider a controlled sampling of memories for replay: we retrieve the samples that are most interfered with, i.e., those whose predictions will be most negatively impacted by the foreseen parameter update. We present a formulation of this sampling criterion for both the generative replay and the experience replay settings, producing consistent gains in performance and greatly reduced forgetting. We release an implementation of our method at https://github.com/optimass/Maximally_Interfered_Retrieval.

Rahaf Aljundi, Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Min Lin, Laurent Charlin, Tinne Tuytelaars • 2019
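The selection rule described in the abstract lends itself to a short sketch. Below is a minimal PyTorch illustration of the experience-replay variant, assuming a classifier trained with cross-entropy and a tensor-backed memory buffer; the function name, the explicit learning-rate argument, and the omission of the paper's candidate subsampling are simplifications of ours, not the authors' reference implementation (see the linked repository for that).

```python
import copy
import torch
import torch.nn.functional as F

def mir_retrieve(model, lr, x_new, y_new, mem_x, mem_y, k):
    """Sketch of maximally interfered retrieval for experience replay:
    pick the k memory samples whose loss would increase the most under
    the update that the incoming batch (x_new, y_new) is about to cause."""
    # 1) Loss of the memory samples under the current parameters.
    with torch.no_grad():
        pre_loss = F.cross_entropy(model(mem_x), mem_y, reduction="none")

    # 2) "Foreseen" parameter update: one virtual SGD step on the new batch,
    #    taken on a copy so the real model is left untouched.
    virtual_model = copy.deepcopy(model)
    loss_new = F.cross_entropy(virtual_model(x_new), y_new)
    grads = torch.autograd.grad(loss_new, list(virtual_model.parameters()))
    with torch.no_grad():
        for p, g in zip(virtual_model.parameters(), grads):
            p -= lr * g

    # 3) Loss of the same memory samples under the virtually updated parameters.
    with torch.no_grad():
        post_loss = F.cross_entropy(virtual_model(mem_x), mem_y, reduction="none")

    # 4) Interference score: increase in loss caused by the foreseen update;
    #    the most interfered samples are the ones retrieved for replay.
    scores = post_loss - pre_loss
    top_idx = scores.topk(min(k, scores.numel())).indices
    return mem_x[top_idx], mem_y[top_idx]
```

In a typical experience-replay loop, the retrieved samples would then be combined with the incoming batch for the actual gradient step; the generative-replay variant scores samples drawn from a generator instead of a stored buffer.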

Related benchmarks

Task                    | Dataset                          | Metric   | Score | Rank
Mathematical Reasoning  | GSM8K                            | Accuracy | 72.3  | 1362
Mathematical Reasoning  | MATH                             | Accuracy | 30.4  | 882
Question Answering      | SciQ                             | Accuracy | 96.2  | 283
Reading Comprehension   | BoolQ                            | Accuracy | 89    | 279
Question Answering      | ARC                              | Accuracy | 63.7  | 230
Continual Learning      | CIFAR100 Split                   | --       | --    | 85
Continual Learning      | Split CIFAR10 32x32 (test)       | Accuracy | 47.4  | 66
Continual Learning      | CIFAR100 Split 32x32 (test)      | Accuracy | 21.6  | 66
Continual Learning      | MiniImageNet Split 84x84 (test)  | Accuracy | 17.2  | 66
Text Classification     | AGNews                           | Accuracy | 79.4  | 61

Showing 10 of 63 rows.
