
Overcoming catastrophic forgetting in neural networks

About

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate that our approach is scalable and effective by solving a set of classification tasks based on the MNIST handwritten digit dataset and by learning several Atari 2600 games sequentially.

James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, Raia Hadsell • 2016
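In code terms, the "selective slowing" the abstract describes is typically realized as a quadratic penalty on parameter changes, weighted by a per-parameter importance estimate (a diagonal Fisher information approximation). The sketch below is an illustrative PyTorch rendering of that idea under those assumptions, not the authors' implementation; the names (fisher_diagonal, ewc_penalty), the hyperparameter lam, and the use of the empirical Fisher are all hypothetical choices for illustration.

import torch

def fisher_diagonal(model, data_loader, loss_fn):
    """Estimate per-parameter importance after finishing task A.

    Uses the empirical Fisher approximation: the mean of squared
    gradients of the task-A loss, one scalar per parameter.
    """
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    # Average over batches (a rough normalization for a sketch).
    return {n: f / len(data_loader) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Quadratic penalty pulling important weights toward their task-A values.

    Parameters with large Fisher values are effectively 'slowed down':
    moving them away from old_params is expensive, so learning on the
    new task proceeds mostly through unimportant weights.
    """
    loss = 0.0
    for n, p in model.named_parameters():
        loss = loss + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * loss

# While training on task B (snapshot params_a and fisher_a after task A):
#   total_loss = task_b_loss + ewc_penalty(model, fisher_a, params_a)

The hyperparameter lam trades off plasticity on the new task against retention of the old one; the paper's experiments tune an analogous coefficient per setting.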

Related benchmarks

Task                              Dataset             Result             Rank
Language Understanding            MMLU                Accuracy: 14.27    756
Mathematical Reasoning            MATH                Accuracy: 3.68     643
Reasoning                         BBH                 Accuracy: 25.02    507
Mathematical Reasoning            SVAMP               Accuracy: 41.5     368
Physical Commonsense Reasoning    PIQA                Accuracy: 51.85    329
Image Classification              CIFAR-100           Accuracy: 59.6     302
Generalized Zero-Shot Learning    CUB                 --                 250
Image Classification              DomainNet (test)    --                 209
Semantic Segmentation             PASCAL VOC 2012     mIoU: 26.3         187
Generalized Zero-Shot Learning    SUN                 --                 184

(Showing 10 of 467 rows.)
