
Gradient Projection Memory for Continual Learning

About

The ability to learn continually without forgetting past tasks is a desired attribute for artificial learning systems. Existing approaches to enable such learning in artificial neural networks usually rely on network growth, importance-based weight updates, or replay of old data from memory. In contrast, we propose a novel approach in which a neural network learns new tasks by taking gradient steps orthogonal to the gradient subspaces deemed important for past tasks. We find the bases of these subspaces by analyzing network representations (activations) after learning each task with Singular Value Decomposition (SVD) in a single-shot manner, and store them in memory as Gradient Projection Memory (GPM). With qualitative and quantitative analyses, we show that such orthogonal gradient descent induces minimal to no interference with past tasks, thereby mitigating forgetting. We evaluate our algorithm on diverse image classification datasets with short and long task sequences and report better or on-par performance compared to state-of-the-art approaches.
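The core mechanism described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the abstract, not the authors' code: after each task, the leading left singular vectors of the layer activations are added to a memory of basis vectors, and subsequent gradients are projected to be orthogonal to the span of that memory. The function names and the energy-threshold criterion are assumptions for this sketch.

```python
import numpy as np

def update_memory(activations, memory, energy_threshold=0.99):
    """Add SVD bases of `activations` (features x samples) to `memory`.

    `memory` is a (features x k) matrix with orthonormal columns, or None.
    The threshold keeps the leading singular vectors that explain most
    of the activation energy (an illustrative criterion).
    """
    if memory is not None:
        # Remove components already captured by the existing bases.
        activations = activations - memory @ (memory.T @ activations)
    U, S, _ = np.linalg.svd(activations, full_matrices=False)
    energy = np.cumsum(S**2) / np.sum(S**2)
    k = int(np.searchsorted(energy, energy_threshold)) + 1
    new_bases = U[:, :k]
    return new_bases if memory is None else np.hstack([memory, new_bases])

def project_gradient(grad, memory):
    """Project `grad` onto the orthogonal complement of the memory subspace."""
    if memory is None:
        return grad
    return grad - memory @ (memory.T @ grad)
```

Because the stored bases are orthonormal, the projected gradient has zero component along every remembered direction, so an update step along it leaves the input-output mapping for past tasks (to first order) unchanged.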

Gobinda Saha, Isha Garg, Kaushik Roy • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Language Understanding | MMLU | Accuracy | 15.45 | 756 |
| Reasoning | BBH | -- | -- | 507 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 53.48 | 329 |
| Continual Learning | CIFAR-100 (10-split) | ACC | 72.48 | 42 |
| Continual Image Classification | MiniImageNet Split | Accuracy | 66.26 | 29 |
| Continual Learning | OL-CIFAR100 (Tasks 0-6) | Accuracy (%) | 71.62 | 23 |
| Continual Learning | MNIST permuted | AT | 93.91 | 19 |
| Continual Image Classification | CIFAR100 Split | Accuracy | 72.06 | 17 |
| Continual Learning | 5-dataset | Accuracy | 91.22 | 16 |
| Lifelong Learning | Split miniImageNet (test) | Accuracy | 60.41 | 15 |

Showing 10 of 23 rows.
