
Continual Learning with Node-Importance based Adaptive Group Sparse Regularization

About

We propose a novel regularization-based continual learning method, dubbed Adaptive Group Sparsity based Continual Learning (AGS-CL), which uses two group sparsity-based penalties. Our method selectively applies the two penalties when learning each node based on its importance, which is adaptively updated after learning each new task. By using proximal gradient descent for learning, exact sparsity and freezing of the model are guaranteed, so the learner can explicitly control the model capacity as learning continues. Furthermore, as a critical detail, we re-initialize the weights associated with unimportant nodes after learning each task, in order to prevent the negative transfer that causes catastrophic forgetting and to facilitate efficient learning of new tasks. Through extensive experiments, we show that AGS-CL uses much less additional memory for storing the regularization parameters and significantly outperforms several state-of-the-art baselines on representative continual learning benchmarks for both supervised and reinforcement learning tasks.
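The abstract notes that proximal gradient descent guarantees exact sparsity at the node level. A minimal sketch of the key ingredient, the proximal operator of a group-L2 (group lasso) penalty applied to one node's weight group, is shown below; the function name and toy values are illustrative, not from the paper.

```python
import numpy as np

def group_prox(weights, lam, lr):
    """Proximal operator for a group-L2 penalty on one node's weight group.

    Applied after an ordinary gradient step, it shrinks the whole group
    toward zero and sets it exactly to zero when the group norm falls
    below lr * lam -- this is what yields exact node-level sparsity,
    rather than merely small weights.
    """
    norm = np.linalg.norm(weights)
    if norm == 0.0:
        return weights
    scale = max(0.0, 1.0 - lr * lam / norm)
    return scale * weights

# Small group: norm ~0.037 < lr*lam = 0.1, so the node is zeroed exactly.
w_small = group_prox(np.array([0.03, -0.02, 0.01]), lam=1.0, lr=0.1)

# Large group: norm 5.0 > 0.1, so the group is only shrunk, not zeroed.
w_large = group_prox(np.array([3.0, 4.0]), lam=1.0, lr=0.1)
```

In a full training loop this operator would be applied group-by-group after each gradient update; frozen (important) nodes would instead be penalized toward their previous values.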

Sangwon Jung, Hongjoon Ahn, Sungmin Cha, Taesup Moon • 2020

Related benchmarks

Task                 Dataset                                   Result                      Rank
Continual Learning   Split CIFAR-100 (20 tasks)                Mean Test Accuracy: 27.6    26
Continual Learning   Sequential Omniglot (S-OMNIGLOT) (test)   Accuracy: 82.8              12
Continual Learning   Split CIFAR-100 (5 tasks)                 Mean Test Accuracy: 64.1    7
