
Continual Learning with Scaled Gradient Projection

About

In neural networks, continual learning causes gradient interference among sequential tasks, leading to catastrophic forgetting of old tasks while learning new ones. Recent methods address this issue by storing the important gradient spaces for old tasks and updating the model orthogonally to them during new tasks. However, such restrictive orthogonal gradient updates hamper learning on the new tasks, resulting in sub-optimal performance. To improve new learning while minimizing forgetting, in this paper we propose a Scaled Gradient Projection (SGP) method, where we combine orthogonal gradient projections with scaled gradient steps along the important gradient spaces for the past tasks. The degree of gradient scaling along these spaces depends on the importance of the bases spanning them. We propose an efficient method for computing and accumulating the importance of these bases using the singular value decomposition of the input representations for each task. We conduct extensive experiments ranging from continual image classification to reinforcement learning tasks and report better performance with less training overhead than state-of-the-art approaches.
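The two ideas in the abstract (SVD-derived important bases with importance weights, and gradient steps scaled along those bases) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the energy threshold, the `s[:k] / s[0]` importance normalization, and the `(1 - importance)` scaling rule are assumptions chosen for clarity.

```python
import numpy as np

def basis_importance(reps, energy=0.99):
    """Hedged sketch: extract important bases and importance weights from
    the SVD of a layer's input representations (columns of `reps`).

    reps: matrix of shape (d, n), one input representation per column.
    """
    u, s, _ = np.linalg.svd(reps, full_matrices=False)
    # Keep the leading left-singular vectors that capture `energy` of the
    # total variance (illustrative criterion).
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(cum, energy)) + 1
    basis = u[:, :k]                  # orthonormal columns, shape (d, k)
    importance = s[:k] / s[0]         # in (0, 1]; an assumed normalization
    return basis, importance

def sgp_step(grad, basis, importance, lr=0.1):
    """Scaled Gradient Projection step (sketch): the component of the
    gradient orthogonal to the stored spaces updates freely, while the
    component along each important basis is scaled down by its importance.
    """
    coeffs = basis.T @ grad           # coordinates inside the stored space
    ortho = grad - basis @ coeffs     # orthogonal part, as in plain GPM
    scaled = basis @ ((1.0 - importance) * coeffs)  # partially allowed part
    return lr * (ortho + scaled)
```

With `importance` set to all ones this reduces to a fully orthogonal (GPM-style) update; with all zeros it reduces to plain gradient descent, which is the interpolation the scaling is meant to provide.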

Gobinda Saha, Kaushik Roy• 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Depth Estimation | NYU v2 (test) | – | 432 |
| Semantic Segmentation | NYU v2 (test) | mIoU: 21.34 | 282 |
| Surface Normal Estimation | NYU v2 (test) | Mean Angle Distance (MAD): 28.27 | 224 |
| Depth Estimation | Cityscapes | Absolute Error: 0.0202 | 53 |
| Continual Image Classification | MiniImageNet Split | Accuracy: 68.5 | 42 |
| Depth Estimation | Cityscapes | Absolute Error: 0.0157 | 34 |
| Continual Image Classification | CIFAR100 Split | Accuracy: 76.05 | 30 |
| Semantic Segmentation | Cityscapes | mIoU: 65.53 | 26 |
| Continual Learning | OL-CIFAR100 (Tasks 0-6) | Accuracy (%): 75 | 23 |
| Continual Image Classification | 5-Datasets | Accuracy (%): 90.42 | 23 |
Showing 10 of 28 rows
