
Forget Less, Retain More: A Lightweight Regularizer for Rehearsal-Based Continual Learning

About

Deep neural networks suffer from catastrophic forgetting, where performance on previous tasks degrades after training on a new task. This issue arises from the model's tendency to overwrite previously acquired knowledge with new information. We present a novel approach to address this challenge, focusing on the intersection of memory-based methods and regularization approaches. We formulate a regularization strategy, termed the Information Maximization (IM) regularizer, for memory-based continual learning methods. It is based exclusively on the expected label distribution, making it class-agnostic. As a consequence, the IM regularizer can be directly integrated into various rehearsal-based continual learning methods, reducing forgetting and promoting faster convergence. Our empirical validation shows that, across datasets and regardless of the number of tasks, the proposed regularization strategy consistently improves baseline performance at the cost of minimal computational overhead. The lightweight nature of IM makes it a practical and scalable solution, applicable to real-world continual learning scenarios where efficiency is paramount. Finally, we demonstrate the data-agnostic nature of our regularizer by applying it to video data, which presents additional challenges due to its temporal structure and higher memory requirements. Despite the significant domain gap, our experiments show that the IM regularizer also improves the performance of video continual learning methods.
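The abstract describes a regularizer built only from the expected (marginal) label distribution. A common way to realize this idea is to penalize low entropy in the batch-averaged predicted class distribution, which is class-agnostic because it never touches ground-truth labels. The sketch below is a minimal NumPy illustration of that generic form; the paper's exact loss, weighting, and integration into rehearsal-based training may differ.

```python
import numpy as np

def im_regularizer(logits: np.ndarray) -> float:
    """Hypothetical information-maximization penalty.

    Encourages the batch-averaged predicted label distribution to stay
    close to uniform (maximum entropy). Minimizing the returned value
    maximizes the entropy of the marginal prediction.
    """
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Expected label distribution over the batch (class-agnostic:
    # no ground-truth labels are used).
    p_bar = probs.mean(axis=0)
    # Negative entropy of the marginal distribution.
    return float((p_bar * np.log(p_bar + 1e-12)).sum())

# Example: combine with a task loss during rehearsal-based training
# (lambda is a tunable weight, an assumption for illustration).
# total_loss = task_loss + 0.1 * im_regularizer(batch_logits)
```

With uniform logits the penalty reaches its minimum, -log(C) for C classes; predictions collapsed onto a single class drive it toward 0, so minimizing it pushes the marginal prediction back toward uniform.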

Lama Alssum, Hasan Abed Al Kader Hammoud, Motasem Alfarra, Juan C. Leon Alcazar, Bernard Ghanem • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Action Recognition | UCF101 | Top-1 Acc | 83.53 | 153 |
| Continual Learning | CIFAR100 Split | Average Per-Task Accuracy | 55.1 | 85 |
| Continual Learning | ImageNet Split Tiny | Avg Accuracy | 38.7 | 57 |
| Continual Learning | Tiny ImageNet Split | Forgetting Rate | 26 | 57 |
| Action Recognition | ActivityNet | Accuracy | 55.2 | 22 |
| Incremental Learning | UCF101 | Forgetting Rate | 10.54 | 6 |
| Incremental Learning | ActivityNet | Forgetting Rate | 15.48 | 6 |
