
Memory-efficient Continual Learning with Prototypical Exemplar Condensation

About

Rehearsal-based continual learning (CL) mitigates catastrophic forgetting by maintaining a subset of samples from previous tasks for replay. Existing studies primarily focus on optimizing memory storage through coreset selection strategies. While these methods are effective, they typically require storing a substantial number of samples per class (SPC), often exceeding 20, to maintain satisfactory performance. In this work, we propose to further compress the memory footprint by synthesizing and storing prototypical exemplars, which can form representative prototypes when passed through the feature extractor. Owing to their representative nature, these exemplars enable the model to retain previous knowledge using only a small number of samples while preserving privacy. Moreover, we introduce a perturbation-based augmentation mechanism that generates synthetic variants of previous data during training, thereby enhancing CL performance. Extensive evaluations on widely used benchmark datasets and settings demonstrate that the proposed algorithm achieves superior performance compared to existing baselines, particularly in scenarios involving large-scale datasets and a high number of tasks.
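The core idea — optimizing a synthetic input so that its features match a class prototype, then perturbing the stored exemplar at replay time — can be sketched as follows. This is an illustrative toy, not the paper's method: the feature extractor is a fixed random linear map standing in for a trained network, and the names `condense`, `features`, and the perturbation scale are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_feat = 32, 16

# Toy frozen feature extractor: a fixed random linear map.
# (The paper would use a trained network; this stands in for it.)
W = rng.normal(size=(d_feat, d_in)) / np.sqrt(d_in)

def features(x):
    return W @ x

# "Real" samples of one class -> their prototype in feature space.
real = rng.normal(loc=1.0, size=(100, d_in))
proto = features(real.mean(axis=0))

def condense(proto, steps=500, lr=0.1):
    """Optimize one synthetic exemplar s so that features(s) ~ proto."""
    s = rng.normal(size=d_in)
    for _ in range(steps):
        # Gradient of ||W s - proto||^2 with respect to s.
        grad = 2.0 * W.T @ (features(s) - proto)
        s -= lr * grad
    return s

s = condense(proto)
err = np.linalg.norm(features(s) - proto)  # small after optimization

# Perturbation-based augmentation: generate synthetic variants of the
# stored exemplar on the fly during replay.
variants = s + 0.1 * rng.normal(size=(4, d_in))
```

Only the condensed exemplar `s` needs to be stored per class, which is where the memory saving over a 20+ samples-per-class coreset would come from; the perturbed `variants` are regenerated each time the old task is rehearsed.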

Minh-Duong Nguyen, Thien-Thanh Dao, Le-Tuan Nguyen, Dung D. Le, Kok-Seng Wong• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Incremental Learning | CIFAR-100 (T=50) | Last Accuracy | 88.49 | 19 |
| Online Continual Learning | S-CIFAR-100 (T=10) | Last Accuracy (AT) | 37.51 | 15 |
| Online Continual Learning | S-CIFAR-100 (T=50) | Last Accuracy (AT) | 24.11 | 15 |
| Online Continual Learning | S-TinyImageNet (T=20) | Last Accuracy (AT) | 27.11 | 15 |
| Online Continual Learning | S-ImageNet-1K (T=100) | Last Accuracy (AT) | 14.22 | 15 |
| Task-Incremental Learning | S-CIFAR-100 (T=10) | Average Accuracy (A-bar) | 58.46 | 15 |
| Task-Incremental Learning | S-TinyImageNet (T=20) | Average Accuracy (A-bar) | 34.46 | 15 |
| Task-Incremental Learning | S-ImageNet-1K (T=100) | Average Accuracy (A-bar) | 22.18 | 15 |
| Continual Learning | CIFAR-100 | Training Time (Hours) | 6.58 | 13 |
| Task-Incremental Learning | S-ImageNet-1K (T=100) | Training Time (Hours) | 252.7 | 13 |

Showing 10 of 12 rows.
