
IDER: IDempotent Experience Replay for Reliable Continual Learning

About

Catastrophic forgetting, the tendency of neural networks to forget previously learned knowledge when learning new tasks, has been a major challenge in continual learning (CL). To tackle this challenge, numerous CL methods have been proposed and shown to reduce forgetting. Furthermore, CL models deployed in mission-critical settings benefit from uncertainty awareness: well-calibrated predictions let a model reliably assess its own confidence. However, existing uncertainty-aware continual learning methods suffer from high computational overhead and incompatibility with mainstream replay methods. To address this, we propose Idempotent Experience Replay (IDER), a novel approach based on the idempotent property, whereby repeated applications of a function yield the same output. Specifically, we first adapt the training loss to make the model idempotent on the current data stream. In addition, we introduce an idempotence distillation loss: we feed the output of the current model back into the old checkpoint and minimize the distance between this reprocessed output and the current model's original output. This yields a simple and effective new baseline for building reliable continual learners, which can be seamlessly integrated with other CL approaches. Extensive experiments on different CL benchmarks demonstrate that IDER consistently improves prediction reliability while simultaneously boosting accuracy and reducing forgetting. Our results suggest the potential of idempotence as a promising principle for deploying efficient and trustworthy continual learning systems in real-world applications. Our code is available at https://github.com/YutingLi0606/Idempotent-Continual-Learning.
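The two losses described in the abstract can be sketched as below. This is an illustrative reconstruction from the abstract only, not the authors' implementation: the function names, the choice of MSE as the distance, and the decisions about which tensors to detach are all assumptions, and the sketch assumes a model whose output lives in the same space as its input so it can be fed back in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def idempotence_loss(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    # Idempotence on the current stream: applying the model twice
    # should match applying it once, i.e. f(f(x)) ~ f(x).
    y = model(x)
    y2 = model(y)
    # Detach the first pass so the gradient pushes f(f(x)) toward f(x)
    # (a common convention; the paper's exact choice is not stated here).
    return F.mse_loss(y2, y.detach())


def idempotence_distillation_loss(
    model: nn.Module, old_model: nn.Module, x: torch.Tensor
) -> torch.Tensor:
    # Feed the current model's output back into the old checkpoint,
    # then pull the current output toward this reprocessed output.
    y = model(x)
    with torch.no_grad():  # the old checkpoint is frozen
        y_old = old_model(y)
    return F.mse_loss(y, y_old)
```

In a training loop these terms would presumably be added, with assumed weights, to the usual task loss, e.g. `loss = task_loss + a * idempotence_loss(model, x) + b * idempotence_distillation_loss(model, old_model, x)`.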

Zhanwang Liu, Yuting Li, Haoyuan Gao, Yexin Li, Linghe Kong, Lichao Sun, Weiran Huang• 2026

Related benchmarks

Task               | Dataset                                       | Metric                       | Result | Rank
Continual Learning | CIFAR-100 (test)                              | Mean Accuracy                | 57.74  | 62
Continual Learning | CIFAR-10 (test)                               | Final Average Accuracy (FAA) | 76.65  | 31
Continual Learning | CIFAR-10 Split 5 sequential tasks (test)      | Final Forgetting (FF)        | 11.93  | 24
Continual Learning | CIFAR-100 Split 10 sequential tasks (test)    | Final Forgetting (FF)        | 12.52  | 24
Continual Learning | TinyImageNet Split 10 sequential tasks (test) | Final Forgetting (FF)        | 16.68  | 24
Continual Learning | GCIL-CIFAR-100 Uniform                        | Final Average Accuracy (FAA) | 40.54  | 17
Continual Learning | GCIL-CIFAR-100 Longtail                       | Final Average Accuracy (FAA) | 36.75  | 17
Continual Learning | CIFAR-10                                      | Expected Calibration Error (ECE) | 8.63 | 15
Continual Learning | CIFAR-100                                     | Expected Calibration Error (ECE) | 8.29 | 15
Continual Learning | Tiny-ImageNet                                 | Expected Calibration Error (ECE) | 6.35 | 14
