
Do Your Best and Get Enough Rest for Continual Learning

About

According to Ebbinghaus' forgetting curve theory, memory retention improves when we learn extensively and then rest adequately: to retain new knowledge effectively, we must study it thoroughly and allow the brain sufficient rest before revisiting it. The key takeaway is that learning extensive material in one session should be followed by a sufficiently long interval before the same material is studied again. This property of human long-term memory can be exploited for continual learning in neural networks, whose central problem is retaining new knowledge over long periods without catastrophic forgetting. Based on Ebbinghaus' theory, we therefore introduce the view-batch model, which adjusts the learning schedule to optimize the recall interval between retraining on the same samples. The view-batch model gives the network enough rest while it learns extensive knowledge from each sample, enforcing a recall interval of sufficient length. To this end, we present two approaches: 1) a replay method that guarantees the optimal recall interval, and 2) a self-supervised learning method that acquires extensive knowledge from a single training sample at a time. We empirically show that these approaches align with the forgetting curve theory and can enhance long-term memory. Our experiments also demonstrate that our method significantly improves many state-of-the-art continual learning methods across various protocols and scenarios. We open-source this project at https://github.com/hankyul2/ViewBatchModel.

Hankyul Kang, Gregor Seifer, Donghyun Lee, Jongbin Ryu • 2025
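The abstract only sketches the two mechanisms, so a toy illustration may help. Below is a minimal sketch in plain Python, under stated assumptions: `IntervalReplaySampler`, `recall_interval`, `view_batch`, `num_views`, and `augment` are hypothetical names invented here for illustration, not the API of the authors' repository (see https://github.com/hankyul2/ViewBatchModel for the actual implementation). It shows (1) a replay sampler that guarantees a minimum recall interval between re-draws of the same exemplar, and (2) a view-batch expansion that extracts several augmented views from each sample per recall.

```python
import random
from collections import deque


class IntervalReplaySampler:
    """Toy replay sampler (hypothetical, for illustration only).

    An exemplar drawn from the buffer is put to "rest" and cannot be
    drawn again until at least `recall_interval` further draws have
    occurred, mimicking a guaranteed rest period between recalls.
    """

    def __init__(self, buffer, recall_interval, seed=0):
        assert recall_interval < len(buffer), "buffer must outlast the rest period"
        self.recall_interval = recall_interval
        self.available = list(buffer)   # exemplars eligible for replay
        self.resting = deque()          # (draws_remaining, exemplar)
        self.rng = random.Random(seed)

    def draw(self):
        # Age the resting exemplars; re-admit those whose rest is over.
        for _ in range(len(self.resting)):
            steps, ex = self.resting.popleft()
            if steps <= 1:
                self.available.append(ex)
            else:
                self.resting.append((steps - 1, ex))
        # Sample uniformly among eligible exemplars, then rest the pick.
        ex = self.available.pop(self.rng.randrange(len(self.available)))
        self.resting.append((self.recall_interval, ex))
        return ex


def view_batch(samples, augment, num_views):
    """Expand each sample into `num_views` augmented views, so each
    recall of a sample teaches the network more before its next rest."""
    return [augment(x) for x in samples for _ in range(num_views)]


if __name__ == "__main__":
    buffer = [f"exemplar_{i}" for i in range(10)]
    sampler = IntervalReplaySampler(buffer, recall_interval=4)
    draws = [sampler.draw() for _ in range(50)]

    # Any two recalls of the same exemplar are >= 4 draws apart.
    for ex in set(draws):
        where = [i for i, d in enumerate(draws) if d == ex]
        assert all(b - a >= 4 for a, b in zip(where, where[1:]))

    # A view-batch of 2 views per drawn sample (identity "augmentation").
    print(view_batch(draws[:3], augment=lambda x: x, num_views=2))
```

In an actual training loop the second piece would pair the views through a self-supervised objective; the point of the sketch is only that enlarging per-sample views and spacing out recalls are two knobs on the same schedule.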

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Class-incremental learning | CIFAR-100 10T | Avg Accuracy (A_T) | 78.02 | 35 |
| Class-incremental learning | CIFAR-100 T=20 (test) | Final Accuracy | 67.16 | 25 |
| Domain-incremental learning | DomainNet (test) | Average Accuracy | 42.16 | 25 |
| Continual Learning | S-CIFAR-10 (last) | TIL Last Top-1 Acc | 0.9707 | 24 |
| Continual Learning | S-Tiny-ImageNet (last) | TIL Last Top-1 Acc | 68.91 | 24 |
| Class-incremental learning | S-CIFAR-100 10 Step | Avg Top-1 Acc | 78.12 | 19 |
| Class-incremental learning | CIFAR100-B0 5 steps (non-rehearsal) | Average Metric | 79.23 | 18 |
| Class-incremental learning | CUB200 (20T) | Last Accuracy | 35.12 | 15 |
| Class-incremental learning | Path16 Order I 1.0 (train test) | Last Accuracy | 66.25 | 15 |
| Class-incremental learning | Path16 Order II 1.0 (train test) | Last Accuracy | 65.78 | 15 |

Showing 10 of 17 rows.

Other info

Code

https://github.com/hankyul2/ViewBatchModel
