
Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation

About

To accommodate real-world dynamics, artificial intelligence systems need to cope with sequentially arriving content in an online manner. Beyond regular Continual Learning (CL), which attempts to address catastrophic forgetting with offline training of each task, Online Continual Learning (OCL) is a more challenging yet realistic setting that performs CL on a one-pass data stream. Current OCL methods primarily rely on memory replay of old training samples. However, a notable gap from CL to OCL stems from the additional overfitting-underfitting dilemma associated with the use of rehearsal buffers: the inadequate learning of new training samples (underfitting) and the repeated learning of a few old training samples (overfitting). To this end, we introduce a novel approach, Multi-level Online Sequential Experts (MOSE), which cultivates the model as stacked sub-experts, integrating multi-level supervision and reverse self-distillation. Supervision signals across multiple stages facilitate appropriate convergence on the new task, while gathering the various strengths of the experts via knowledge distillation mitigates the performance decline on old tasks. MOSE demonstrates remarkable efficacy in learning new samples and preserving past knowledge through multi-level experts, thereby significantly advancing OCL performance over state-of-the-art baselines (e.g., up to 7.3% on Split CIFAR-100 and 6.1% on Split Tiny-ImageNet).
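To make the two ingredients concrete, here is a minimal, hypothetical PyTorch sketch of the general idea described above: a backbone split into stages, each with its own auxiliary classifier ("expert"), supervised at every level, plus a distillation term that transfers the shallower experts' soft predictions into the final expert. All names (`StageExpertNet`, `multi_level_loss`, `reverse_self_distillation_loss`) and the exact loss weighting are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StageExpertNet(nn.Module):
    """Toy backbone split into stages, each with an auxiliary classifier."""

    def __init__(self, in_dim=32, hidden=64, num_classes=10, num_stages=3):
        super().__init__()
        self.stages = nn.ModuleList()
        dim = in_dim
        for _ in range(num_stages):
            self.stages.append(nn.Sequential(nn.Linear(dim, hidden), nn.ReLU()))
            dim = hidden
        # One classification head ("expert") per stage.
        self.heads = nn.ModuleList(
            nn.Linear(hidden, num_classes) for _ in range(num_stages)
        )

    def forward(self, x):
        logits = []
        h = x
        for stage, head in zip(self.stages, self.heads):
            h = stage(h)
            logits.append(head(h))
        return logits  # one prediction per stage, shallow to deep


def multi_level_loss(logits_list, targets):
    """Cross-entropy supervision applied at every stage's expert."""
    return sum(F.cross_entropy(logits, targets) for logits in logits_list)


def reverse_self_distillation_loss(logits_list, T=2.0):
    """Distill the shallower experts' soft predictions into the final
    expert -- the 'reverse' of the usual deep-to-shallow self-distillation.
    Teachers are detached so gradients only update the final expert."""
    final = logits_list[-1]
    loss = final.new_zeros(())
    for shallow in logits_list[:-1]:
        teacher = F.softmax(shallow.detach() / T, dim=1)
        student = F.log_softmax(final / T, dim=1)
        loss = loss + F.kl_div(student, teacher, reduction="batchmean") * T * T
    return loss
```

A training step would then combine both terms, e.g. `loss = multi_level_loss(net(x), y) + reverse_self_distillation_loss(net(x))`, on a mixed batch of new-stream and replay-buffer samples.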

HongWei Yan, Liyuan Wang, Kaisheng Ma, Yi Zhong • 2024

Related benchmarks

Task                       | Dataset                              | Metric             | Result | Rank
---------------------------|--------------------------------------|--------------------|--------|-----
Continual Learning         | CIFAR100 Split                       | -                  | -      | 85
Continual Learning         | Split CIFAR-100 10 tasks             | Accuracy           | 55.6   | 60
Continual Learning         | Tiny-ImageNet Split 100 tasks (test) | AF (%)             | 11.5   | 60
Continual Learning         | Split CIFAR-100 (10 tasks) (test)    | Accuracy           | 34.7   | 60
Online Continual Learning  | CIFAR-100 (test)                     | Accuracy           | 55.62  | 42
Online Continual Learning  | Tiny-ImageNet                        | Average Forgetting | 13.94  | 42
Online Continual Learning  | Tiny ImageNet (test)                 | Avg Accuracy       | 38.71  | 42
Online Continual Learning  | CIFAR10 (test)                       | Accuracy           | 72.18  | 28
Online Continual Learning  | CIFAR10                              | Average Forgetting | 19.24  | 28
