
Dealing with Cross-Task Class Discrimination in Online Continual Learning

About

Existing continual learning (CL) research regards catastrophic forgetting (CF) as almost the only challenge. This paper argues for another challenge in class-incremental learning (CIL), which we call cross-task class discrimination (CTCD), i.e., how to establish decision boundaries between the classes of the new task and those of old tasks with no (or limited) access to the old task data. CTCD is implicitly and partially dealt with by replay-based methods. A replay method saves a small amount of data (replay data) from previous tasks. When a batch of current-task data arrives, the system jointly trains on the new data and some sampled replay data. Because the amount of saved data is small, the replay data enables the system to learn the decision boundaries between the new and old classes only partially. However, this paper argues that the replay approach also suffers from a dynamic training bias, which reduces the effectiveness of the replay data in solving the CTCD problem. A novel optimization objective with a gradient-based adaptive method is proposed to deal with the problem dynamically during the online CL process. Experimental results show that the new method achieves much better results in online CL.
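To make the replay mechanism described above concrete, here is a minimal, illustrative sketch of a replay buffer with reservoir sampling and a joint-batch construction step. The class and function names (`ReplayBuffer`, `replay_training_batch`) are hypothetical and not from the paper; this shows only the generic replay scheme the abstract describes, not the paper's proposed gradient-based adaptive method.

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past-task examples, filled by reservoir sampling
    so that every example seen so far has an equal chance of being kept."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total number of examples offered so far

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a random slot with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def replay_training_batch(new_batch, buffer, replay_size):
    """Build the joint batch used for one online training step: current-task
    data plus sampled replay data, so the loss also covers the decision
    boundaries between new and old classes (the CTCD problem)."""
    joint = list(new_batch) + buffer.sample(replay_size)
    for example in new_batch:
        buffer.add(example)
    return joint
```

Because the buffer is much smaller than the full history of old-task data, the sampled replay portion of each joint batch can only partially represent the old classes, which is the limitation the paper's dynamic bias analysis targets.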

Yiduo Guo, Bing Liu, Dongyan Zhao • 2023

Related benchmarks

Task                  Dataset                               Metric    Result  Rank
Image Classification  CIFAR-100 (test)                      -         -       3518
Image Classification  CIFAR-10 (test)                       -         -       3381
Image Classification  CIFAR-10                              -         -       507
Image Classification  Tiny ImageNet (test)                  Accuracy  20.46   265
Image Classification  ImageNet-100 (test)                   -         -       109
Image Classification  TinyImageNet                          -         -       108
Continual Learning    CIFAR100 Split                        -         -       85
Image Classification  ImageNet-100                          Accuracy  41.03   84
Continual Learning    Split CIFAR-100 10 tasks              Accuracy  49.7    60
Continual Learning    Tiny-ImageNet Split 100 tasks (test)  AF (%)    16.9    60

(Showing 10 of 25 rows)

Other info

Code
