
PCR: Proxy-based Contrastive Replay for Online Class-Incremental Continual Learning

About

Online class-incremental continual learning is a specific setting of continual learning. It aims to continuously learn new classes from a data stream in which each sample is seen only once, and it suffers from the catastrophic forgetting issue, i.e., forgetting historical knowledge of old classes. Existing replay-based methods effectively alleviate this issue by saving and replaying part of the old data in either a proxy-based or a contrastive-based manner. Although both replay manners are effective, the former inclines toward new classes due to the class-imbalance issue, while the latter is unstable and hard to converge because of the limited number of samples. In this paper, we conduct a comprehensive analysis of these two replay manners and find that they are complementary. Inspired by this finding, we propose a novel replay-based method called proxy-based contrastive replay (PCR). The key operation is to replace the contrastive samples of anchors with the corresponding proxies in the contrastive-based manner. It alleviates catastrophic forgetting by effectively addressing the imbalance issue, while also maintaining faster convergence of the model. We conduct extensive experiments on three real-world benchmark datasets, and empirical results consistently demonstrate the superiority of PCR over various state-of-the-art methods.
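The key operation described above, replacing an anchor's contrastive samples with class proxies, can be illustrated with a short sketch. This is a minimal, hedged interpretation of the loss (not the authors' released code): anchors are compared against the learnable proxies (classifier weights) of only the classes present in the current batch; the function name `pcr_loss` and the temperature value are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pcr_loss(features, labels, proxies, temperature=0.1):
    """Sketch of a proxy-based contrastive loss in the spirit of PCR.

    features: (B, D) anchor embeddings from the current batch + replay buffer
    labels:   (B,)   integer class labels of the anchors
    proxies:  (C, D) one learnable proxy (classifier weight) per class
    """
    features = F.normalize(features, dim=1)
    proxies = F.normalize(proxies, dim=1)
    # Contrast each anchor against the proxies of classes seen in this batch,
    # rather than against other sample embeddings.
    batch_classes = labels.unique()                                  # (K,)
    logits = features @ proxies[batch_classes].t() / temperature     # (B, K)
    # Map each label to its column index within batch_classes.
    targets = (labels.unsqueeze(1) == batch_classes.unsqueeze(0)).float().argmax(dim=1)
    return F.cross_entropy(logits, targets)
```

Because the softmax denominator ranges over proxies of batch classes only, old and new classes compete on equal footing whenever replayed samples are mixed into the batch, which is the imbalance-mitigation intuition stated in the abstract.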

Huiwei Lin, Baoquan Zhang, Shanshan Feng, Xutao Li, Yunming Ye • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Continual Learning | CIFAR100 Split | -- | -- | 85 |
| Continual Learning | CIFAR100 Split 32x32 (test) | Accuracy | 30.1 | 66 |
| Continual Learning | MiniImageNet Split 84x84 (test) | Accuracy | 28.4 | 66 |
| Continual Learning | Split CIFAR10 32x32 (test) | Accuracy | 59.9 | 66 |
| Continual Learning | CIFAR-100 | Accuracy | 89.1 | 56 |
| Class-incremental learning | FGVC Aircraft | Accuracy Last | 9 | 21 |
| Online Continual Learning | Split-ImageNet-S | AFinal | 38.75 | 20 |
| Online Continual Learning | Split ImageNet-R | AFinal | 46.11 | 20 |
| Continual Learning | DTD | Average Performance (Aavg) | 35 | 18 |
| Continual Learning | CORe50 | -- | -- | 14 |

(10 of 12 rows shown)

Other info

Code
