
Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning

About

We introduce a novel Pseudo-Negative Regularization (PNR) framework for effective continual self-supervised learning (CSSL). PNR leverages pseudo-negatives, obtained through model-based augmentation, so that newly learned representations do not contradict those learned in the past. Specifically, for InfoNCE-based contrastive learning methods, we define symmetric pseudo-negatives obtained from the current and previous models and use them in both the main and regularization loss terms. Furthermore, we extend this idea to non-contrastive learning methods, which do not inherently rely on negatives. For these methods, a pseudo-negative is defined as the previous model's output for a differently augmented version of the anchor sample and is applied asymmetrically, only to the regularization term. Extensive experiments demonstrate that PNR achieves state-of-the-art representation learning in CSSL by effectively balancing the trade-off between plasticity and stability.

Sungmin Cha, Kyunghyun Cho, Taesup Moon • 2023
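
To make the two variants in the abstract concrete, below is a minimal PyTorch sketch of (a) an InfoNCE loss augmented with pseudo-negatives from a frozen previous model, and (b) an asymmetric pseudo-negative regularizer for a non-contrastive method. This is a sketch based only on the description above, not the authors' implementation: the function names (pnr_contrastive_loss, pnr_noncontrastive_reg), the temperature value, and the exact composition of the negative set are assumptions.

    import torch
    import torch.nn.functional as F

    def pnr_contrastive_loss(z_cur, z_pos, z_prev, temperature=0.1):
        # InfoNCE with pseudo-negatives (sketch).
        # z_cur:  current model's embeddings of the anchor views,   (B, D)
        # z_pos:  current model's embeddings of the positive views, (B, D)
        # z_prev: frozen previous model's embeddings of the batch,
        #         used as pseudo-negatives ("model-based augmentation").
        z_cur = F.normalize(z_cur, dim=1)
        z_pos = F.normalize(z_pos, dim=1)
        z_prev = F.normalize(z_prev, dim=1)

        B = z_cur.size(0)
        pos = torch.sum(z_cur * z_pos, dim=1, keepdim=True)   # (B, 1)

        # Negatives: other positives in the batch plus the previous
        # model's outputs, so new representations are also contrasted
        # against old ones.
        cand = torch.cat([z_pos, z_prev], dim=0)              # (2B, D)
        neg = z_cur @ cand.T                                  # (B, 2B)

        idx = torch.arange(B, device=z_cur.device)
        neg[idx, idx] = float('-inf')      # own positive already in `pos`
        neg[idx, idx + B] = float('-inf')  # old embedding of the same
                                           # anchor excluded (an assumption)

        logits = torch.cat([pos, neg], dim=1) / temperature
        labels = torch.zeros(B, dtype=torch.long, device=z_cur.device)
        return F.cross_entropy(logits, labels)

    def pnr_noncontrastive_reg(p_cur, z_prev_same, z_prev_diff,
                               temperature=0.1):
        # Asymmetric regularizer for non-contrastive methods (sketch):
        # pull the current prediction toward the previous model's output
        # for the same view, push it away from the previous model's output
        # for a *differently augmented* view (the pseudo-negative).
        p_cur = F.normalize(p_cur, dim=1)
        z_prev_same = F.normalize(z_prev_same, dim=1)
        z_prev_diff = F.normalize(z_prev_diff, dim=1)

        pos = torch.sum(p_cur * z_prev_same, dim=1, keepdim=True)
        neg = torch.sum(p_cur * z_prev_diff, dim=1, keepdim=True)
        logits = torch.cat([pos, neg], dim=1) / temperature
        labels = torch.zeros(p_cur.size(0), dtype=torch.long,
                             device=p_cur.device)
        return F.cross_entropy(logits, labels)

In both functions the previous model is assumed frozen, so only the current model receives gradients; casting the non-contrastive regularizer as a two-way softmax is a choice of this sketch, since the abstract does not specify the exact loss form.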

Related benchmarks

Task                        | Dataset                       | Metric                 | Result | Rank
----------------------------+-------------------------------+------------------------+--------+-----
Image Classification        | CIFAR-100 Class-IL (5T)       | Accuracy               | 63.19  | 32
Data-Incremental Learning   | ImageNet-100 (5T)             | Accuracy               | 76.67  | 20
Domain-incremental learning | DomainNet 6T                  | A_T                    | 53.8   | 20
Class-incremental learning  | ImageNet-100                  | Avg Inc Acc (General)  | 67.85  | 20
Class-incremental learning  | CIFAR-100 10T                 | Avg Accuracy (A_T)     | 59.29  | 20
Class-incremental learning  | ImageNet-100 (10T)            | Average Accuracy (A_T) | 60.75  | 20
Class-incremental learning  | ImageNet 1k (test)            | Avg Accuracy           | 66.12  | 17
Data-Incremental Learning   | ImageNet-100 (10T)            | A_T                    | 67.83  | 15
Class-incremental learning  | 10% Supervised Dataset (test) | Accuracy               | 61.74  | 6
Class-incremental learning  | 1% Supervised Dataset (test)  | Accuracy               | 46.48  | 6

Showing 10 of 16 rows.
