Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning
About
We introduce a novel Pseudo-Negative Regularization (PNR) framework for effective continual self-supervised learning (CSSL). PNR leverages pseudo-negatives obtained through model-based augmentation so that newly learned representations do not contradict those learned in the past. Specifically, for InfoNCE-based contrastive learning methods, we define symmetric pseudo-negatives obtained from the current and previous models and use them in both the main and regularization loss terms. Furthermore, we extend this idea to non-contrastive learning methods, which do not inherently rely on negatives. For these methods, a pseudo-negative is defined as the previous model's output for a differently augmented version of the anchor sample and is applied asymmetrically in the regularization term. Extensive experimental results demonstrate that our PNR framework achieves state-of-the-art representation learning performance in CSSL by effectively balancing the trade-off between plasticity and stability.
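To make the contrastive case concrete, here is a minimal sketch of an InfoNCE-style loss whose negative set is augmented with pseudo-negatives. This is an illustration of the general idea only, not the paper's exact loss: the function name, the numpy setting, and the use of a single anchor with a flat negative pool are all assumptions for readability. In PNR, the pseudo-negatives would come from the frozen previous-task model's embeddings.

```python
import numpy as np

def l2_normalize(x):
    # Project embeddings onto the unit sphere, as is standard for InfoNCE.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce_with_pseudo_negatives(z_anchor, z_pos, z_negs, z_pseudo_negs,
                                   temperature=0.1):
    """Illustrative InfoNCE loss with an augmented negative set.

    z_anchor, z_pos: (d,) L2-normalized embeddings of the anchor and its
        positive view. z_negs: (N, d) ordinary in-batch negatives.
    z_pseudo_negs: (M, d) pseudo-negatives, e.g. outputs of a frozen
        previous-task model (hypothetical stand-in here).
    """
    negatives = np.concatenate([z_negs, z_pseudo_negs], axis=0)
    # Logit 0 is the positive pair; the rest are (pseudo-)negative pairs.
    logits = np.concatenate([[z_anchor @ z_pos], negatives @ z_anchor])
    logits = logits / temperature
    # Numerically stable -log softmax of the positive logit.
    m = logits.max()
    log_sum_exp = m + np.log(np.exp(logits - m).sum())
    return log_sum_exp - logits[0]

# Toy usage with random embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
anchor = l2_normalize(rng.normal(size=8))
pos = l2_normalize(rng.normal(size=8))
negs = l2_normalize(rng.normal(size=(4, 8)))
pseudo = l2_normalize(rng.normal(size=(4, 8)))  # previous-model stand-ins
loss = info_nce_with_pseudo_negatives(anchor, pos, negs, pseudo)
```

Because the pseudo-negatives only enlarge the denominator of the softmax, adding them can never decrease this loss; they act as an extra repulsive term that, in PNR's setting, discourages the new representation from collapsing onto stale directions from the previous model.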
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 Class-IL (5T) | Accuracy | 63.19 | 32 |
| Data-Incremental Learning | ImageNet-100 (5T) | Accuracy | 76.67 | 20 |
| Domain-Incremental Learning | DomainNet (6T) | A_T | 53.8 | 20 |
| Class-Incremental Learning | ImageNet-100 | Avg. Inc. Acc. (General) | 67.85 | 20 |
| Class-Incremental Learning | CIFAR-100 (10T) | Avg. Accuracy (A_T) | 59.29 | 20 |
| Class-Incremental Learning | ImageNet-100 (10T) | Avg. Accuracy (A_T) | 60.75 | 20 |
| Class-Incremental Learning | ImageNet-1k (test) | Avg. Accuracy | 66.12 | 17 |
| Data-Incremental Learning | ImageNet-100 (10T) | A_T | 67.83 | 15 |
| Class-Incremental Learning | 10% Supervised Dataset (test) | Accuracy | 61.74 | 6 |
| Class-Incremental Learning | 1% Supervised Dataset (test) | Accuracy | 46.48 | 6 |