RanPAC: Random Projections and Pre-trained Models for Continual Learning
About
Continual learning (CL) aims to incrementally learn different tasks (such as classification) in a non-stationary data stream without forgetting old ones. Most CL works focus on tackling catastrophic forgetting under a learning-from-scratch paradigm. However, with the increasing prominence of foundation models, pre-trained models equipped with informative representations have become available for various downstream requirements. Several CL methods based on pre-trained models have been explored, either utilizing pre-extracted features directly (which makes bridging distribution gaps challenging) or incorporating adaptors (which may be subject to forgetting). In this paper, we propose a concise and effective approach for CL with pre-trained models. Given that forgetting occurs during parameter updating, we contemplate an alternative approach that exploits training-free random projectors and class-prototype accumulation, thereby bypassing the issue entirely. Specifically, we inject a frozen Random Projection layer with nonlinear activation between the pre-trained model's feature representations and output head, which captures interactions between features with expanded dimensionality, providing enhanced linear separability for class-prototype-based CL. We also demonstrate the importance of decorrelating the class-prototypes to reduce the distribution disparity when using pre-trained representations. These techniques prove to be effective and circumvent the problem of forgetting for both class- and domain-incremental continual learning. Compared to previous methods applied to pre-trained ViT-B/16 models, we reduce final error rates by between 20% and 62% on seven class-incremental benchmarks, despite not using any rehearsal memory. We conclude that the full potential of pre-trained models for simple, effective, and fast CL has not hitherto been fully tapped. Code is available at github.com/RanPAC/RanPAC.
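The recipe described above lends itself to a short sketch. Below is a minimal NumPy illustration, under stated assumptions: features are assumed to come from a frozen pre-trained backbone, the class name `RandomProjectionCL` and parameters `proj_dim` and `lam` are illustrative rather than the authors' API, and the class-prototype decorrelation is implemented here as ridge-regularized least squares over accumulated second-order statistics. This is a sketch of the idea, not the repository's implementation.

```python
import numpy as np

class RandomProjectionCL:
    """Sketch of a random-projection continual learner over frozen
    pre-trained features (illustrative, not the authors' API)."""

    def __init__(self, feat_dim, num_classes, proj_dim=10000, lam=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen random projection: never trained, so it cannot be forgotten.
        self.W = rng.standard_normal((feat_dim, proj_dim))
        # Accumulated second-order statistics of projected features.
        self.G = np.zeros((proj_dim, proj_dim))
        # Accumulated class-prototype sums (one column per class).
        self.C = np.zeros((proj_dim, num_classes))
        self.lam = lam  # ridge regularization strength (assumed hyperparameter)

    def _project(self, feats):
        # Nonlinear activation after the random projection captures
        # feature interactions in the expanded dimensionality.
        return np.maximum(feats @ self.W, 0.0)

    def update(self, feats, labels):
        """Accumulate one task's statistics; no gradient updates occur."""
        H = self._project(feats)                    # (n, proj_dim)
        Y = np.eye(self.C.shape[1])[labels]         # one-hot labels (n, num_classes)
        self.G += H.T @ H
        self.C += H.T @ Y

    def head(self):
        # Decorrelate the prototypes: solve (G + lam*I) W_out = C,
        # rather than classifying with raw class-mean prototypes.
        eye = np.eye(self.G.shape[0])
        return np.linalg.solve(self.G + self.lam * eye, self.C)

    def predict(self, feats):
        return np.argmax(self._project(feats) @ self.head(), axis=1)
```

Because `G` and `C` are plain sums over all data seen so far, per-task updates are order-independent and involve no parameter updating of the backbone or projection, which is how this construction sidesteps forgetting by design.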
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Class-incremental learning | CIFAR-100 | Averaged Incremental Accuracy | 94 | 234 |
| Class-incremental learning | ImageNet-R | Average Accuracy | 82.98 | 103 |
| Class-incremental learning | ImageNet-A | Average Accuracy | 69.32 | 86 |
| Continual Learning | CIFAR100 Split | Average Per-Task Accuracy | 92.2 | 85 |
| Audio Classification | ESC-50 (test) | Accuracy | 92.5 | 84 |
| Class-incremental learning | CIFAR-100 10 (test) | Average Top-1 Accuracy | 92.2 | 75 |
| Image Classification | CIFAR-100 Split | Accuracy | 92.2 | 61 |
| Class-incremental learning | CIFAR-100 | Average Accuracy | 92.4 | 60 |
| Class-incremental learning | CUB | Avg Accuracy | 90.6 | 45 |
| Class-incremental learning | ImageNet-R 10-task | FAA | 77.9 | 44 |