CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models
About
Continual learning (CL) aims to help deep neural networks learn new knowledge while retaining what has been learned. Owing to their powerful generalizability, pre-trained vision-language models such as Contrastive Language-Image Pre-training (CLIP) have lately gained traction as practical CL candidates. However, the domain mismatch between the pre-training data and the downstream CL tasks often calls for finetuning CLIP on the latter. Most existing finetuning methods are deterministic in nature. This makes them overlook the many possible interactions across the input modalities and renders them unsafe for high-risk tasks requiring reliable uncertainty estimation. To address these shortcomings, our work proposes Continual LeArning with Probabilistic finetuning (CLAP) - a probabilistic modeling framework over visual-guided text features per task, thus providing more calibrated CL finetuning. Unlike recent data-hungry anti-forgetting CL techniques, CLAP alleviates forgetting by exploiting the rich pre-trained knowledge of CLIP for weight initialization and distribution regularization of task-specific parameters. Compatible with a diverse range of existing prompting methods, CLAP can surpass the predominant deterministic finetuning approaches for CL with CLIP. We conclude with out-of-the-box applications of CLAP's superior uncertainty estimation abilities, including novel data detection and exemplar selection within the existing CL setups. Our code is available at \url{https://github.com/srvCodes/clap4clip}.
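The core idea of probabilistic finetuning can be illustrated with a minimal sketch: instead of producing a single deterministic text feature per class, a small adapter maps it to a Gaussian and draws samples via the reparameterization trick, so the spread of the resulting image-text similarities reflects predictive uncertainty. The sketch below uses numpy and illustrative names (`probabilistic_adapter`, `W_mu`, `W_logvar`); it is an assumption-laden toy, not the CLAP4CLIP implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def probabilistic_adapter(text_feat, W_mu, W_logvar, n_samples=4):
    """Hypothetical sketch: map one text feature to a Gaussian over
    features and sample via the reparameterization trick."""
    mu = text_feat @ W_mu                      # mean of the distribution
    std = np.exp(0.5 * (text_feat @ W_logvar)) # std from predicted log-variance
    eps = rng.standard_normal((n_samples, mu.shape[-1]))
    return mu + eps * std                      # (n_samples, d) stochastic text features

def mean_similarity(image_feat, sampled_text_feats):
    """Average cosine similarity over samples; the per-sample spread
    could serve as an uncertainty estimate."""
    img = image_feat / np.linalg.norm(image_feat)
    txt = sampled_text_feats / np.linalg.norm(sampled_text_feats, axis=-1, keepdims=True)
    return float((txt @ img).mean())

d = 8  # toy feature dimension (CLIP uses 512/768)
text_feat = rng.standard_normal(d)
W_mu = rng.standard_normal((d, d)) * 0.1
W_logvar = rng.standard_normal((d, d)) * 0.01 - 2.0  # small initial variance
samples = probabilistic_adapter(text_feat, W_mu, W_logvar)
score = mean_similarity(rng.standard_normal(d), samples)
print(samples.shape, round(score, 4))
```

In training, such per-task adapter weights would be the "task-specific parameters" that the paper regularizes toward CLIP's pre-trained distribution to mitigate forgetting.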
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Class-incremental learning | CIFAR-100 | Averaged Incremental Accuracy | 86.13 | 234 |
| Class-incremental learning | ImageNet-R | Average Accuracy | 86.35 | 103 |
| Class-incremental learning | ImageNet-100 | Avg Acc | 87.76 | 74 |
| Continual Learning | CIFAR-100 | -- | -- | 56 |
| Class-incremental learning | CUB200 | Last Accuracy | 81.95 | 39 |
| Class-incremental learning | VTAB | Avg Accuracy | 92.51 | 31 |
| Continual Learning | ImageNet-100 (test) | Task 10 Accuracy | 84.14 | 17 |
| Cross-dataset Continual Learning | CIFAR100-I2C Transfer from ImageNet100 (test) | -- | -- | 10 |
| Continual Learning | CUB | Backward Transfer (BwT) | -0.037 | 8 |
| Continual Learning | VTAB | Backward Transfer (BwT) | 0.041 | 8 |