
CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models

About

Continual learning (CL) aims to help deep neural networks learn new knowledge while retaining what has already been learned. Owing to their strong generalization ability, pre-trained vision-language models such as Contrastive Language-Image Pre-training (CLIP) have lately gained traction as practical CL candidates. However, the domain mismatch between the pre-training data and the downstream CL tasks often calls for finetuning CLIP on the latter. Most existing finetuning methods are deterministic in nature: they overlook the many possible interactions across the input modalities and are ill-suited to high-risk tasks requiring reliable uncertainty estimation. To address these issues, our work proposes Continual LeArning with Probabilistic finetuning (CLAP), a probabilistic modeling framework over visual-guided text features per task, thus providing more calibrated finetuning for CL. Unlike recent data-hungry anti-forgetting CL techniques, CLAP alleviates forgetting by exploiting the rich pre-trained knowledge of CLIP for weight initialization and distribution regularization of task-specific parameters. Cooperating with a diverse range of existing prompting methods, CLAP can surpass the predominant deterministic finetuning approaches for CL with CLIP. We conclude with out-of-the-box applications of CLAP's superior uncertainty estimation abilities, including novel-data detection and exemplar selection within existing CL setups. Our code is available at https://github.com/srvCodes/clap4clip.
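The core idea above, modeling a distribution over text features rather than a single deterministic vector, can be sketched in a few lines. The snippet below is a minimal illustrative sketch, not the paper's actual implementation: the function names (`variational_adapter`, `predict`), the linear mean/log-variance heads, and the weight initializations are all assumptions made for illustration. It shows a diagonal-Gaussian posterior over per-class text features, Monte Carlo sampling via the reparameterization trick, and how the spread of the sampled logits yields an uncertainty estimate alongside the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def variational_adapter(text_feats, W_mu, W_logvar, n_samples=4):
    """Map deterministic text features to a diagonal-Gaussian posterior and
    draw Monte Carlo samples via the reparameterization trick (illustrative)."""
    mu = text_feats @ W_mu            # per-class feature means
    logvar = text_feats @ W_logvar    # per-class log-variances
    std = np.exp(0.5 * logvar)
    eps = rng.standard_normal((n_samples,) + mu.shape)
    return mu + eps * std             # shape: (n_samples, n_classes, d)

def predict(image_feat, text_samples):
    """Average cosine-similarity logits over posterior samples; the spread
    across samples serves as a per-class uncertainty estimate."""
    def l2norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    logits = l2norm(text_samples) @ l2norm(image_feat)  # (n_samples, n_classes)
    return logits.mean(axis=0), logits.std(axis=0)

d, n_classes = 16, 5
text_feats = rng.standard_normal((n_classes, d))
W_mu = np.eye(d)                               # identity init: mean starts at the CLIP features
W_logvar = 0.01 * rng.standard_normal((d, d))  # small random log-variance head (assumed)
samples = variational_adapter(text_feats, W_mu, W_logvar)
mean_logits, uncertainty = predict(rng.standard_normal(d), samples)
```

Initializing the mean head at identity mirrors the paper's theme of leaning on CLIP's pre-trained knowledge for weight initialization: before training, the posterior mean reproduces the original text features, so the probabilistic head starts from, rather than away from, the pre-trained model.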

Saurav Jha, Dong Gong, Lina Yao • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Class-incremental learning | CIFAR-100 | Averaged Incremental Accuracy | 86.13 | 234 |
| Class-incremental learning | ImageNet-R | Average Accuracy | 86.35 | 103 |
| Class-incremental learning | ImageNet-100 | Average Accuracy | 87.76 | 74 |
| Continual learning | CIFAR-100 | -- | -- | 56 |
| Class-incremental learning | CUB200 | Last Accuracy | 81.95 | 39 |
| Class-incremental learning | VTAB | Average Accuracy | 92.51 | 31 |
| Continual learning | ImageNet-100 (test) | Task 10 Accuracy | 84.14 | 17 |
| Cross-dataset continual learning | CIFAR100-I2C Transfer from ImageNet100 (test) | -- | -- | 10 |
| Continual learning | CUB | Backward Transfer (BwT) | -0.037 | 8 |
| Continual learning | VTAB | Backward Transfer (BwT) | 0.041 | 8 |

Showing 10 of 18 rows.

Other info

Code

https://github.com/srvCodes/clap4clip