
Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models

About

Continual learning (CL) can help pre-trained vision-language models efficiently adapt to new or under-trained data distributions without re-training. Nevertheless, during continual training of the Contrastive Language-Image Pre-training (CLIP) model, we observe that the model's zero-shot transfer ability significantly degrades due to catastrophic forgetting. Existing CL methods can mitigate forgetting by replaying previous data. However, since the CLIP dataset is private, replay methods cannot access the pre-training data. In addition, replaying data from previously learned downstream tasks can improve their performance, but at the cost of sacrificing zero-shot performance. To address this challenge, we propose ZSCL, a novel method that prevents zero-shot transfer degradation in the continual learning of vision-language models in both the feature and parameter spaces. In the feature space, a reference dataset is introduced for distillation between the current and initial models. The reference dataset needs only semantic diversity; it need not be labeled, seen during pre-training, or composed of matched image-text pairs. In the parameter space, we prevent large parameter shifts by averaging weights during training. We also propose a more challenging Multi-domain Task Incremental Learning (MTIL) benchmark to evaluate different methods, where tasks come from various domains rather than class-separated subsets of a single dataset. Our method outperforms other methods in the traditional class-incremental learning setting and on the MTIL benchmark, where it improves the average score by 9.7%. Our code is available at https://github.com/Thunderbeee/ZSCL.
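The two ingredients of the method described above, feature-space distillation toward the frozen initial model on an unlabeled reference set and weight averaging in parameter space, can be illustrated with a minimal sketch. This is not the authors' implementation: the MSE distillation loss and the incremental averaging rule below are simplified stand-ins for the paper's actual losses, and all function names are hypothetical.

```python
def distillation_loss(current_feats, initial_feats):
    """Mean squared distance between features from the current model and the
    frozen initial (pre-trained) model on reference images.

    The paper distills in feature space; this simple MSE form is only an
    illustrative placeholder for the actual distillation objective.
    """
    assert len(current_feats) == len(initial_feats)
    total = 0.0
    for cur, init in zip(current_feats, initial_feats):
        total += sum((c - i) ** 2 for c, i in zip(cur, init)) / len(cur)
    return total / len(current_feats)


def running_weight_average(avg_params, new_params, step):
    """Incrementally average model weights over training steps, keeping the
    deployed parameters close to the pre-trained solution.

    Per parameter tensor: avg_t = (avg_{t-1} * step + w_t) / (step + 1).
    """
    return {
        name: [(a * step + w) / (step + 1) for a, w in zip(avg, new_params[name])]
        for name, avg in avg_params.items()
    }
```

The averaging rule keeps a running mean of the weights visited during fine-tuning, so the final model interpolates between the zero-shot initialization and the task-adapted endpoint rather than drifting fully to the latter.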

Zangwei Zheng, Mingyuan Ma, Kai Wang, Ziheng Qin, Xiangyu Yue, Yang You • 2023

Related benchmarks

Task | Dataset | Result | Rank
Incremental Learning | CIFAR100 10 steps | Final Step Performance: 73.65 | 39
Incremental Learning | CIFAR100 50 steps | Last Accuracy: 67.36 | 36
Class-incremental learning | CIFAR100 20 steps (test) | Last Accuracy: 69.58 | 21
Class-incremental learning | TinyImageNet 5 steps 100 base classes (test) | Avg Score: 80.27 | 13
Class-incremental learning | TinyImageNet 10 steps 100 base classes (test) | Avg Accuracy: 78.61 | 13
Class-incremental learning | TinyImageNet 20 steps 100 base classes (test) | Average Accuracy: 77.18 | 13
Continual Learning | HieraMedTransfer Order I | Transfer Performance: 57.7 | 13
Continual Learning | MedXtreme (Order I) | ACC: 53.7 | 13
Continual Learning | MedXtreme (Order II) | Accuracy: 48.3 | 13
Continual Learning | HieraMedTransfer Order II | Transfer Score: 45.2 | 13

(Showing 10 of 16 rows)
