
Don't Stop Learning: Towards Continual Learning for the CLIP Model

About

The Contrastive Language-Image Pre-training (CLIP) model is a recently proposed large-scale pre-trained model that has attracted increasing attention in the computer vision community. Benefiting from its gigantic image-text training set, the CLIP model has learned outstanding capabilities in zero-shot learning and image-text matching. To boost CLIP's recognition performance on particular target visual concepts, it is often desirable to further update the model by fine-tuning it on extra training data for the classes of interest. This operation, however, raises an important concern: will the update hurt CLIP's zero-shot learning or image-text matching capability, i.e., cause catastrophic forgetting? If so, can existing continual learning algorithms be adapted to alleviate this risk? To answer these questions, this work conducts a systematic study of the continual learning problem for the CLIP model. We construct evaluation protocols to measure the impact of fine-tuning updates and explore different ways of upgrading existing continual learning methods to mitigate CLIP's forgetting. Our study reveals the particular challenges of the CLIP continual learning problem and lays a foundation for further research. Moreover, we propose a new algorithm, dubbed Learning without Forgetting via Replayed Vocabulary (VR-LwF), which proves effective in alleviating the forgetting issue of the CLIP model.
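To make the idea concrete, here is a minimal sketch of a distillation loss in the spirit of Learning without Forgetting over a replayed vocabulary, based only on the abstract's description: the fine-tuned (student) CLIP is encouraged to keep the frozen (teacher) CLIP's image-to-text similarity distribution over a set of replayed vocabulary embeddings. All function names and the exact loss form are our illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def _normalize(x):
    # L2-normalize feature rows, as CLIP does before computing similarities
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def _softmax(logits):
    # numerically stable row-wise softmax
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def vr_lwf_distill_loss(img_student, img_teacher,
                        vocab_student, vocab_teacher, tau=0.07):
    """Hypothetical LwF-style distillation loss over a replayed vocabulary.

    img_*:   (batch, dim) image embeddings from the student / teacher encoders
    vocab_*: (vocab, dim) text embeddings of the replayed vocabulary words
    tau:     softmax temperature (0.07 is the usual CLIP value)
    """
    # image-to-vocabulary similarity distributions for both models
    s = _softmax(_normalize(img_student) @ _normalize(vocab_student).T / tau)
    t = _softmax(_normalize(img_teacher) @ _normalize(vocab_teacher).T / tau)
    eps = 1e-12
    # KL(teacher || student), averaged over the batch; zero iff the
    # student reproduces the teacher's distribution exactly
    return float(np.mean(np.sum(t * (np.log(t + eps) - np.log(s + eps)), axis=-1)))
```

In training, this term would be added to the task loss on the new classes, so the model learns the classes of interest while being penalized for drifting away from the original CLIP's behavior on the replayed vocabulary.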

Yuxuan Ding, Lingqiao Liu, Chunna Tian, Jingyuan Yang, Haoxuan Ding • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Incremental Learning | CIFAR100 10 steps | Final Step Performance | 70.75 | 39 |
| Incremental Learning | CIFAR100 50 steps | Last Accuracy | 59.45 | 36 |
| Class-incremental learning | CIFAR100 20 steps (test) | Last Accuracy | 63.54 | 21 |
| Class-incremental learning | TinyImageNet 5 steps 100 base classes (test) | Average Score | 77.56 | 13 |
| Class-incremental learning | TinyImageNet 10 steps 100 base classes (test) | Average Accuracy | 74.12 | 13 |
| Class-incremental learning | TinyImageNet 20 steps 100 base classes (test) | Average Accuracy | 69.94 | 13 |
| Text-to-Video Retrieval | MSRVTT-10 | Recall@1 | 24.49 | 12 |
| Text-to-Video Retrieval | ACTNET-10 | Recall@1 | 18.08 | 12 |
| Text-to-Video Retrieval | ACTNET-20 | Recall@1 | 17.21 | 12 |
| Text-to-Video Retrieval | MSRVTT-20 | Recall@1 | 22.39 | 12 |

(10 of 12 benchmark rows shown)
