Don't Stop Learning: Towards Continual Learning for the CLIP Model
About
The Contrastive Language-Image Pre-training (CLIP) model is a recently proposed large-scale pre-trained model which has attracted increasing attention in the computer vision community. Benefiting from its gigantic image-text training set, the CLIP model has learned outstanding capabilities in zero-shot learning and image-text matching. To boost the recognition performance of CLIP on certain target visual concepts, it is often desirable to further update the model by fine-tuning it on classes of interest with extra training data. This operation, however, raises an important concern: will the update hurt the zero-shot learning or image-text matching capability of CLIP, i.e., cause catastrophic forgetting? If so, could existing continual learning algorithms be adapted to alleviate that risk? To answer these questions, this work conducts a systematic study of the continual learning issue of the CLIP model. We construct evaluation protocols to measure the impact of fine-tuning updates and explore different ways to upgrade existing continual learning methods to mitigate the forgetting issue of the CLIP model. Our study reveals the particular challenges of the CLIP continual learning problem and lays a foundation for further research. Moreover, we propose a new algorithm, dubbed Learning without Forgetting via Replayed Vocabulary (VR-LwF), which proves effective in alleviating the forgetting issue of the CLIP model.
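The core idea behind a Learning-without-Forgetting-style approach with a replayed vocabulary can be sketched as follows. This is a minimal illustration, not the authors' implementation: all function names, shapes, and the temperature parameter are assumptions. The sketch penalises the divergence between the image-to-vocabulary matching distributions produced by the frozen pre-trained CLIP (teacher) and the model being fine-tuned (student), computed over a set of replayed vocabulary words, so that fine-tuning preserves CLIP's zero-shot matching behaviour.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over image-text similarity logits.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def vr_lwf_distill_loss(img_feat, vocab_feat_teacher, vocab_feat_student, tau=2.0):
    """Hypothetical distillation loss: mean KL(teacher || student) over
    matching distributions against a replayed vocabulary.

    img_feat:           (B, D) L2-normalised image embeddings
    vocab_feat_teacher: (V, D) replayed-word text embeddings, frozen CLIP
    vocab_feat_student: (V, D) replayed-word text embeddings, current model
    """
    logits_t = img_feat @ vocab_feat_teacher.T / tau  # (B, V) teacher logits
    logits_s = img_feat @ vocab_feat_student.T / tau  # (B, V) student logits
    p_t = softmax(logits_t)
    log_p_s = np.log(softmax(logits_s) + 1e-12)
    # KL divergence per image, averaged over the batch.
    kl = (p_t * (np.log(p_t + 1e-12) - log_p_s)).sum(axis=1)
    return float(kl.mean())
```

In practice this term would be added to the ordinary fine-tuning objective; when the student's text embeddings for the replayed words match the teacher's, the loss is zero and no constraint is imposed.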
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Incremental Learning | CIFAR100 10 steps | Final Step Performance | 70.75 | 39 |
| Incremental Learning | CIFAR100 50 steps | Last Accuracy | 59.45 | 36 |
| Class-incremental learning | CIFAR100 20 steps (test) | Last Accuracy | 63.54 | 21 |
| Class-incremental learning | TinyImageNet 5 steps 100 base classes (test) | Avg Score | 77.56 | 13 |
| Class-incremental learning | TinyImageNet 10 steps 100 base classes (test) | Avg Accuracy | 74.12 | 13 |
| Class-incremental learning | TinyImageNet 20 steps 100 base classes (test) | Average Accuracy | 69.94 | 13 |
| Text-to-Video Retrieval | MSRVTT-10 | Recall@1 | 24.49 | 12 |
| Text-to-Video Retrieval | ACTNET-10 | R@1 | 18.08 | 12 |
| Text-to-Video Retrieval | ACTNET-20 | R@1 | 17.21 | 12 |
| Text-to-Video Retrieval | MSRVTT-20 | R@1 | 22.39 | 12 |