
Mind the Gap: Preserving and Compensating for the Modality Gap in CLIP-Based Continual Learning

About

Continual learning aims to enable models to learn sequentially from continuously incoming data while retaining performance on previously learned tasks. Since the Contrastive Language-Image Pre-trained model (CLIP) exhibits strong capabilities across a range of downstream tasks, there has been growing interest in leveraging CLIP for continual learning. However, most existing works overlook the inherent modality gap in CLIP, a key factor in its generalization and adaptability. In this paper, we analyze how the modality gap varies during the fine-tuning of vision-language pre-trained models. Our observations reveal that the modality gap effectively reflects the extent to which pre-trained knowledge is preserved. Based on these insights, we propose a simple yet effective method, MG-CLIP, that improves CLIP's performance in class-incremental learning. Our approach leverages modality gap preservation to mitigate forgetting and modality gap compensation to enhance the capacity for new data, introducing a novel modality-gap-based perspective on continual learning. Extensive experiments on multiple benchmarks demonstrate that our method outperforms existing approaches without requiring additional replay data. Our code is available at https://github.com/linlany/MindtheGap.
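The modality gap referenced in the abstract is commonly measured as the offset between the centroids of L2-normalized image and text embeddings on the unit hypersphere. The minimal, dependency-free sketch below illustrates that measurement on toy vectors; it is an assumption-labeled illustration of the general concept, not MG-CLIP's exact definition, and the toy embeddings stand in for real CLIP features.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length, as CLIP does before computing similarities."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def mean_vec(vecs):
    """Component-wise mean of a list of equal-length vectors (the centroid)."""
    k = len(vecs)
    return [sum(v[i] for v in vecs) / k for i in range(len(vecs[0]))]

def modality_gap(image_embs, text_embs):
    """Gap vector: image centroid minus text centroid, each modality
    L2-normalized first. A common formulation, assumed here for illustration."""
    img_centroid = mean_vec([l2_normalize(v) for v in image_embs])
    txt_centroid = mean_vec([l2_normalize(v) for v in text_embs])
    return [a - b for a, b in zip(img_centroid, txt_centroid)]

# Toy 2-D "embeddings" standing in for real CLIP features.
image_embs = [[1.0, 0.0], [0.0, 1.0]]
text_embs = [[-1.0, 0.0], [0.0, -1.0]]
gap = modality_gap(image_embs, text_embs)
print(gap)                                            # [1.0, 1.0]
print(math.sqrt(sum(g * g for g in gap)))             # gap magnitude ≈ 1.414
```

Tracking how this gap vector drifts during fine-tuning is the kind of signal the paper uses to diagnose how much pre-trained knowledge survives.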

Linlan Huang, Xusheng Cao, Haori Lu, Yifan Meng, Fei Yang, Xialei Liu • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | Food101 | Accuracy: 85.7 | 457 |
| Class-incremental learning | CUB200 10 Tasks | FN (Final Acc): 72 | 59 |
| Class-incremental learning | ImageNet-R 10-task | -- | 54 |
| Image Classification | ImageNet 1k (full) | Top-1 Acc: 67.3 | 53 |
| Class-incremental learning | CIFAR100 10 Tasks | Accuracy: 79.4 | 36 |
| Multi-label class-incremental learning | PASCAL VOC B0-C4 | Last mAP: 85.4 | 33 |
| Multi-label class-incremental learning | PASCAL VOC B10-C2 | Last mAP: 86.4 | 31 |
| Class-incremental learning | CIFAR100 rho=0.1 (test) | Alast Accuracy: 75.6 | 28 |
| Class-incremental learning | CIFAR100 rho=0.01 (test) | Alast: 62.9 | 28 |
| Multi-label class-incremental learning | MS-COCO 2014 (B0-C10) | Avg. mAP: 77.8 | 28 |

Showing 10 of 26 rows.
