
AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning

About

Continual learning aims to enable a model to incrementally learn knowledge from sequentially arriving data. Previous works adopt the conventional classification architecture, which consists of a feature extractor and a classifier. The feature extractor is shared across sequentially arriving tasks or classes, but the classifier must be incrementally expanded with a new group of weights for each new class. Consequently, the parameters of a continual learner gradually increase. Moreover, as the classifier covers all historical classes, a rehearsal memory of a certain size is usually required to store replay data that mitigates classifier bias and catastrophic forgetting. In this paper, we propose a non-incremental learner, named AttriCLIP, to incrementally extract knowledge of new classes or tasks. Specifically, AttriCLIP is built upon the pre-trained visual-language model CLIP. Its image encoder and text encoder are fixed to extract features from both images and text. The text consists of a category name and a fixed number of learnable parameters, which are selected from our designed attribute word bank and serve as attributes. As we compute the visual-textual similarity for classification, AttriCLIP is a non-incremental learner. The attribute prompts, which encode common knowledge useful for classification, can effectively mitigate catastrophic forgetting and avoid constructing a replay memory. We evaluate AttriCLIP and compare it with CLIP-based and previous state-of-the-art continual learning methods in realistic settings with domain shift and long-sequence learning. The results show that our method performs favorably against previous state-of-the-art methods. The implementation code is available at https://github.com/bhrqw/AttriCLIP.
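The abstract describes a pipeline: frozen CLIP encoders, an attribute word bank from which a fixed number of learnable prompts are selected per image, and classification by visual-textual similarity. Below is a minimal, illustrative sketch of that selection-and-classification flow using toy NumPy vectors. All names (`select_attributes`, `classify`), dimensions, and the stand-in "text feature" (class embedding plus mean of selected prompts, instead of a real CLIP text encoder) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8   # toy feature dimension
M = 10  # attribute word bank size
K = 3   # number of attribute prompts selected per image

# Attribute word bank: each entry is a (key, prompt) pair.
# Keys are matched against image features; prompts condition the text side.
keys = rng.normal(size=(M, D))
prompts = rng.normal(size=(M, D))

def l2norm(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def select_attributes(image_feat, keys, k=K):
    """Pick the k bank entries whose keys are most similar to the image feature."""
    sims = l2norm(keys) @ l2norm(image_feat)
    return np.argsort(-sims)[:k]

def classify(image_feat, class_embeds, keys, prompts, k=K):
    """Cosine-similarity classification with attribute-conditioned text features.

    The 'text feature' here is a stand-in: class-name embedding plus the mean
    of the selected prompts (a real model would encode them with CLIP's
    frozen text encoder)."""
    idx = select_attributes(image_feat, keys, k)
    attr = prompts[idx].mean(axis=0)
    text_feats = l2norm(class_embeds + attr)   # one text feature per class
    logits = text_feats @ l2norm(image_feat)   # visual-textual similarity
    return int(np.argmax(logits)), idx

class_embeds = rng.normal(size=(5, D))  # 5 toy classes
image_feat = rng.normal(size=D)
pred, selected = classify(image_feat, class_embeds, keys, prompts)
print(pred, sorted(selected.tolist()))
```

Because classification is done purely by similarity against text features, adding a new class only adds a new text embedding; no classifier weights grow, which is the "non-incremental" property the abstract emphasizes.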

Runqi Wang, Xiaoyue Duan, Guoliang Kang, Jianzhuang Liu, Shaohui Lin, Songcen Xu, Jinhu Lv, Baochang Zhang • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Class-incremental learning | CIFAR-100 | Averaged Incremental Accuracy: 79.31 | 234 |
| Class-incremental learning | ImageNet-R | Average Accuracy: 83.09 | 103 |
| Class-incremental learning | ImageNet-100 | Avg Acc: 82.29 | 74 |
| Class-incremental learning | CIFAR-100 | Average Accuracy: 79.39 | 60 |
| Continual Learning | CIFAR-100 | -- | 56 |
| Class-incremental learning | ImageNet-R 10-task | -- | 44 |
| Image Classification | ImageNet100 (test) | Top-1 Acc: 83.3 | 41 |
| Class-incremental learning | CUB200 | Last Accuracy: 52.12 | 39 |
| Class-incremental learning | ImageNet-R 20-task | Average Accuracy: 81.28 | 33 |
| Class-incremental learning | VTAB | Avg Accuracy: 71.84 | 31 |

Showing 10 of 34 rows.

Other info

Code: https://github.com/bhrqw/AttriCLIP
