
A Unified Continual Learning Framework with General Parameter-Efficient Tuning

About

The "pre-training $\rightarrow$ downstream adaptation" paradigm presents both new opportunities and challenges for Continual Learning (CL). Although the recent state-of-the-art in CL is achieved through the Parameter-Efficient-Tuning (PET) adaptation paradigm, only prompting has been explored, limiting its application to Transformers. In this paper, we position prompting as one instantiation of PET, and propose a unified CL framework with general PET, dubbed Learning-Accumulation-Ensemble (LAE). PET, e.g., using Adapter, LoRA, or Prefix, can adapt a pre-trained model to downstream tasks with fewer parameters and resources. Given a PET method, our LAE framework incorporates it for CL with three novel designs. 1) Learning: the pre-trained model adapts to the new task by tuning an online PET module, along with our adaptation speed calibration to align different PET modules. 2) Accumulation: the task-specific knowledge learned by the online PET module is accumulated into an offline PET module through momentum update. 3) Ensemble: during inference, we respectively construct two experts with the online/offline PET modules (which are favored by novel/historical tasks) for prediction ensemble. We show that LAE is compatible with a battery of PET methods and gains strong CL capability. For example, LAE with Adapter PET surpasses the prior state-of-the-art by 1.3% and 3.6% in last-incremental accuracy on the CIFAR100 and ImageNet-R datasets, respectively. Code is available at \url{https://github.com/gqk/LAE}.

Qiankun Gao, Chen Zhao, Yifan Sun, Teng Xi, Gang Zhang, Bernard Ghanem, Jian Zhang • 2023
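The Accumulation and Ensemble steps from the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, the momentum update is written as a standard exponential moving average over the PET parameters, and the per-class max over the two experts' softmax scores is one plausible ensemble rule.

```python
import numpy as np

def accumulate(offline_pet, online_pet, momentum=0.9):
    """Momentum update: fold the online PET module's task-specific
    knowledge into the offline PET module via an EMA over parameters.
    Parameters are represented here as a dict of name -> ndarray."""
    return {name: momentum * offline_pet[name] + (1 - momentum) * online_pet[name]
            for name in offline_pet}

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble(logits_online, logits_offline):
    """Prediction ensemble of the two experts (online: favored by novel
    tasks; offline: favored by historical tasks). Here we take the
    per-class maximum of the two softmax score vectors."""
    return np.maximum(softmax(logits_online), softmax(logits_offline))
```

In a training loop, `accumulate` would run after each optimization step (or epoch) so the offline module drifts slowly toward the online one, while `ensemble` is only used at inference time.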

Related benchmarks

Task | Dataset | Metric | Result | Rank
Class-incremental learning | CIFAR-100 | -- | -- | 234
Class-incremental learning | ImageNet-R | -- | -- | 103
Class-incremental learning | CIFAR-100 10 (test) | Average Top-1 Accuracy | 89.84 | 75
Continual learning | CIFAR-100 | Accuracy | 79.1 | 56
Class-incremental learning | CUB200 | Last Accuracy | 80.52 | 39
Class-incremental learning | ImageNet-R 5-task | -- | -- | 27
Domain-incremental learning | CORe50 | Avg Accuracy (A) | 83.09 | 22
Class-incremental learning | CARS 196 | Last Accuracy | 55.2 | 22
Class-incremental learning | CUB-200, Cars-196, CIFAR-100, ImageNet-R | Last Accuracy | 74.95 | 22
Class-incremental learning | CORe50 | AVG Acc | 80.1 | 21

Showing 10 of 32 rows
