A Unified Continual Learning Framework with General Parameter-Efficient Tuning
About
The "pre-training $\rightarrow$ downstream adaptation" presents both new opportunities and challenges for Continual Learning (CL). Although the recent state-of-the-art in CL is achieved through Parameter-Efficient-Tuning (PET) adaptation paradigm, only prompt has been explored, limiting its application to Transformers only. In this paper, we position prompting as one instantiation of PET, and propose a unified CL framework with general PET, dubbed as Learning-Accumulation-Ensemble (LAE). PET, e.g., using Adapter, LoRA, or Prefix, can adapt a pre-trained model to downstream tasks with fewer parameters and resources. Given a PET method, our LAE framework incorporates it for CL with three novel designs. 1) Learning: the pre-trained model adapts to the new task by tuning an online PET module, along with our adaptation speed calibration to align different PET modules, 2) Accumulation: the task-specific knowledge learned by the online PET module is accumulated into an offline PET module through momentum update, 3) Ensemble: During inference, we respectively construct two experts with online/offline PET modules (which are favored by the novel/historical tasks) for prediction ensemble. We show that LAE is compatible with a battery of PET methods and gains strong CL capability. For example, LAE with Adaptor PET surpasses the prior state-of-the-art by 1.3% and 3.6% in last-incremental accuracy on CIFAR100 and ImageNet-R datasets, respectively. Code is available at \url{https://github.com/gqk/LAE}.
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Class-incremental learning | CIFAR-100 | -- | -- | 234 |
| Class-incremental learning | ImageNet-R | -- | -- | 103 |
| Class-incremental learning | CIFAR-100 10 (test) | Average Top-1 Accuracy | 89.84 | 75 |
| Continual Learning | CIFAR-100 | Accuracy | 79.1 | 56 |
| Class-incremental learning | CUB200 | Last Accuracy | 80.52 | 39 |
| Class-incremental learning | ImageNet-R 5-task | -- | -- | 27 |
| Domain-incremental learning | CORe50 | Avg Accuracy (A) | 83.09 | 22 |
| Class-incremental learning | CARS 196 | Last Accuracy | 55.2 | 22 |
| Class-incremental learning | CUB-200, Cars-196, CIFAR-100, ImageNet-R | Last Accuracy | 74.95 | 22 |
| Class-incremental learning | CORe50 | AVG Acc | 80.1 | 21 |