
HiDe-PET: Continual Learning via Hierarchical Decomposition of Parameter-Efficient Tuning

About

The deployment of pre-trained models (PTMs) has greatly advanced the field of continual learning (CL), enabling positive knowledge transfer and resilience to catastrophic forgetting. To sustain these advantages for sequentially arriving tasks, a promising direction involves keeping the pre-trained backbone frozen while employing parameter-efficient tuning (PET) techniques to instruct representation learning. Despite the popularity of prompt-based PET for CL, its empirical design often leads to sub-optimal performance in our evaluation across different PTMs and target tasks. To this end, we propose a unified framework for CL with PTMs and PET that provides both theoretical and empirical advancements. We first perform an in-depth theoretical analysis of the CL objective in a pre-training context, decomposing it into hierarchical components, namely within-task prediction, task-identity inference, and task-adaptive prediction. We then present Hierarchical Decomposition PET (HiDe-PET), an innovative approach that explicitly optimizes the decomposed objective by incorporating task-specific and task-shared knowledge via mainstream PET techniques, along with efficient recovery of pre-trained representations. Leveraging this framework, we delve into the distinct impacts of implementation strategy, PET technique and PET architecture, as well as adaptive knowledge accumulation amidst pronounced distribution changes. Finally, across various CL scenarios, our approach demonstrates remarkably superior performance over a broad spectrum of recent strong baselines.
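The hierarchical decomposition described above can be illustrated at inference time: first infer which task an input belongs to (task-identity inference), then classify within that task (within-task prediction), which together yield a task-adaptive prediction. The following is a minimal sketch under assumed design choices, not the authors' implementation: task identity is inferred here via nearest task prototype in the frozen-backbone feature space, and each task is given a hypothetical linear head standing in for its PET module.

```python
import numpy as np

def infer(feature, task_prototypes, task_heads):
    """Hierarchical inference sketch.

    feature:         (d,) representation from the frozen pre-trained backbone.
    task_prototypes: (T, d) mean feature per task, used for task-identity
                     inference (a nearest-prototype heuristic assumed here).
    task_heads:      list of (C_t, d) per-task linear heads, standing in for
                     task-specific PET modules.
    Returns (inferred_task, predicted_class_within_task).
    """
    # Task-identity inference: pick the task whose prototype is closest.
    dists = np.linalg.norm(task_prototypes - feature, axis=1)
    t = int(np.argmin(dists))
    # Within-task prediction: apply only the inferred task's head.
    logits = task_heads[t] @ feature
    return t, int(np.argmax(logits))
```

In this sketch the two stages are explicit and separately replaceable: a stronger task-identity estimator or a different PET technique per task can be swapped in without changing the overall task-adaptive prediction pipeline.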

Liyuan Wang, Jingyi Xie, Xingxing Zhang, Hang Su, Jun Zhu • 2024

Related benchmarks

Task                       | Dataset                                  | Result                 | Rank
Class-incremental learning | CIFAR-100                                | --                     | 234
Class-incremental learning | ImageNet-R                               | --                     | 103
Audio Classification       | ESC-50 (test)                            | Accuracy: 88.75        | 84
Audio Classification       | US8K (test)                              | R@1 Accuracy: 0.7803   | 41
Class-incremental learning | CUB200                                   | Last Accuracy: 88.76   | 39
Audio Classification       | Speech Commands V2 (test)                | Accuracy: 33.71        | 35
Class-incremental learning | CUB-200, Cars-196, CIFAR-100, ImageNet-R | Last Accuracy: 82.24   | 22
Class-incremental learning | CARS 196                                 | Last Accuracy: 69.65   | 22
Audio Classification       | VocalSet (test)                          | Top-1 Accuracy: 49.67  | 18
Audio Classification       | TIMIT-2 (test)                           | Top-1 Accuracy: 47.3   | 18
