
Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need

About

Class-incremental learning (CIL) aims to adapt to emerging new classes without forgetting old ones. Traditional CIL models are trained from scratch to continually acquire knowledge as data evolves. Recently, pre-training has achieved substantial progress, making vast pre-trained models (PTMs) accessible for CIL. Contrary to traditional methods, PTMs possess generalizable embeddings, which can be easily transferred to CIL. In this work, we revisit CIL with PTMs and argue that the core factors in CIL are adaptivity for model updating and generalizability for knowledge transfer. 1) We first reveal that a frozen PTM can already provide generalizable embeddings for CIL. Surprisingly, a simple baseline (SimpleCIL), which continually sets the classifiers of the PTM to prototype features, can beat the state of the art even without training on the downstream task. 2) Due to the distribution gap between pre-trained and downstream datasets, the PTM can be further cultivated with adaptivity via model adaptation. We propose AdaPt and mERge (APER), which aggregates the embeddings of the PTM and the adapted model for classifier construction. APER is a general framework that can be orthogonally combined with any parameter-efficient tuning method, and it holds the advantages of the PTM's generalizability and the adapted model's adaptivity. 3) Additionally, considering that previous ImageNet-based benchmarks are unsuitable in the era of PTMs due to data overlap, we propose four new benchmarks for assessment, namely ImageNet-A, ObjectNet, OmniBenchmark, and VTAB. Extensive experiments validate the effectiveness of APER with a unified and concise framework. Code is available at https://github.com/zhoudw-zdw/RevisitingCIL
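The SimpleCIL baseline described above can be sketched in a few lines: extract embeddings with a frozen backbone, average each class's embeddings into a prototype, and classify new samples by cosine similarity to the prototypes. The sketch below uses synthetic vectors in place of real PTM features; the function names and toy data are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def build_prototype_classifier(features, labels, num_classes):
    """Average each class's (frozen-backbone) embeddings into a prototype,
    then L2-normalize so classification reduces to cosine similarity."""
    protos = np.stack([features[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def predict(prototypes, query_features):
    """Assign each query to the class whose prototype is most similar."""
    q = query_features / np.linalg.norm(query_features, axis=1, keepdims=True)
    return (q @ prototypes.T).argmax(axis=1)

# Toy demo with synthetic "embeddings": two well-separated classes.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0.0, 0.1, (20, 8)) + 1.0,
                        rng.normal(0.0, 0.1, (20, 8)) - 1.0])
labels = np.array([0] * 20 + [1] * 20)

protos = build_prototype_classifier(feats, labels, num_classes=2)
preds = predict(protos, feats)
print((preds == labels).mean())  # → 1.0 on this separable toy data
```

No gradient updates are needed: as new classes arrive, one simply appends their prototypes. In the APER framework, the feature vector fed to this step would be the concatenation of the frozen PTM's embedding and the adapted model's embedding.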

Da-Wei Zhou, Zi-Wen Cai, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu • 2023

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
| --- | --- | --- | --- | --- |
| Class-incremental learning | CIFAR-100 | Averaged Incremental Accuracy | 92.18 | 234 |
| Class-incremental learning | ImageNet-R | Average Accuracy | 75.82 | 103 |
| Class-incremental learning | ImageNet-A | Average Accuracy | 60.53 | 86 |
| Continual Learning | CIFAR100 Split | Average Per-Task Accuracy | 87.29 | 85 |
| Class-incremental learning | CIFAR-100 10 (test) | Average Top-1 Accuracy | 87.6 | 75 |
| Image Classification | CIFAR-100 Split | Accuracy | 87.29 | 61 |
| Class-incremental learning | CIFAR-100 | Average Accuracy | 88.5 | 60 |
| Class-incremental learning | CUB | Avg Accuracy | 87.3 | 45 |
| Class-incremental learning | ImageNet-R 10-task | FAA | 76.71 | 44 |
| Class-incremental learning | ObjectNet | Average Accuracy | 67.18 | 40 |

(Showing 10 of 53 rows.)
