
Pruned Adaptation Modules: A Simple yet Strong Baseline for Continual Foundation Models

About

The continual learning literature has rapidly shifted from traditional class incremental learning (CIL) techniques to foundation model (FM)-based CIL methods without a clear understanding of how these newer approaches compare to strong, lightweight convolutional baselines. This abrupt transition has created a substantial methodological gap, making it difficult to assess whether recent FM-based CIL progress reflects genuine advances or merely the absence of rigorous baselines. To address this gap, we introduce Pruned Adaptation Modules (PAM), a simple yet effective method that freezes the vast majority of the pre-trained ResNet while enabling scalable continual adaptation through sparse task-specific layers. PAM yields up to a ~5x reduction in trainable parameters and a ~6x reduction in total parameters, significantly reducing the cost of continual updates. Across diverse benchmarks, PAM consistently mitigates catastrophic forgetting and outperforms state-of-the-art FM-based CIL approaches. Our findings position PAM as a strong and transparent baseline that helps bridge the gap between traditional and FM-based CIL, guiding future research for a more accurate assessment of true progress in continual adaptation. The code can be found at: https://github.com/ElifCerenGokYildirim/PAM.
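The core idea described above — keep a frozen pre-trained backbone shared across tasks and train only a small, pruned (sparse) adaptation module plus a head per task — can be illustrated with a minimal numpy sketch. This is an illustrative assumption of how such a scheme could look, not the authors' implementation: the names (`make_pruned_adapter`, `adapt`) and the random fixed mask are hypothetical, and a real setup would use a pre-trained ResNet rather than a single linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pruned_adapter(dim, sparsity=0.8, rng=rng):
    """Dense adapter weights plus a fixed binary mask that keeps only
    a (1 - sparsity) fraction of entries effectively trainable."""
    w = rng.standard_normal((dim, dim)) * 0.01
    mask = (rng.random((dim, dim)) > sparsity).astype(w.dtype)
    return w, mask

def adapt(x, w, mask):
    # Residual adaptation on top of frozen backbone features.
    return x + x @ (w * mask).T

# Frozen "backbone" stand-in: shared across tasks, never updated.
W_backbone = rng.standard_normal((32, 32)) * 0.1

# One sparse adapter and one classifier head per task.
adapters = {t: make_pruned_adapter(32) for t in ("task0", "task1")}
heads = {t: rng.standard_normal((5, 32)) * 0.1 for t in ("task0", "task1")}

x = rng.standard_normal((4, 32))
feats = x @ W_backbone.T                 # shared frozen features
w, mask = adapters["task1"]
logits = adapt(feats, w, mask) @ heads["task1"].T
print(logits.shape)                      # (4, 5)

# Only the unmasked adapter entries count as trainable per task,
# which is where the reduction in trainable parameters comes from.
print(int(mask.sum()), "of", mask.size, "adapter weights active")
```

With `sparsity=0.8`, roughly 20% of each adapter's weights remain active, and the frozen backbone contributes no trainable parameters at all; forgetting is mitigated because earlier tasks' adapters and heads are never overwritten.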

Elif Ceren Gok Yildirim, Murat Onur Yildirim, Joaquin Vanschoren • 2026

Related benchmarks

Task                         Dataset                    Result                    Rank
Class-incremental learning   ImageNet-R B0 Inc20        Last Accuracy: 78.95      79
Class-incremental learning   CIFAR-100 B0_Inc5          Average Accuracy: 94.17   47
Class-incremental learning   CUB200 Inc10 (test)        Average Accuracy: 89.91   17
Class-incremental learning   Cars-196 B0 Inc10 (test)   Avg Accuracy: 83.1        11
