
Representation Finetuning for Continual Learning

About

The world is inherently dynamic, and continual learning aims to enable models to adapt to ever-evolving data streams. While pre-trained models have shown strong performance in continual learning, they still require finetuning to adapt effectively to downstream tasks. However, prevailing Parameter-Efficient Fine-Tuning (PEFT) methods operate through empirical, black-box optimization at the weight level. These approaches lack explicit control over representation drift, leading to sensitivity to domain shifts and catastrophic forgetting in continual learning scenarios. In this work, we introduce Continual Representation Learning (CoRe), a novel framework that for the first time shifts the finetuning paradigm from weight space to representation space. Unlike conventional methods, CoRe performs task-specific interventions within a low-rank linear subspace of hidden representations, optimizing explicit objectives that ensure stability on past tasks while maintaining plasticity for new ones. By constraining updates to a low-rank subspace, CoRe achieves exceptional parameter efficiency. Extensive experiments across multiple continual learning benchmarks demonstrate that CoRe not only preserves parameter efficiency but also significantly outperforms existing state-of-the-art methods. Our work introduces representation finetuning as a new, more effective, and interpretable paradigm for continual learning.
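To make the shift from weight space to representation space concrete, here is a minimal NumPy sketch of a generic low-rank representation intervention (in the spirit of representation-finetuning methods such as LoReFT). The class name, the parameters `R`, `W`, `b`, and the specific edit rule are illustrative assumptions, not CoRe's actual objective: the idea is only that hidden states are edited inside a rank-r subspace while weights stay frozen.

```python
import numpy as np

class LowRankIntervention:
    """Hypothetical sketch: edit hidden representations inside a
    rank-r linear subspace, leaving model weights untouched."""

    def __init__(self, hidden_dim: int, rank: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # R spans the low-rank subspace of the hidden space.
        self.R = rng.standard_normal((rank, hidden_dim)) * 0.01
        # W and b define the learned target inside that subspace.
        self.W = np.zeros((rank, hidden_dim))
        self.b = np.zeros(rank)

    def __call__(self, h: np.ndarray) -> np.ndarray:
        # Replace h's component in the subspace spanned by R with a
        # learned target (W h + b); the orthogonal part is untouched.
        delta = h @ self.W.T + self.b - h @ self.R.T
        return h + delta @ self.R

# Usage: hidden states of shape (batch, seq_len, hidden_dim).
h = np.random.default_rng(1).standard_normal((4, 16, 768))
layer = LowRankIntervention(hidden_dim=768, rank=8)
out = layer(h)
print(out.shape)                          # (4, 16, 768)
# Trainable parameters: rank * (2 * hidden_dim + 1) = 12296,
# a tiny fraction of full finetuning.
print(layer.R.size + layer.W.size + layer.b.size)  # 12296
```

Only `R`, `W`, and `b` would be trained per task, which is where the parameter efficiency claimed in the abstract comes from; keeping the edit confined to a fixed low-rank subspace is one plausible way to limit representation drift on past tasks.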

Haihua Luo, Xuming Ran, Tommi Kärkkäinen, Huiyan Xue, Zhonghua Chen, Qi Xu, Fengyu Cong • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Domain-incremental learning | CORe50 | Avg Accuracy (A) | 77.95 | 49
Class-incremental learning | OmniBenchmark B0 Inc30 | Last Accuracy | 75.04 | 28
Class-incremental learning | FGVC Aircraft | Accuracy Last | 74.89 | 21
Class-incremental learning | CIFAR-100 B5 Inc5 | Avg Performance (A-bar) | 91.32 | 18
Continual Learning | DTD | Average Performance (Aavg) | 92.71 | 18
Class-incremental learning | CUB200 Inc10 (test) | Average Accuracy | 92.51 | 17
Class-incremental learning | ImageNet-R Inc5 (test) | Average Accuracy | 72.52 | 13
Class-incremental learning | ObjectNet Inc10 (test) | Average Accuracy | 71 | 6
Class-incremental learning | VTAB Inc10 (test) | Average Accuracy | 89.42 | 6
Domain-incremental learning | OfficeHome Inc65 | Average Accuracy | 75.96 | 6
Showing 10 of 22 rows
