Continual Fine-Tuning with Provably Accurate and Parameter-Free Task Retrieval

About

Continual fine-tuning aims to adapt a pre-trained backbone to new tasks sequentially while preserving performance on earlier tasks whose data are no longer available. Existing approaches fall into two categories: input adaptation and parameter adaptation. Input-adaptation methods retrieve the most relevant prompts at test time, but require continually learning a retrieval function that is itself prone to forgetting. Parameter-adaptation methods instead use a fixed input embedding function to enable retrieval-free prediction and avoid forgetting, but sacrifice representation adaptability. To combine their strengths, we propose a new parameter-adaptation method that enables adaptive use of input embeddings at test time with parameter-free retrieval. We derive task-retrieval error bounds for a clustering-based, parameter-free paradigm, providing theoretical guarantees that link low retrieval error to structural properties of task-specific representation clusters and revealing how a well-organized clustering structure enables reliable retrieval. Motivated by this insight, our method is designed with two key components: (i) an adaptive module composition strategy that learns informative task-specific updates to preserve and complement prior knowledge, and (ii) a clustering-based retrieval mechanism that captures a distinct representation signature for each task, enabling adaptive representation use at test time. Extensive experiments show that these components work synergistically to improve retrieval and predictive performance under large shifts in task semantics.
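The parameter-free retrieval idea in component (ii) can be illustrated with a minimal nearest-centroid sketch: summarize each task by a centroid of its training embeddings (computed with the fixed backbone), then assign a test input to the task whose centroid is closest. The function names and the single-centroid-per-task simplification below are our assumptions for illustration, not the paper's exact design, which may use richer cluster statistics.

```python
import numpy as np

def fit_task_centroids(embeddings_per_task):
    """Summarize each task by the mean of its training embeddings.

    embeddings_per_task: list of (n_i, d) arrays, one per task.
    Returns a (num_tasks, d) array of centroids. No parameters are
    learned, so this summary cannot be forgotten by later training.
    """
    return np.stack([e.mean(axis=0) for e in embeddings_per_task])

def retrieve_task(x_emb, centroids):
    """Parameter-free retrieval: index of the nearest task centroid.

    x_emb: (d,) embedding of a test input from the fixed backbone.
    The retrieved index selects which task-specific module to apply.
    """
    dists = np.linalg.norm(centroids - x_emb, axis=1)
    return int(np.argmin(dists))
```

Under the abstract's theoretical framing, retrieval error for such a rule is low precisely when task clusters are well separated relative to their spread, which is what motivates learning task-specific updates that keep the representation clusters well organized.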

Hang Thi-Thuy Le, Long Minh Bui, Minh Hoang, Trong Nghia Hoang · 2026

Related benchmarks

Task                        | Dataset                               | Metric             | Result | Rank
----------------------------|---------------------------------------|--------------------|--------|-----
Image Classification        | ImageNet-R                            | Accuracy           | 82.17  | 217
Image Classification        | CIFAR-100 Task A (50 classes)         | Accuracy           | 91.31  | 16
Class-incremental learning  | CIFAR-100, Uniformly Mild scenario    | Average Accuracy   | 94.1   | 10
Class-incremental learning  | ImageNet-R, Uniformly Mild scenario   | Average Accuracy   | 82.17  | 10
Class-incremental learning  | ImageNet-A, Uniformly Abrupt scenario | Average Accuracy   | 63.19  | 10
Class-incremental learning  | VTAB5T-small, Uniformly Abrupt scenario | Average Accuracy | 94.24  | 10
Class-incremental learning  | VTAB5T-large, Uniformly Abrupt scenario | Average Accuracy | 89.37  | 9
Class-incremental learning  | VTAB-Sim50, Varying scenario          | Average Accuracy   | 95.89  | 9
Class-incremental learning  | CIFAR-100, Uniformly Mild scenario    | Average Forgetting | 2.02   | 6
Class-incremental learning  | VTAB5T-small, Uniformly Abrupt scenario | Average Forgetting | 4.12 | 6

(10 of 17 rows shown)
