Task-agnostic Low-rank Residual Adaptation for Efficient Federated Continual Fine-Tuning

About

Federated Parameter-Efficient Fine-Tuning (Fed-PEFT) enables lightweight adaptation of large pre-trained models in federated learning settings by updating only a small subset of parameters. However, Fed-PEFT methods typically assume a fixed label space and static downstream tasks, which is restrictive in realistic application scenarios where clients continuously encounter new classes over time. This gives rise to an emerging problem known as Federated Continual Fine-Tuning (FCFT). In FCFT, clients collaboratively fine-tune a pre-trained model over a sequence of tasks, where each client observes disjoint sets of new classes over time and task identity is unavailable at inference time. FCFT is challenging because it simultaneously suffers from severe forgetting under non-IID client data distributions, parameter growth and task-specific inference caused by task-wise modules, and aggregation inconsistency across heterogeneous clients. To address these challenges, we propose Federated Task-agnostic Low-rank Residual Adaptation (Fed-TaLoRA), a novel approach for efficient FCFT built on task-agnostic adaptation, post-aggregation model calibration, and strategic low-rank adaptation placement. Fed-TaLoRA continuously fine-tunes a single shared module across sequential tasks to avoid task-wise parameter growth, and further introduces a theoretically grounded residual weight update mechanism that calibrates the aggregated global model and improves aggregation fidelity. We provide a theoretical analysis of the convergence and aggregation behavior of Fed-TaLoRA. Extensive experiments on four benchmark datasets demonstrate that Fed-TaLoRA consistently outperforms strong baselines while significantly reducing communication and computation costs.
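The "aggregation inconsistency" the abstract mentions can be illustrated numerically: when clients train low-rank factors B_i and A_i, naively averaging the factors separately is biased, because mean(B_i) @ mean(A_i) differs from the average of the actual updates mean(B_i @ A_i). The sketch below (a simplified NumPy illustration with made-up dimensions, not the paper's exact residual mechanism) shows a post-aggregation residual correction closing that gap.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_clients = 8, 2, 4  # hypothetical weight dimension, LoRA rank, client count

# Hypothetical per-client LoRA factors (B_i: d x r, A_i: r x d).
Bs = [rng.normal(size=(d, r)) for _ in range(n_clients)]
As = [rng.normal(size=(r, d)) for _ in range(n_clients)]

# Exact average of the clients' low-rank weight updates B_i @ A_i.
true_delta = sum(B @ A for B, A in zip(Bs, As)) / n_clients

# Naive FedAvg aggregates the factors separately, which is biased:
# mean(B_i) @ mean(A_i) != mean(B_i @ A_i) in general.
B_avg = sum(Bs) / n_clients
A_avg = sum(As) / n_clients
naive_delta = B_avg @ A_avg

# Residual calibration: fold the discrepancy back into the global update.
residual = true_delta - naive_delta
calibrated_delta = naive_delta + residual

print(np.linalg.norm(true_delta - naive_delta))       # nonzero aggregation error
print(np.linalg.norm(true_delta - calibrated_delta))  # negligible after calibration
```

In this toy setup the calibrated update matches the exact average by construction; the paper's contribution is making such a correction principled and cheap in the federated setting, where the server computes it once after aggregation rather than per client.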

Feng Yu, Jia Hu, Geyong Min • 2025

Related benchmarks

| Task | Dataset | Result (FAA) | Rank |
| --- | --- | --- | --- |
| Federated Class-Incremental Learning | Tiny-ImageNet, 10 tasks (20 classes per task), test | 78 | 54 |
| Federated Class-Incremental Learning | CIFAR-100, quantity-based label imbalance | 76.6 | 42 |
| Federated Class-Incremental Learning | CIFAR-100, distribution-based label imbalance | 77.4 | 39 |
| Federated Class-Incremental Learning | CIFAR-100, alpha = 6 | 76.6 | 5 |
| Image Classification | ImageNet, quantity-based label imbalance, 10 tasks (100 classes per task), alpha = 60 | 73.5 | 4 |
| Image Classification | ImageNet, distribution-based label imbalance, 10 tasks (100 classes per task), beta = 0.5 | 74.8 | 4 |
