
FedALT: Federated Fine-Tuning through Adaptive Local Training with Rest-of-World LoRA

About

Fine-tuning large language models (LLMs) in federated settings enables privacy-preserving adaptation but suffers from cross-client interference caused by model aggregation. Existing federated LoRA fine-tuning methods, primarily built on FedAvg, struggle with data heterogeneity, leading to harmful interference and suboptimal personalization. In this work, we propose FedALT, a novel personalized federated LoRA fine-tuning algorithm that fundamentally departs from FedAvg. Instead of initializing local training from an aggregated model, each client continues training its individual LoRA while incorporating shared knowledge through a separate Rest-of-World (RoW) LoRA component. To balance local adaptation against global information, FedALT introduces an adaptive mixer that dynamically learns input-specific weightings between the individual and RoW LoRA components, drawing conceptual foundations from the Mixture-of-Experts (MoE) paradigm. Extensive experiments on NLP benchmarks show that FedALT significantly outperforms state-of-the-art personalized federated LoRA fine-tuning methods, achieving superior local adaptation without sacrificing computational efficiency.
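To make the mechanism concrete, here is a minimal numpy sketch of the forward pass the abstract describes: a frozen base weight plus two low-rank (LoRA) updates — the client's individual adapter and a shared Rest-of-World adapter — combined by an input-specific mixer in the style of a Mixture-of-Experts gate. All names, shapes, and the softmax gate are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden size and LoRA rank (illustrative values)

# Frozen base weight; LoRA adapters factor an update as A (d->r) times B (r->d).
W_base = rng.standard_normal((d, d))
A_ind, B_ind = rng.standard_normal((d, r)), rng.standard_normal((r, d))  # individual (local) LoRA
A_row, B_row = rng.standard_normal((d, r)), rng.standard_normal((r, d))  # Rest-of-World LoRA
W_gate = rng.standard_normal((d, 2))  # adaptive mixer parameters (assumed linear gate)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fedalt_forward(x):
    """Base output plus an input-adaptive mix of the two LoRA paths."""
    w = softmax(x @ W_gate)       # (batch, 2): per-input weights for the two adapters
    ind = (x @ A_ind) @ B_ind     # individual LoRA path (trained locally)
    row = (x @ A_row) @ B_row     # Rest-of-World LoRA path (shared knowledge)
    return x @ W_base + w[:, :1] * ind + w[:, 1:] * row

x = rng.standard_normal((4, d))
print(fedalt_forward(x).shape)  # (4, 16)
```

In this reading, only the individual adapter and the gate would be trained on the client, while the RoW adapter carries aggregated knowledge from other clients and stays fixed during local training.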

Jieming Bian, Lei Wang, Letian Zhang, Jie Xu • 2025

Related benchmarks

Task: Commonsense Reasoning
Dataset: Commonsense Reasoning Suite (test)
Result: Avg Accuracy 0.7148
Rank: 22

Task: Natural Language Processing
Dataset: FLAN 8-task subset (arc_challenge, cosmos_qa, definite_pronoun_resolution, glue_qqp, hellaswag, mnli, squad_v1, sst2)
Result: Closed-book QA 68.07
Rank: 7
