
FedSRD: Sparsify-Reconstruct-Decompose for Communication-Efficient Federated Large Language Models Fine-Tuning

About

The current paradigm of training large language models (LLMs) on publicly available Web data is becoming unsustainable as high-quality data sources in specialized domains near exhaustion. Federated Learning (FL) emerges as a practical solution for the next generation of AI on a decentralized Web, enabling privacy-preserving collaborative fine-tuning on decentralized private data. While Low-Rank Adaptation (LoRA) is the standard for efficient fine-tuning, its federated application faces a critical bottleneck: communication overhead under heterogeneous network conditions. Structural redundancy in LoRA parameters both increases communication costs and causes aggregation conflicts. To address this, we propose FedSRD, a Sparsify-Reconstruct-Decompose framework for communication-efficient federated LLM fine-tuning. We introduce importance-aware sparsification to reduce the uploaded parameter count while preserving the structural integrity of LoRA updates. The server aggregates updates in full-rank space to mitigate conflicts, then decomposes the global update into a sparse low-rank format for broadcast, ensuring a symmetrically efficient cycle. We also propose an efficient variant, FedSRD-e, to reduce computational overhead. Experiments on 10 benchmarks show our framework reduces communication costs by up to 90% while improving performance on heterogeneous client data.
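The sparsify-reconstruct-decompose cycle described above can be sketched in a few lines. This is a minimal illustrative toy, not the paper's implementation: magnitude-based top-k stands in for the importance-aware sparsification, and a truncated SVD stands in for the server-side decomposition; all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # toy hidden dimension and LoRA rank

def sparsify(M, keep=0.5):
    """Keep the largest-magnitude entries (a simple stand-in for the
    paper's importance-aware sparsification); zero out the rest."""
    k = int(M.size * keep)
    thresh = np.sort(np.abs(M).ravel())[-k]
    return np.where(np.abs(M) >= thresh, M, 0.0)

# --- clients: each holds a LoRA update (B @ A) and uploads sparsified factors
clients = []
for _ in range(3):
    B, A = rng.normal(size=(d, r)), rng.normal(size=(r, d))
    clients.append((sparsify(B), sparsify(A)))

# --- server: reconstruct each update in full-rank space before averaging,
# instead of averaging B/A factors directly (the source of aggregation conflicts)
global_update = np.mean([B @ A for B, A in clients], axis=0)

# --- decompose the aggregated full-rank update back to rank r for broadcast
U, S, Vt = np.linalg.svd(global_update, full_matrices=False)
B_g = U[:, :r] * np.sqrt(S[:r])
A_g = np.sqrt(S[:r])[:, None] * Vt[:r]

# the broadcast factors approximate the full-rank aggregate at rank r
err = np.linalg.norm(global_update - B_g @ A_g) / np.linalg.norm(global_update)
print(round(err, 3))
```

Averaging the reconstructed products rather than the factors matters because the mean of `B_i @ A_i` generally differs from `mean(B) @ mean(A)`; the final truncated SVD then restores the compact low-rank format so the download leg is as cheap as the upload leg.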

Guochen Yan, Luyuan Xie, Qingni Shen, Yuejian Fang, Zhonghai Wu • 2025

Related benchmarks

Task | Dataset | Result | Rank
Federated Domain-Specific Fine-tuning | HumanEval, MBPP, MedQA, MedMCQA, FinEval, FinanceIQ, GSM8K, MATH In-Domain (test) | Average In-Domain Performance: 61.19 | 16
Out-of-Domain Generalization | AGIEval Out-of-Domain Law (test) | Average OOD Accuracy: 42.59 | 16
Federated Fine-tuning | Federated Fine-tuning Simulation Environment | Time per Round (min): 2.1 | 16
Communication Cost Analysis | Llama 3.2 3B | Upload Size (MB): 31 | 8
