FedTreeLoRA: Reconciling Statistical and Functional Heterogeneity in Federated LoRA Fine-Tuning

About

Federated Learning (FL) with Low-Rank Adaptation (LoRA) has become a standard approach for privacy-preserving LLM fine-tuning. However, existing personalized methods predominantly operate under a restrictive Flat-Model Assumption: they address client-side statistical heterogeneity but treat the model as a monolithic block, ignoring the functional heterogeneity across LLM layers. We argue that these two dimensions, statistical (horizontal) and functional (vertical), are orthogonal in source yet coupled in interaction, implying that the optimal depth of parameter sharing depends functionally on client similarity. To address this, we propose FedTreeLoRA, a framework employing tree-structured aggregation for fine-grained, layer-wise alignment. By dynamically constructing an aggregation hierarchy, FedTreeLoRA allows clients to share broad consensus on shallow 'trunks' while progressively specializing on deep 'branches'. Experiments on NLU and NLG benchmarks demonstrate that FedTreeLoRA significantly outperforms state-of-the-art methods by effectively reconciling generalization and personalization.
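The abstract gives only this high-level description of tree-structured aggregation. As a minimal sketch of the idea, the following Python snippet aggregates per-layer LoRA deltas along a client hierarchy: shallow layers are averaged across all clients (the 'trunk'), while deeper layers are averaged only within smaller groups of similar clients (the 'branches'). The function name `tree_aggregate`, the `tree_levels` structure, and the fixed partitions are hypothetical; the paper constructs its hierarchy dynamically from client similarity.

```python
import numpy as np

def tree_aggregate(client_lora, tree_levels):
    """Aggregate per-layer LoRA deltas along a client hierarchy (sketch).

    client_lora: dict client_id -> list of per-layer deltas (ndarrays,
                 ordered shallow -> deep).
    tree_levels: list, one entry per layer depth, each a partition of
                 clients into groups; coarse for shallow layers, finer
                 for deeper layers.
    Returns dict client_id -> personalized list of aggregated layers.
    """
    n_layers = len(next(iter(client_lora.values())))
    out = {cid: [None] * n_layers for cid in client_lora}
    for layer in range(n_layers):
        for group in tree_levels[layer]:  # partition at this depth
            avg = np.mean([client_lora[c][layer] for c in group], axis=0)
            for c in group:  # each group member keeps the group mean
                out[c][layer] = avg
    return out

# Toy usage: 4 clients, 3 layers. Layer 0 is shared globally ("trunk"),
# layer 1 within two clusters, layer 2 fully personalized ("branches").
rng = np.random.default_rng(0)
clients = {c: [rng.normal(size=(4, 4)) for _ in range(3)] for c in "ABCD"}
levels = [
    [["A", "B", "C", "D"]],        # global consensus
    [["A", "B"], ["C", "D"]],      # cluster-level sharing
    [["A"], ["B"], ["C"], ["D"]],  # per-client specialization
]
personalized = tree_aggregate(clients, levels)
```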

Jieming Bian, Lei Wang, Letian Zhang, Jie Xu • 2026

Related benchmarks

Task                            Dataset      Metric         Result   Rank
Natural language generation     Text Edit    ROUGE-1        88.84    8
Natural language generation     Struct2Text  ROUGE-1        55.2     8
Natural language generation     Sentiment    ROUGE-1        52.85    8
Natural language generation     Reasoning    ROUGE-1        74.23    8
Natural Language Understanding  GLUE         MNLI Accuracy  88.15    8
Natural Language Understanding  GLUE         MNLI Accuracy  82.94    7
