FedTreeLoRA: Reconciling Statistical and Functional Heterogeneity in Federated LoRA Fine-Tuning
About
Federated Learning (FL) with Low-Rank Adaptation (LoRA) has become a standard for privacy-preserving LLM fine-tuning. However, existing personalized methods predominantly operate under a restrictive Flat-Model Assumption: they address client-side \textit{statistical heterogeneity} but treat the model as a monolithic block, ignoring the \textit{functional heterogeneity} across LLM layers. We argue that these two dimensions, statistical (horizontal) and functional (vertical), are \textit{orthogonal in source yet coupled in interaction}, implying that the optimal depth of parameter sharing is functionally dependent on client similarity. To address this, we propose \textbf{FedTreeLoRA}, a framework employing tree-structured aggregation for fine-grained, layer-wise alignment. By dynamically constructing an aggregation hierarchy, FedTreeLoRA allows clients to share broad consensus on shallow `trunks' while progressively specializing on deep `branches'. Experiments on NLU and NLG benchmarks demonstrate that FedTreeLoRA significantly outperforms state-of-the-art methods by effectively reconciling generalization and personalization.
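The trunk/branch idea above can be sketched as layer-wise aggregation with depth-dependent grouping: shallow layers are averaged across all clients (global consensus), while deeper layers are averaged only within clusters of similar clients. The function name, the fixed `trunk_depth` cutoff, and the scalar "deltas" below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of tree-structured, layer-wise LoRA aggregation.
# Grouping rule and trunk_depth cutoff are assumptions for illustration.
import numpy as np

def tree_aggregate(client_updates, clusters, trunk_depth):
    """Average shallow 'trunk' layers globally; average deeper
    'branch' layers only within each client cluster.

    client_updates: {client_id: [delta_layer_0, ..., delta_layer_{L-1}]}
    clusters: list of client-id lists, one per branch
    trunk_depth: layers with index < trunk_depth are shared by all clients
    """
    num_layers = len(next(iter(client_updates.values())))
    result = {cid: [None] * num_layers for cid in client_updates}
    for layer in range(num_layers):
        if layer < trunk_depth:
            groups = [list(client_updates)]   # trunk: one global group
        else:
            groups = clusters                 # branches: per-cluster groups
        for group in groups:
            avg = np.mean([client_updates[c][layer] for c in group], axis=0)
            for c in group:
                result[c][layer] = avg
    return result

# Toy example: 4 clients, 3 layers, scalar "deltas" for readability.
updates = {c: [float(c), float(c) * 10, float(c) * 100] for c in range(4)}
agg = tree_aggregate(updates, clusters=[[0, 1], [2, 3]], trunk_depth=1)
```

In this toy run, layer 0 is a global average shared by all four clients, while layers 1 and 2 differ between the two clusters, mirroring the "broad consensus on shallow trunks, specialization on deep branches" behavior described above.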
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Generation | Text Edit | ROUGE-1 | 88.84 | 8 |
| Natural Language Generation | Struct2Text | ROUGE-1 | 55.2 | 8 |
| Natural Language Generation | Sentiment | ROUGE-1 | 52.85 | 8 |
| Natural Language Generation | Reasoning | ROUGE-1 | 74.23 | 8 |
| Natural Language Understanding | GLUE | MNLI Accuracy | 88.15 | 8 |
| Natural Language Understanding | GLUE | MNLI Accuracy | 82.94 | 7 |