
Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models

About

Foundation models (FMs) adapt well to specific domains or tasks with fine-tuning, and federated learning (FL) offers a path to privacy-preserving fine-tuning of FMs with on-device local data. We consider FMs with small to medium parameter counts (a few billion parameters at most), referred to as on-device FMs (ODFMs): models that can be deployed on devices for inference but can only be fine-tuned with parameter-efficient methods. In our work, we tackle the data and system heterogeneity problems of federated fine-tuning of ODFMs with a novel method based on heterogeneous low-rank adaptations (LoRAs), namely HetLoRA. We first show that the naive approach of using a homogeneous LoRA rank across devices faces a trade-off between overfitting and slow convergence. HetLoRA instead allows heterogeneous ranks across client devices and efficiently aggregates and distributes the resulting heterogeneous LoRA modules. By applying rank self-pruning locally and sparsity-weighted aggregation at the server, HetLoRA combines the advantages of high- and low-rank LoRAs, achieving faster convergence and better final performance than homogeneous LoRA. Furthermore, HetLoRA is more computationally efficient than full fine-tuning, making it well suited to federated fine-tuning across heterogeneous devices.
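The two server/client mechanisms named above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the `keep_frac` pruning threshold, and the use of Frobenius norms as weights are illustrative stand-ins for the paper's rank self-pruning and sparsity-weighted aggregation; heterogeneous ranks are reconciled by zero-padding each client's LoRA pair to the largest rank before averaging.

```python
import numpy as np

def pad_lora(B, A, r_max):
    """Zero-pad a rank-r LoRA pair (B: d x r, A: r x k) up to rank r_max,
    so that pairs with different ranks can be averaged entrywise."""
    d, r = B.shape
    _, k = A.shape
    B_pad = np.zeros((d, r_max)); B_pad[:, :r] = B
    A_pad = np.zeros((r_max, k)); A_pad[:r, :] = A
    return B_pad, A_pad

def self_prune(B, A, keep_frac=0.25):
    """Illustrative client-side rank self-pruning: keep only the rank
    components whose contribution norm is at least keep_frac of the
    largest component's norm (keep_frac is a hypothetical knob)."""
    scores = np.linalg.norm(B, axis=0) * np.linalg.norm(A, axis=1)
    keep = scores >= keep_frac * scores.max()
    return B[:, keep], A[keep, :]

def aggregate_hetlora(clients):
    """Server-side aggregation of heterogeneous-rank LoRA pairs.
    clients: list of (B_i, A_i), possibly with different ranks.
    Each client is weighted by ||B_i A_i||_F, a simplified stand-in
    for the paper's sparsity-weighted scheme."""
    r_max = max(B.shape[1] for B, _ in clients)
    weights = np.array([np.linalg.norm(B @ A) for B, A in clients])
    weights = weights / weights.sum()
    padded = [pad_lora(B, A, r_max) for B, A in clients]
    B_agg = sum(w * Bp for w, (Bp, _) in zip(weights, padded))
    A_agg = sum(w * Ap for w, (_, Ap) in zip(weights, padded))
    return B_agg, A_agg
```

For distribution, the server would truncate the aggregated pair back down to each client's local rank (the transpose of the padding step above), so low-capacity devices still receive a module they can hold.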

Yae Jee Cho, Luyang Liu, Zheng Xu, Aldi Fahrezi, Gauri Joshi • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Commonsense Reasoning | Commonsense Reasoning (BoolQ, PIQA, SIQA, HellaS., WinoG., ARC-e, ARC-c, OBQA) | BoolQ Accuracy 63.88 | 61 |
| Text Classification | BANKING77 Dir(0.01) (test) | Accuracy 62.98 | 45 |
| Cross-task Generalization | Super-NaturalInstructions English Track (unseen clients) | Weighted Avg Rouge-L 61.53 | 27 |
| Text Classification | 20 Newsgroups Dir(0.01) (test) | Accuracy 0.3734 | 17 |
| Text Classification | BANKING77 Dir(0.5) (test) | Accuracy 87.2 | 17 |
| Text Classification | BANKING77 Dir(0.1) (test) | Accuracy 77.44 | 17 |
| Text Classification | 20 Newsgroups Dir(0.5) (test) | Accuracy 68.12 | 17 |
| Text Classification | 20 Newsgroups Dir(0.1) (test) | Accuracy 61.57 | 17 |
| Multi-turn Conversation Evaluation | MT-Bench | Wizard Score 3.51 | 10 |
| Image Classification | MNIST, DTD, EuroSAT, GTSRB, SVHN (test) | Accuracy (MNIST) 95.37 | 10 |

Showing 10 of 15 rows
