Robust Federated Finetuning of Foundation Models via Alternating Minimization of LoRA

About

Parameter-Efficient Fine-Tuning (PEFT) has emerged as a training strategy that updates only a small subset of model parameters, significantly lowering both computational and memory demands. PEFT also reduces communication in federated learning, where the cost of each round depends on the size of the model updates. In this work, we examine the limitations of prior studies that integrate LoRA, a well-known PEFT method, with federated fine-tuning, and then introduce RoLoRA, a robust federated fine-tuning framework built on an alternating minimization of the LoRA factors. RoLoRA is more robust to reductions in the number of fine-tuning parameters and to increases in data heterogeneity. Our results indicate that RoLoRA not only retains the communication benefits of LoRA but also substantially improves robustness and effectiveness across multiple federated fine-tuning scenarios.
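To make the alternating-minimization idea concrete, the following is a minimal NumPy sketch under simplifying assumptions, not the paper's implementation: each simulated client holds its own synthetic low-rank target in place of local data, rounds alternate which LoRA factor is re-fit while the other stays frozen, and the server averages only the factor trained in that round. All names and dimensions here (`base`, `targets`, `d`, `k`, `r`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 32, 32, 4               # full weight is d x k; LoRA rank is r
num_clients, rounds = 4, 10

# A shared low-rank "ground-truth" update, perturbed per client to mimic
# heterogeneous local data. All of this setup is synthetic and illustrative.
base = rng.normal(size=(d, r)) @ rng.normal(size=(r, k))
targets = [base + 0.1 * rng.normal(size=(d, k)) for _ in range(num_clients)]

# Shared LoRA factors: the weight update is B @ A, with B (d x r), A (r x k).
A = rng.normal(size=(r, k))
B = np.zeros((d, r))

for t in range(rounds):
    if t % 2 == 0:
        # Even round: A is frozen; each client re-fits B by least squares.
        local = [T @ np.linalg.pinv(A) for T in targets]
        B = np.mean(local, axis=0)
    else:
        # Odd round: B is frozen; each client re-fits A by least squares.
        local = [np.linalg.pinv(B) @ T for T in targets]
        A = np.mean(local, axis=0)
    # Only one factor changes per round, so averaging that factor is an
    # exact aggregate; averaging A and B at the same time would not
    # average the product B @ A.

avg_target = np.mean(targets, axis=0)
err = np.linalg.norm(B @ A - avg_target) / np.linalg.norm(avg_target)
print(f"relative fit error against the averaged target: {err:.4f}")
```

Because the frozen factor is identical across clients, averaging the re-fit factor yields an exact aggregate of the clients' solutions; averaging both factors simultaneously, as a naive FedAvg over LoRA adapters would, does not average the product `B @ A`, which is one reason alternating schemes of this kind can be more robust.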

Shuangyi Chen, Yue Ju, Hardik Dalal, Zhongwen Zhu, Ashish Khisti · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | Tiny ImageNet (test) | Accuracy | 50.87 | 265 |
| Natural Language Understanding | GLUE (val) | SST-2 Accuracy | 95.6 | 170 |
| Natural Language Understanding | GLUE (test) | SST-2 Accuracy | 93.65 | 33 |
| Image Classification | CIFAR-100 (test) | Accuracy | 67.96 | 24 |
| Federated Cross-Domain Recommendation | GoodReads Children | H@5 | 4.94 | 14 |
| Federated Cross-Domain Recommendation | GoodReads Crime | H@5 | 3.22 | 14 |
| Federated Cross-Domain Recommendation | GoodReads Comics | H@5 | 8.2 | 14 |
| Federated Cross-Domain Recommendation | Amazon Clothing (test) | H@5 | 0.83 | 10 |
| Federated Cross-Domain Recommendation | GoodReads Crime, Comics & Children (average) | Avg H@5 | 5.45 | 10 |
| Federated Cross-Domain Recommendation | Amazon Beauty (test) | H@5 | 1.65 | 10 |
(Showing 10 of 13 rows.)
