
FedEx-LoRA: Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models

About

Low-Rank Adaptation (LoRA) is a popular technique for efficient fine-tuning of foundation models. However, applying LoRA in federated learning environments, where data is distributed across multiple clients, presents unique challenges. Existing methods rely on traditional federated averaging of LoRA adapters, resulting in inexact updates. To address this, we propose Federated Exact LoRA, or FedEx-LoRA, which adds a residual error term to the pretrained frozen weight matrix. Our approach achieves exact updates with minimal computational and communication overhead, preserving LoRA's efficiency. We evaluate the method on various models across arithmetic reasoning, commonsense reasoning, natural language understanding, and natural language generation tasks, showing consistent performance gains over state-of-the-art methods across multiple settings. Through extensive analysis, we quantify the deviations of conventionally averaged updates from the ideal solution and show that they are significant, highlighting the need for exact aggregation. Our method's simplicity, efficiency, and broad applicability position it as a promising solution for accurate and effective federated fine-tuning of foundation models. Our code is publicly available at https://github.com/RaghavSinghal10/fedex-lora.
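
To make the aggregation step concrete, below is a minimal NumPy sketch of the idea the abstract describes: the adapters are averaged as in standard federated averaging, and the residual between the exact average of the clients' low-rank updates and the product of the averaged adapters is folded into the frozen weight matrix. The function name fedex_lora_aggregate, the uniform client weighting, and the scaling argument are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def fedex_lora_aggregate(W0, A_clients, B_clients, scaling=1.0):
    """One round of exact aggregation for federated LoRA (illustrative sketch).

    W0        : frozen pretrained weight, shape (d, k)
    A_clients : list of per-client LoRA A matrices, each of shape (r, k)
    B_clients : list of per-client LoRA B matrices, each of shape (d, r)
    """
    # Standard federated averaging of the adapters (the inexact part).
    A_avg = np.mean(A_clients, axis=0)
    B_avg = np.mean(B_clients, axis=0)

    # Exact average of the low-rank updates the clients actually computed.
    exact_update = np.mean([B @ A for A, B in zip(A_clients, B_clients)], axis=0)

    # Residual error term: folding it into the frozen weight makes the global
    # update exact, since B_avg @ A_avg alone is not the true average update.
    residual = exact_update - B_avg @ A_avg
    W_new = W0 + scaling * residual
    return W_new, A_avg, B_avg

# Toy usage: 3 clients, d=8, k=6, rank r=2 (shapes chosen for illustration).
rng = np.random.default_rng(0)
W0 = rng.normal(size=(8, 6))
A_clients = [rng.normal(size=(2, 6)) for _ in range(3)]
B_clients = [rng.normal(size=(8, 2)) for _ in range(3)]
W_new, A_avg, B_avg = fedex_lora_aggregate(W0, A_clients, B_clients)

# The aggregated model W_new + B_avg @ A_avg matches the exact average
# W0 + mean_k(B_k @ A_k) up to floating-point error.
exact = W0 + np.mean([B @ A for A, B in zip(A_clients, B_clients)], axis=0)
assert np.allclose(W_new + B_avg @ A_avg, exact)
```

Note that for K clients the residual mean_k(B_k A_k) - B_avg A_avg has rank at most (K + 1)r, so it can in principle be communicated in factored low-rank form rather than as a dense d-by-k matrix, which is consistent with the abstract's claim of minimal communication overhead.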

Raghav Singhal, Kaustubh Ponkshe, Praneeth Vepakomma • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Question Answering | SQuAD v1.1 | F1: 91.82 | 79 |
| Natural Language Inference | GLUE (test) | MNLI Acc: 92.74 | 18 |
