
DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under Differentially Private Federated Learning using Dynamic Low-Rank Adaptation

About

Federated learning (FL) allows clients to collaboratively train a global model without sharing their local data with a server. However, clients' contributions to the server can still leak sensitive information. Differential privacy (DP) addresses such leakage by providing formal privacy guarantees, with mechanisms that add randomness to the clients' contributions. This randomness makes it infeasible to train large transformer-based models, which are common in modern federated learning systems. In this work, we empirically evaluate the practicality of fine-tuning large-scale on-device transformer-based models with differential privacy in a federated learning system. We conduct comprehensive experiments on various system properties for tasks spanning a multitude of domains: speech recognition, computer vision (CV) and natural language understanding (NLU). Our results show that full fine-tuning under differentially private federated learning (DP-FL) generally leads to severe performance degradation, which can be alleviated by reducing the dimensionality of contributions through parameter-efficient fine-tuning (PEFT). Our benchmarks of existing DP-PEFT methods show that DP-Low-Rank Adaptation (DP-LoRA) consistently outperforms other methods. An even more promising approach, DyLoRA, which makes the low rank variable, would straightforwardly break differential privacy if naively combined with FL. We therefore propose an adaptation method that can be combined with differential privacy and call it DP-DyLoRA. Finally, we are able to reduce the accuracy degradation and word error rate (WER) increase due to DP to less than 2% and 7% respectively with 1 million clients and a stringent privacy budget of $\epsilon=2$.
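To make the rank-truncation idea concrete, below is a minimal NumPy sketch of a LoRA layer with DyLoRA-style variable rank: the frozen weight is augmented by a low-rank update, and each forward pass uses only the first `rank` components of the factors. This is an illustrative toy, not the paper's implementation; all names, shapes, and the initialisation scheme here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper).
d_in, d_out, r_max = 16, 16, 8

# Frozen pretrained weight and trainable low-rank factors (LoRA).
W = rng.standard_normal((d_out, d_in))        # frozen
A = rng.standard_normal((r_max, d_in)) * 0.01 # down-projection, trainable
B = np.zeros((d_out, r_max))                  # up-projection, zero-initialised

def lora_forward(x, rank):
    """Forward pass using only the first `rank` rank-one components,
    mimicking DyLoRA's truncation to a rank sampled per step."""
    delta = B[:, :rank] @ A[:rank, :]   # low-rank update of shape (d_out, d_in)
    return (W + delta) @ x

x = rng.standard_normal(d_in)
# A DyLoRA-style step would sample the rank, e.g. uniformly in [1, r_max]:
sampled_rank = int(rng.integers(1, r_max + 1))
y = lora_forward(x, sampled_rank)

# With B zero-initialised, every truncation initially reproduces the
# frozen model's output, so training starts from the pretrained behaviour.
assert np.allclose(y, W @ x)
```

In DP-FL, only the (clipped, noised) updates to `A` and `B` would be sent to the server, which is why shrinking the trainable dimensionality helps under a fixed privacy budget.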

Jie Xu, Karthikeyan Saravanan, Rogier van Dalen, Haaris Mehmood, David Tuckey, Mete Ozay • 2024

Related benchmarks

Task            | Dataset                        | Metric | Result | Rank
Code Generation | HumanEval                      | Pass@1 | 61     | 850
Code Generation | HumanEval+                     | Pass@1 | 55.5   | 189
Code Generation | MBPP+                          | Pass@1 | 54.9   | 122
Code Generation | MBPP                           | Pass@1 | 63.9   | 113
Code            | MBPP                           | Pass@1 | 69.6   | 43
Code Generation | BigCodeBench-Instruct Hard     | Pass@1 | 9.5    | 37
Code Generation | BigCodeBench-Instruct (Full)   | Pass@1 | 0.279  | 37
Code Completion | HumanEval+                     | Pass@1 | 41.3   | 33
Code Completion | MBPP+                          | Pass@1 | 56.1   | 33
Code Completion | HumanEval                      | Pass@1 | 0.478  | 20
