
SHE-LoRA: Selective Homomorphic Encryption for Federated Tuning with Heterogeneous LoRA

About

Federated fine-tuning is critical for improving the performance of large language models (LLMs) in handling domain-specific tasks while keeping training data decentralized and private. However, prior work has shown that clients' private data can actually be recovered via gradient inversion attacks. Existing privacy preservation techniques against such attacks typically entail performance degradation and high costs, making them ill-suited for clients with heterogeneous data distributions and device capabilities. In this paper, we propose SHE-LoRA, which integrates selective homomorphic encryption (SHE) and low-rank adaptation (LoRA) to enable efficient and privacy-preserving federated tuning of LLMs in cross-device environments. Based on model parameter sensitivity assessment, heterogeneous clients adaptively negotiate and select a subset of model parameters for homomorphic encryption. To ensure accurate model aggregation, we design a column-aware secure aggregation method and customized reparameterization techniques to align the aggregation results with the heterogeneous device capabilities of clients. Extensive experiments demonstrate that SHE-LoRA maintains performance comparable to non-private baselines, achieves strong resistance to state-of-the-art attacks, and significantly reduces communication overhead by 99.71% and encryption time by 99.87%, compared to HE baselines.
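The abstract describes clients ranking model parameters by sensitivity and encrypting only the most sensitive subset. As a rough illustration of that selection step, here is a minimal sketch in NumPy: it scores the columns of a LoRA weight matrix by a stand-in sensitivity metric (per-column gradient norm — the paper's actual assessment method may differ) and partitions the matrix into an encrypt-bound part and a plaintext part. The function names and the toy data are illustrative, not from the paper, and a real system would pass the selected columns to an HE library (e.g., a CKKS implementation) rather than keep them as plain arrays.

```python
import numpy as np

def select_sensitive_columns(grad_B, k):
    """Rank columns of a LoRA matrix by a stand-in sensitivity score
    (L2 norm of the gradient per column) and return the indices of
    the k most sensitive columns."""
    scores = np.linalg.norm(grad_B, axis=0)
    return np.argsort(scores)[::-1][:k]

def split_for_encryption(B, cols):
    """Partition a LoRA weight matrix into the columns selected for
    homomorphic encryption and the plaintext remainder."""
    mask = np.zeros(B.shape[1], dtype=bool)
    mask[cols] = True
    return B[:, mask], B[:, ~mask]

# Toy example: a 4x6 LoRA matrix with a synthetic gradient whose
# 2nd and 5th columns dominate, so those two should be selected.
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 6))
grad = np.full((4, 6), 0.01)
grad[:, [1, 4]] = 10.0

cols = select_sensitive_columns(grad, k=2)
enc_part, plain_part = split_for_encryption(B, cols)
print(sorted(cols.tolist()))   # → [1, 4]
print(enc_part.shape)          # → (4, 2)
print(plain_part.shape)        # → (4, 4)
```

Because only `k` of the columns are ciphertext, the per-round encryption and upload cost scales with `k` rather than with the full adapter size, which is the mechanism behind the communication and encryption-time savings the abstract reports.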

Jianmin Liu, Li Yan, Borui Li, Lei Yu, Chao Shen • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Instruction Following | IFEval | Accuracy (0-100): 67.34 | 292 |
| Mathematical Problem Solving | MATH | Accuracy: 78.86 | 166 |
| Question Answering | MMLU-Pro | Accuracy: 47.36 | 56 |
| Scientific Reasoning | GPQA | Accuracy: 43.94 | 50 |
| Complex Reasoning | BBH | Accuracy: 63.26 | 40 |
| Text Reconstruction from Gradients | Rotten Tomatoes | ROUGE-1: 0.00 | 36 |
| Data Reconstruction | SST2 | ROUGE-1: 0.98 | 12 |
| Image Classification | MNIST, DTD, EuroSAT, GTSRB, SVHN (test) | Accuracy (MNIST): 99.33 | 10 |
| Membership Inference Attack | PILE (train) | Loss: 8.2 | 7 |
| Membership Inference | Pile | Loss (AUROC): 52.4 | 7 |
(10 of 14 rows shown.)
