
DP-Forward: Fine-tuning and Inference on Language Models with Differential Privacy in Forward Pass

About

Differentially private stochastic gradient descent (DP-SGD) adds noise to gradients in back-propagation, safeguarding training data from privacy leakage, particularly membership inference. It fails to cover (inference-time) threats like embedding inversion and sensitive attribute inference. It is also costly in storage and computation when used to fine-tune large pre-trained language models (LMs). We propose DP-Forward, which directly perturbs embedding matrices in the forward pass of LMs. It satisfies stringent local DP requirements for training and inference data. To instantiate it using the smallest matrix-valued noise, we devise an analytic matrix Gaussian mechanism (aMGM) by drawing possibly non-i.i.d. noise from a matrix Gaussian distribution. We then investigate perturbing outputs from different hidden (sub-)layers of LMs with aMGM noise. DP-Forward's utility on three typical tasks almost hits the non-private baseline and outperforms DP-SGD by up to 7.7pp at a moderate privacy level. It saves 3× time and memory costs compared to DP-SGD with the latest high-speed library. It also reduces the average success rates of embedding inversion and sensitive attribute inference by up to 88pp and 41pp, respectively, whereas DP-SGD fails.
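The core idea is easy to sketch: draw noise from a matrix Gaussian distribution MN(0, U, V) (sampled as A·G·Bᵀ, where G is i.i.d. standard normal and U = AAᵀ, V = BBᵀ are the row and column covariances) and add it to the embedding matrix in the forward pass. The toy below is a minimal illustration, not the paper's calibrated aMGM: the noise scale `sigma` is a placeholder, whereas the actual mechanism derives the smallest covariances analytically from the privacy budget.

```python
import numpy as np

def matrix_gaussian_noise(row_cov, col_cov, rng):
    """Sample from MN(0, U, V) as Z = A @ G @ B.T with U = A A^T, V = B B^T."""
    A = np.linalg.cholesky(row_cov)
    B = np.linalg.cholesky(col_cov)
    G = rng.standard_normal((row_cov.shape[0], col_cov.shape[0]))
    return A @ G @ B.T

def perturb_embeddings(X, sigma, rng):
    """Add matrix Gaussian noise to an embedding matrix X (seq_len x hidden).

    Uses the i.i.d. special case U = sigma^2 I, V = I; sigma is a placeholder,
    not the analytically calibrated aMGM variance.
    """
    n, d = X.shape
    U = (sigma ** 2) * np.eye(n)
    V = np.eye(d)
    return X + matrix_gaussian_noise(U, V, rng)

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))  # toy embedding matrix
X_noisy = perturb_embeddings(X, sigma=1.0, rng=rng)
```

Because the perturbation happens in the forward pass, downstream layers (and the party running them) only ever see `X_noisy`, which is what yields local DP for both training and inference inputs.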

Minxin Du, Xiang Yue, Sherman S. M. Chow, Tianhao Wang, Chenyu Huang, Huan Sun • 2023

Related benchmarks

Task                     Dataset                                   Metric        Result   Rank
Text Classification      SST2                                      Accuracy      93.6     71
Privacy Protection       Task Inputs (SST2, MMLU, PIQA, IFEval)    TTRSR (%)     77.08    11
Instruction Fine-tuning  AI Research Instructions and Outputs      Accuracy (%)  85       5
