
TAG: Gradient Attack on Transformer-based Language Models

About

Although federated learning has increasingly gained attention as a way to utilize local devices while enhancing data privacy, recent studies in computer vision show that gradients publicly shared during training can reveal private training images to a third party (gradient leakage). However, there is no systematic understanding of the gradient leakage mechanism on Transformer-based language models. In this paper, as a first attempt, we formulate the gradient attack problem on Transformer-based language models and propose a gradient attack algorithm, TAG, to reconstruct the local training data. We develop a set of metrics to quantitatively evaluate the effectiveness of the proposed attack algorithm. Experimental results on Transformer, TinyBERT$_{4}$, TinyBERT$_{6}$, BERT$_{BASE}$, and BERT$_{LARGE}$ using the GLUE benchmark show that TAG reconstructs training data under a wider range of weight distributions than prior methods, achieving a 1.5$\times$ recovery rate and 2.5$\times$ ROUGE-2 score without needing ground-truth labels. TAG can recover up to 90$\%$ of the data by attacking gradients on the CoLA dataset. In addition, TAG is a stronger adversary on large models, small dictionary sizes, and short input lengths. We hope the proposed TAG will shed light on the privacy leakage problem in Transformer-based NLP models.
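The page includes no code, but the gradient-matching mechanism that TAG builds on (as in prior "deep leakage from gradients" work) can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption rather than the paper's actual setup: the toy model stands in for a Transformer, and the sizes, optimizer, iteration count, and the L1 weighting coefficient `alpha` are guesses. TAG's real objective combines L2 and L1 gradient distances with layer-dependent weights.

```python
import torch
import torch.nn as nn

# Minimal sketch of a gradient-matching attack in the spirit of TAG / DLG.
# All model and hyperparameter choices here are illustrative assumptions.
torch.manual_seed(0)
vocab_size, seq_len, hidden = 100, 8, 32

# Toy "language model": a two-layer net as a stand-in for a Transformer.
model = nn.Sequential(nn.Linear(vocab_size, hidden), nn.Tanh(), nn.Linear(hidden, 2))
loss_fn = nn.CrossEntropyLoss()

# Victim's private batch: one-hot token relaxations and labels.
true_x = torch.zeros(seq_len, vocab_size)
true_x[torch.arange(seq_len), torch.randint(vocab_size, (seq_len,))] = 1.0
true_y = torch.tensor([1] * seq_len)

# The gradients the victim would share in federated learning.
true_grads = torch.autograd.grad(loss_fn(model(true_x), true_y), model.parameters())

# Attacker initializes dummy data and a dummy (soft) label distribution,
# so no ground-truth label is needed.
dummy_x = torch.randn(seq_len, vocab_size, requires_grad=True)
dummy_y = torch.randn(seq_len, 2, requires_grad=True)

opt = torch.optim.Adam([dummy_x, dummy_y], lr=0.1)
for step in range(300):
    opt.zero_grad()
    pred = model(dummy_x)
    # Soft-label cross entropy on the dummy batch.
    dummy_loss = torch.mean(torch.sum(
        -torch.softmax(dummy_y, -1) * torch.log_softmax(pred, -1), -1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    # Match dummy gradients to the shared ones; TAG combines L2 and L1
    # distances (alpha is a placeholder for its layer-dependent weighting).
    alpha = 0.01
    match = sum(((dg - tg) ** 2).sum() + alpha * (dg - tg).abs().sum()
                for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    opt.step()

# Recovered tokens: nearest one-hot per position vs. the private tokens.
print(dummy_x.argmax(dim=1), true_x.argmax(dim=1))
```

On this toy setup the recovered token indices typically converge to the private ones; the paper's contribution is making this kind of attack effective on real Transformer language models and quantifying it with recovery-rate and ROUGE metrics.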

Jieren Deng, Yijue Wang, Ji Li, Chao Shang, Hang Liu, Sanguthevar Rajasekaran, Caiwen Ding • 2021

Related benchmarks

Task                                 Dataset          Metric    Result   Rank
Text reconstruction from gradients   Rotten Tomatoes  ROUGE-1   73.6     36
Text reconstruction from gradients   CoLA             ROUGE-1   82.9     24
Text reconstruction from gradients   SST-2            ROUGE-1   80.8     24
Gradient Inversion                   Yelp (val)       Accuracy  0.4167   7
