iDLG: Improved Deep Leakage from Gradients

About

It is widely believed that sharing gradients does not leak private training data in distributed learning systems such as Collaborative Learning and Federated Learning. Recently, Zhu et al. presented an approach showing that it is possible to obtain private training data from publicly shared gradients. In their Deep Leakage from Gradients (DLG) method, they synthesize dummy data and corresponding labels under the supervision of the shared gradients. However, DLG has difficulty converging and discovering the ground-truth labels consistently. In this paper, we find that sharing gradients definitely leaks the ground-truth labels. We propose a simple but reliable approach to extract accurate data from the gradients. In particular, our approach can extract the ground-truth labels with certainty, as opposed to DLG; hence we name it Improved DLG (iDLG). Our approach is valid for any differentiable model trained with cross-entropy loss over one-hot labels. We mathematically illustrate how our method extracts ground-truth labels from the gradients and empirically demonstrate its advantages over DLG.
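The label-leakage observation behind iDLG can be illustrated with a toy last layer: for cross-entropy over a one-hot label, the gradient of the loss with respect to the final layer's bias is `p - y` (negative only at the true class), and each weight-gradient row is `(p_i - y_i) * h`, so the true-label row has the opposite sign of all others. The sketch below is an illustration of that rule under assumed toy shapes, not the authors' code:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Toy last layer: logits = W @ h + b, cross-entropy over a one-hot label y.
# Shapes and values here are assumptions chosen for illustration.
num_classes, feat_dim = 10, 32
W = rng.normal(size=(num_classes, feat_dim))
b = rng.normal(size=num_classes)
h = rng.normal(size=feat_dim)       # features entering the last layer
true_label = 7
y = np.zeros(num_classes)
y[true_label] = 1.0

# Analytic gradients of the cross-entropy loss for this layer:
#   dL/db_i = p_i - y_i           (negative only at the true label)
#   dL/dW_i = (p_i - y_i) * h     (the true-label row has opposite sign)
p = softmax(W @ h + b)
grad_b = p - y
grad_W = np.outer(p - y, h)

# Rule 1 (bias gradient shared): the only negative entry marks the label.
label_from_bias = int(np.argmin(grad_b))

# Rule 2 (weight gradient only): the true-label row of grad_W has a negative
# inner product with every other row, so count negative cross-products per row.
G = grad_W @ grad_W.T
neg_counts = (G < 0).sum(axis=1)
label_from_weights = int(np.argmax(neg_counts))

print(label_from_bias, label_from_weights)  # 7 7
```

With the label fixed analytically, the attacker only needs to optimize the dummy input to match the shared gradients, which is why iDLG converges more reliably than DLG's joint data-and-label search.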

Bo Zhao, Konda Reddy Mopuri, Hakan Bilen• 2020

Related benchmarks

Task                                  Dataset                Result                    Rank
Adjacency Matrix Reconstruction       Graph Data Instances   AUC 76.04                 45
Node Feature Reconstruction           Graph Data Instances   MSE 0.5                   45
Gradient Inversion Attack             CIFAR-10               PSNR 9.51                 35
Gradient Inversion Attack             MNIST                  PSNR 9.39                 20
Adjacency Matrix Recovery             MUTAG                  AUC 51.08                 9
Graph data recovery from gradients    MUTAG                  Node Feature MSE 1.1063   9
Graph data recovery from gradients    ENZYMES                Node Feature MSE 1.5751   9
Graph data recovery from gradients    PROTEINS               Node Feature MSE 1.4736   9
Node Feature Recovery                 MUTAG                  MSE 1.0636                9
Graph data recovery from gradients    PTC-MR                 Node Feature MSE 1.0608   9

(10 of 11 rows shown)
