
Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models

About

Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency. Previous works have shown that federated gradient updates contain information that can be used to approximately recover user data in some situations. These previous attacks on user privacy have been limited in scope and do not scale to gradient updates aggregated over even a handful of data points, leaving some to conclude that data privacy is still intact for realistic training regimes. In this work, we introduce a new threat model based on minimal but malicious modifications of the shared model architecture which enable the server to directly obtain a verbatim copy of user data from gradient updates without solving difficult inverse problems. Even user data aggregated over large batches -- where previous methods fail to extract meaningful content -- can be reconstructed by these minimally modified models.
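The core observation behind this class of attack can be sketched for the simplest case. For a single data point passing through a linear layer y = Wx + b, the weight gradient of row i factors as dL/dW_i = (dL/dy_i) · x, while dL/db_i = dL/dy_i; dividing the two recovers the input x verbatim, with no inverse problem to solve. The numpy sketch below illustrates only this one-point case, not the paper's full imprint construction for aggregated batches; all variable names and the choice of loss are illustrative assumptions.

```python
import numpy as np

# Sketch: gradients of a linear layer y = W x + b leak the input x.
# Assumed setup (not from the paper): 8-dim input, 4 output units,
# and a toy loss L = 0.5 * ||y||^2 so that dL/dy = y.

rng = np.random.default_rng(0)
x = rng.normal(size=8)          # private user input
W = rng.normal(size=(4, 8))     # linear layer of the shared model
b = rng.normal(size=4)

y = W @ x + b
dL_dy = y                       # gradient of the toy loss w.r.t. y

dL_dW = np.outer(dL_dy, x)      # weight gradient the server receives
dL_db = dL_dy                   # bias gradient the server receives

i = int(np.argmax(np.abs(dL_db)))    # any row with nonzero bias gradient
x_recovered = dL_dW[i] / dL_db[i]    # verbatim copy of the input

assert np.allclose(x_recovered, x)
```

With gradients averaged over a batch, this ratio instead yields a mixture of inputs; the paper's modified models are designed to separate individual data points even in that aggregated setting.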

Liam Fowl, Jonas Geiping, Wojtek Czaja, Micah Goldblum, Tom Goldstein • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Gradient Inversion Attack | CIFAR-10 | PSNR | 17.35 | 35 |
| Gradient Inversion Attack | MNIST | PSNR | 17.53 | 20 |
| Gradient Inversion Attack | ImageNet | PSNR | 16.58 | 17 |
| Gradient Inversion Attack | Lung-Colon | PSNR | 17.1 | 14 |
| Gradient Inversion Attack | HAM10000 | PSNR | 15.32 | 14 |
| Linear Layer Leakage | MNIST 28x28x1 (train) | Model Size Overhead (MB) | 153.2 | 6 |
| Linear Layer Leakage | CIFAR-100 32x32x3 (train) | Model Size Overhead (MB) | 600.1 | 6 |
| Linear Layer Leakage | Tiny ImageNet 64x64x3 (train) | Model Size Overhead (MB) | 2.40e+3 | 6 |
| Linear Layer Leakage | ImageNet 256x256x3 (train) | Model Size Overhead (MB) | 3.84e+4 | 6 |
| Gradient Leakage Attack | Tiny-ImageNet | Robbing the Fed Overhead | 1.18e+3 | 5 |
