Practical Defences Against Model Inversion Attacks for Split Neural Networks

About

We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server. We demonstrate that the attack can be successfully performed with limited knowledge of the data distribution by the attacker. We propose a simple additive noise method to defend against model inversion, finding that the method can significantly reduce attack efficacy at an acceptable accuracy trade-off on MNIST. Furthermore, we show that NoPeekNN, an existing defensive method, protects different information from exposure, suggesting that a combined defence is necessary to fully protect private user data.
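The additive noise defence described above can be sketched as follows. This is a minimal, illustrative NumPy example, not the authors' implementation: the single-layer client and server models, the layer sizes, and the noise scale of 0.1 are all assumptions chosen for brevity. The key idea is only that the client perturbs the intermediate ("smashed") activations before transmitting them, so the computational server never sees the clean representation it would need for an accurate inversion.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_forward(x, w_client):
    """Client-side portion of the split network (illustrative linear + ReLU layer)."""
    return np.maximum(x @ w_client, 0.0)

def add_laplacian_noise(activations, scale=0.1):
    """Additive noise defence: perturb intermediate activations before they
    leave the client. Larger `scale` hampers model inversion more, at the
    cost of task accuracy (the trade-off the paper measures on MNIST)."""
    return activations + rng.laplace(loc=0.0, scale=scale, size=activations.shape)

def server_forward(h, w_server):
    """Server-side portion of the split network (illustrative linear layer)."""
    return h @ w_server

# Toy dimensions; the real models would be deep networks trained on e.g. MNIST.
x = rng.standard_normal((4, 16))         # batch of 4 flattened inputs
w_client = rng.standard_normal((16, 8))  # hypothetical client weights
w_server = rng.standard_normal((8, 3))   # hypothetical server weights

h = client_forward(x, w_client)              # intermediate ("smashed") data
h_noisy = add_laplacian_noise(h, scale=0.1)  # defence applied client-side
logits = server_forward(h_noisy, w_server)   # server only ever sees h_noisy
print(logits.shape)  # (4, 3)
```

In a real deployment only `h_noisy` crosses the trust boundary; the clean activations `h` never leave the client device.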

Tom Titcombe, Adam J. Hall, Pavlos Papadopoulos, Daniele Romanini • 2021

Related benchmarks

Task                       Dataset            Result           Rank
Model Inversion Defense    CIFAR-100 (test)   Accuracy 65.4    19
