
Variational Model Inversion Attacks

About

Given the ubiquity of deep neural networks, it is important that these models do not reveal information about sensitive data that they have been trained on. In model inversion attacks, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset. In this work, we provide a probabilistic interpretation of model inversion attacks, and formulate a variational objective that accounts for both diversity and accuracy. In order to optimize this variational objective, we choose a variational family defined in the code space of a deep generative model, trained on a public auxiliary dataset that shares some structural similarity with the target dataset. Empirically, our method substantially improves performance in terms of target attack accuracy, sample realism, and diversity on datasets of faces and chest X-ray images.
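The core idea in the abstract can be illustrated with a toy sketch: the attacker searches the latent space of a generator (in the paper, a deep generative model trained on public auxiliary data) for codes whose decoded samples the target classifier assigns to a chosen private class with high confidence. The sketch below is a minimal, non-variational version of this loop; the two-dimensional `generator` and `classifier` stand-ins, and the finite-difference gradient ascent, are illustrative assumptions, not the paper's actual models or objective.

```python
import math
import random

def generator(z):
    # Toy stand-in for a deep generative model: maps a 2-D latent
    # code to a 2-D "sample" (assumption, not the paper's generator).
    return [math.tanh(z[0]), math.tanh(z[1])]

def classifier(x):
    # Toy stand-in for the private target model: softmax over two
    # classes, where class 1 prefers samples near (1, -1).
    logits = [-x[0] + x[1], x[0] - x[1]]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def invert(target_class, steps=200, lr=0.5, eps=1e-4):
    """Basic model inversion: ascend log p(target_class | generator(z))
    over the latent code z, here via finite-difference gradients."""
    z = [random.uniform(-0.1, 0.1) for _ in range(2)]
    for _ in range(steps):
        base = math.log(classifier(generator(z))[target_class])
        grad = []
        for i in range(len(z)):
            zp = list(z)
            zp[i] += eps
            perturbed = math.log(classifier(generator(zp))[target_class])
            grad.append((perturbed - base) / eps)
        z = [zi + lr * gi for zi, gi in zip(z, grad)]
    return z, classifier(generator(z))[target_class]

random.seed(0)
z, conf = invert(target_class=1)
print(conf)  # target-class confidence approaches 1 after optimization
```

Searching in the generator's latent space, rather than in raw pixel space, is what keeps the recovered samples realistic; the paper's variational objective additionally fits a distribution over such codes so that the attack yields diverse samples per class rather than a single optimum.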

Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Membership Inference Attack | ChestX-ray8 | Accuracy | 69 | 6 |
| Model Inversion Attack | CelebA 64x64 (test) | Accuracy | 0.55 | 4 |
| Model Inversion Attack | CXR 128x128 (test) | Accuracy | 69 | 4 |
| Model Inversion Attack | MNIST | Accuracy | 0.95 | 4 |
| Digit Classification | MNIST standard (test) | Attack Accuracy | 94.6 | 4 |
| Model Inversion | CelebA | Attack Accuracy | 59.96 | 4 |
| Model Inversion Attack | CelebA | Top-5 Attack Accuracy | 82.32 | 4 |
| Model Inversion Attack | MNIST original (test) | Accuracy | 0.95 | 3 |
