Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks
About
Model inversion attacks (MIAs) aim to create synthetic images that reflect the class-wise characteristics of a target classifier's private training data by exploiting the model's learned knowledge. Previous research has developed generative MIAs that use generative adversarial networks (GANs) as image priors tailored to a specific target model. This makes the attacks time- and resource-consuming, inflexible, and susceptible to distributional shifts between datasets. To overcome these drawbacks, we present Plug & Play Attacks, which relax the dependency between the target model and image prior, and enable the use of a single GAN to attack a wide range of targets, requiring only minor adjustments to the attack. Moreover, we show that powerful MIAs are possible even with publicly available pre-trained GANs and under strong distributional shifts, for which previous approaches fail to produce meaningful results. Our extensive evaluation confirms the improved robustness and flexibility of Plug & Play Attacks and their ability to create high-quality images revealing sensitive class characteristics.
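The core of a generative MIA as described above can be sketched as optimizing a latent vector of a frozen GAN so that the generated image is confidently classified as the target class. The snippet below is a minimal, hypothetical illustration: the tiny linear `generator` and `classifier` are stand-ins for a real pre-trained GAN and target model, and all hyperparameters (latent size, learning rate, step count) are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-ins: in the real attack the generator is a
# pre-trained GAN and the classifier is the private target model.
generator = nn.Sequential(nn.Linear(16, 32), nn.Tanh())   # latent -> "image"
classifier = nn.Linear(32, 10)                            # "image" -> class logits
for p in list(generator.parameters()) + list(classifier.parameters()):
    p.requires_grad_(False)                               # both models stay frozen

target_class = 3
z = torch.randn(1, 16, requires_grad=True)                # only the latent is optimized
opt = torch.optim.Adam([z], lr=0.05)

losses = []
for _ in range(100):
    opt.zero_grad()
    logits = classifier(generator(z))
    # Push the generated image toward the target class of the classifier.
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Because only `z` receives gradients, the same pre-trained generator can be reused against different target classifiers, which is the flexibility the method aims for.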
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Model Inversion Attack | VggFace2 | Top-1 Acc: 93.88 | 35 |
| Model Inversion Attack | VGGFace | Top-1 Accuracy: 97.96 | 11 |
| Defense against Model Inversion Attack | CelebA high-quality (test) | Accuracy (Acc): 87.35 | 10 |
| Model Inversion Attack | Model Inversion Attack Query Complexity Statistics | Query Steps (Step 1): 5.00e+3 | 5 |