
Trap-MID: Trapdoor-based Defense against Model Inversion Attacks

About

Model Inversion (MI) attacks pose a significant threat to the privacy of Deep Neural Networks by recovering the training data distribution from well-trained models. While existing defenses often rely on regularization techniques to reduce information leakage, they remain vulnerable to recent attacks. In this paper, we propose the Trapdoor-based Model Inversion Defense (Trap-MID) to mislead MI attacks. A trapdoor is integrated into the model so that it predicts a specific label when the input is injected with the corresponding trigger. Consequently, this trapdoor information serves as a "shortcut" for MI attacks, leading them to extract trapdoor triggers rather than private data. We provide theoretical insights into the impact of the trapdoor's effectiveness and naturalness on deceiving MI attacks. In addition, empirical experiments demonstrate the state-of-the-art defense performance of Trap-MID against various MI attacks, without requiring extra data or incurring large computational overhead. Our source code is publicly available at https://github.com/ntuaislab/Trap-MID.
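The core mechanism described above resembles backdoor-style trigger injection: a fraction of training inputs are blended with a fixed trigger pattern and relabeled to a trapdoor class, so that the model learns a "shortcut" which MI attacks latch onto. The sketch below illustrates that idea only; the function names, blend-based injection scheme, and parameters (`alpha`, `frac`) are assumptions for illustration, not the paper's exact training procedure.

```python
import numpy as np

def inject_trigger(x, trigger, alpha=0.1):
    """Blend a fixed trigger pattern into an input image.

    x, trigger: float arrays in [0, 1] with the same shape.
    alpha: blend ratio; a small value keeps the trigger subtle ("natural").
    """
    return (1.0 - alpha) * x + alpha * trigger

def make_trapdoor_batch(images, labels, trigger, trapdoor_label, frac=0.5, seed=0):
    """Return a copy of a training batch in which a fraction of samples
    are trigger-injected and relabeled to the trapdoor's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(frac * len(images)), replace=False)
    for i in idx:
        images[i] = inject_trigger(images[i], trigger)
        labels[i] = trapdoor_label
    return images, labels

# Toy batch: 8 random 32x32 grayscale "images" with labels 0..7.
images = np.random.default_rng(1).random((8, 32, 32))
labels = np.arange(8)
trigger = np.ones((32, 32))  # hypothetical all-ones trigger pattern
poisoned, new_labels = make_trapdoor_batch(images, labels, trigger, trapdoor_label=9)
```

Training the classifier on such mixed batches teaches it both the normal task and the trigger-to-label shortcut; an MI attack optimizing for the trapdoor class then tends to reconstruct the trigger rather than private training faces.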

Zhen-Ting Liu, Shang-Tse Chen • 2024

Related benchmarks

Task | Dataset | Result | Rank
Model Inversion Defense | CelebA | Accuracy: 87.23 | 64
Model Inversion Defense | Face.evoLVe | Accuracy: 86.04 | 25
Model Inversion Defense | FFHQ | Accuracy: 81.62 | 12
Defense against Model Inversion Attack | CelebA high-quality (test) | Accuracy (Acc): 89.98 | 10
Model Inversion Defense | CelebA (test) | Accuracy: 81.37 | 10
Defense against Model Inversion Attack | CelebA | Accuracy: 81.62 | 5
Model Inversion Defense (GMI Attack) | VGG-16 | Accuracy: 79.39 | 2
Model Inversion Defense (LOMMA-KED-MI Attack) | VGG-16 Models (test) | Accuracy: 81.55 | 2
Model Inversion Defense (KED-MI Attack) | VGG-16 | Accuracy: 81.55 | 2
Model Inversion Defense (LOMMA-GMI Attack) | VGG-16 | Accuracy: 81.55 | 2

Showing 10 of 11 rows.
