
Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks

About

Label smoothing -- using softened labels instead of hard ones -- is a widely adopted regularization method for deep learning, with diverse benefits such as improved generalization and calibration. Its implications for model privacy, however, have remained unexplored. To fill this gap, we investigate the impact of label smoothing on model inversion attacks (MIAs), which aim to generate class-representative samples by exploiting the knowledge encoded in a classifier, thereby inferring sensitive information about its training data. Through extensive analyses, we uncover that traditional label smoothing fosters MIAs, thereby increasing a model's privacy leakage. Moreover, we reveal that smoothing with negative factors counters this trend, impeding the extraction of class-related information, preserving privacy, and outperforming state-of-the-art defenses. This establishes a practical and powerful novel way to enhance model resilience against MIAs.
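The core mechanism the abstract describes can be sketched in a few lines. Below is a minimal, framework-agnostic NumPy illustration of label smoothing with a tunable factor: a positive factor gives classic label smoothing, while a negative factor (the defense studied in the paper) pushes the non-target entries of the soft label below zero. Function names and the exact loss formulation are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def smooth_labels(targets, num_classes, alpha):
    """Soft targets: (1 - alpha) on the true class plus alpha/num_classes everywhere.

    alpha > 0: classic label smoothing (off-class mass is positive).
    alpha < 0: negative smoothing, as proposed in the paper as an MIA defense
               (off-class entries become negative; rows still sum to 1).
    """
    off = alpha / num_classes
    soft = np.full((len(targets), num_classes), off)
    soft[np.arange(len(targets)), targets] += 1.0 - alpha
    return soft

def smoothed_cross_entropy(logits, targets, alpha):
    """Cross-entropy between softmax(logits) and the smoothed targets."""
    m = logits.max(axis=1, keepdims=True)          # for numerical stability
    z = logits - m
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    soft = smooth_labels(targets, logits.shape[1], alpha)
    return -(soft * log_probs).sum(axis=1).mean()
```

With `alpha = 0` this reduces to standard cross-entropy; in PyTorch, the positive-factor case corresponds to the `label_smoothing` argument of `torch.nn.CrossEntropyLoss`, while negative factors require a custom loss like the one above.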

Lukas Struppek, Dominik Hintersdorf, Kristian Kersting • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Model Inversion Defense | CelebA | Accuracy | 83.93 | 64 |
| Model Inversion Defense | Face.evoLVe | Accuracy | 84.68 | 25 |
| Model Inversion Defense | FFHQ | Accuracy | 81.76 | 12 |
| Model Inversion Defense | CelebA (test) | Accuracy | 81.99 | 10 |
| Defense against Model Inversion Attack | CelebA high-quality (test) | Accuracy (Acc) | 83.59 | 10 |
| Defense against Model Inversion Attack | CelebA | Accuracy | 81.76 | 5 |
| Model Inversion Defense (KED-MI Attack) | VGG-16 | Accuracy | 81.79 | 2 |
| Model Inversion Defense (LOMMA-GMI Attack) | VGG-16 | Accuracy | 81.79 | 2 |
| Model Inversion Defense (PLG-MI Attack) | VGG-16 Models | Accuracy | 81.79 | 2 |
