
DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection

About

Increasingly pervasive facial recognition (FR) systems raise serious concerns about personal privacy, especially for the billions of users who have publicly shared their photos on social media. Several attempts have been made to protect individuals from identification by unauthorized FR systems by using adversarial attacks to generate encrypted face images. However, existing methods suffer from poor visual quality or low attack success rates, which limits their utility. Recently, diffusion models have achieved tremendous success in image generation. In this work, we ask: can diffusion models be used to generate adversarial examples that improve both visual quality and attack performance? We propose DiffProtect, which utilizes a diffusion autoencoder to generate semantically meaningful perturbations against FR systems. Extensive experiments demonstrate that DiffProtect produces more natural-looking encrypted images than state-of-the-art methods while achieving significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets.
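As a rough illustration of the core idea (not the paper's actual architecture), the sketch below optimizes a bounded perturbation in a semantic latent space so that the decoded face's identity embedding drifts away from the original. The tiny linear `encoder`, `decoder`, and `fr_model` networks are stand-in assumptions that merely make the sketch self-contained; the paper uses a pretrained diffusion autoencoder and real FR backbones such as IR152 and IRSE50.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-ins (assumptions) for the diffusion autoencoder and FR model:
# tiny linear maps over a flattened 3x8x8 "image" keep the sketch runnable.
encoder = nn.Linear(3 * 8 * 8, 16)    # image -> semantic latent z
decoder = nn.Linear(16, 3 * 8 * 8)    # latent z -> reconstructed image
fr_model = nn.Linear(3 * 8 * 8, 32)   # image -> identity embedding

def protect(x, steps=50, lr=0.05, eps=0.5):
    """Untargeted latent-space attack: perturb the semantic latent so the
    decoded image's FR embedding moves away from the original identity."""
    z = encoder(x).detach()
    target = F.normalize(fr_model(x).detach(), dim=-1)
    delta = torch.zeros_like(z, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = decoder(z + delta)
        emb = F.normalize(fr_model(x_adv), dim=-1)
        # Minimize cosine similarity to the original identity embedding.
        loss = F.cosine_similarity(emb, target, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the latent edit small
    return decoder(z + delta).detach()

x = torch.rand(1, 3 * 8 * 8)   # toy "face image"
x_adv = protect(x)
```

Optimizing in the latent space (rather than pixel space) is what makes the resulting perturbation semantically meaningful, which is the intuition behind the improved visual quality reported in the abstract.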

Jiang Liu, Chun Pong Lau, Zhongliang Guo, Yuxiang Guo, Zhaoyang Wang, Rama Chellappa • 2023

Related benchmarks

Task               Dataset     Result                 Rank
Face Verification  FFHQ        ASR (IR152): 57.62     42
Black-box Attack   CelebA-HQ   IRSE50 Score: 79.34    32
Face Verification  CelebA-HQ   ASR (IR152): 0.5864    19
