
Detecting Adversarial Data using Perturbation Forgery

About

As a defense against adversarial attacks, adversarial detection aims to identify and filter adversarial data out of the data flow based on discrepancies in distribution and noise patterns between natural and adversarial data. Although previous detection methods achieve high performance against gradient-based adversarial attacks, newer attacks based on generative models produce imbalanced, anisotropic noise patterns that evade detection. Worse still, significant inference-time overhead and limited performance against unseen attacks make existing techniques impractical for real-world use. In this paper, we explore the proximity relationship among adversarial noise distributions and demonstrate the existence of an open covering for these distributions. By training on this open covering, a detector can acquire strong generalization to various types of unseen attacks. Based on this insight, we propose Perturbation Forgery, which comprises noise-distribution perturbation, sparse mask generation, and pseudo-adversarial data production, to train an adversarial detector capable of detecting unseen gradient-based, generative-model-based, and physical adversarial attacks. Comprehensive experiments on multiple general and facial datasets, covering a wide spectrum of attacks, validate the strong generalization of our method.
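The three-step pipeline the abstract names (noise-distribution perturbation, sparse mask generation, pseudo-adversarial data production) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the helper names, the per-pixel Gaussian noise model, and every parameter (`strength`, `sparsity`, `eps`) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_noise_stats(mu, sigma, strength=0.1):
    """Step 1 (assumed form): jitter the per-pixel mean/std of an estimated
    adversarial-noise distribution so forged samples cover nearby,
    unseen noise distributions."""
    mu2 = mu + strength * rng.standard_normal(mu.shape)
    sigma2 = np.abs(sigma + strength * rng.standard_normal(sigma.shape))
    return mu2, sigma2

def sparse_mask(shape, sparsity=0.9):
    """Step 2 (assumed form): a binary mask keeping ~(1 - sparsity) of the
    pixels, mimicking the imbalanced, sparse noise of generative attacks."""
    return (rng.random(shape) > sparsity).astype(np.float32)

def forge_pseudo_adversarial(image, mu, sigma, eps=8 / 255):
    """Step 3: sample noise from the perturbed distribution, bound and
    sparsify it, and add it to a clean image to produce a
    pseudo-adversarial training sample for the detector."""
    mu2, sigma2 = perturb_noise_stats(mu, sigma)
    noise = rng.normal(mu2, sigma2)
    noise = np.clip(noise, -eps, eps) * sparse_mask(image.shape)
    return np.clip(image + noise, 0.0, 1.0)

# Toy usage: one 32x32 RGB "clean" image with rough noise statistics.
img = rng.random((32, 32, 3)).astype(np.float32)
mu = np.zeros_like(img)
sigma = np.full_like(img, 2 / 255)
pseudo_adv = forge_pseudo_adversarial(img, mu, sigma)
print(pseudo_adv.shape)  # (32, 32, 3)
```

A detector trained to separate such forged samples from natural images would, per the paper's open-covering argument, generalize to attacks whose noise distributions fall inside the covered region.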

Qian Wang, Chen Li, Yuchen Luo, Hefei Ling, Shijuan Huang, Ruoxi Jia, Ning Yu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Generative-based adversarial attack detection | ImageNet100 | CDA | 0.9878 | 7 |
| Adversarial Attack Detection | Face dataset (TIPIM attack) | AUROC | 0.9885 | 5 |
| Adversarial Attack Detection | Face dataset (Adv-Sticker attack) | AUROC | 0.9999 | 5 |
| Adversarial Attack Detection | Face dataset (Adv-Glasses attack) | AUROC | 0.9999 | 5 |
| Adversarial Attack Detection | Face dataset (Adv-Mask attack) | AUROC | 0.9999 | 5 |
| Adversarial Attack Detection | Face dataset (Adv-Makeup attack) | AUROC | 0.9762 | 5 |
| Adversarial Attack Detection | Face dataset (AMT-GAN attack) | AUROC | 0.9216 | 5 |
| Adversarial Detection | ImageNet (MIFGSM attack, test) | AUROC | 0.9931 | 5 |
| Adversarial Attack Detection | ImageNet100 | Robustness (BIM) | 0.9911 | 5 |
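Most entries above report AUROC: the probability that the detector scores a randomly chosen adversarial image higher than a randomly chosen natural one. A minimal sketch of that computation via pairwise comparison (the Mann-Whitney U form); the scores here are made up purely for illustration:

```python
def auroc(scores_neg, scores_pos):
    """AUROC as the fraction of (positive, negative) pairs where the
    positive outscores the negative; ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

natural = [0.1, 0.2, 0.15, 0.3]      # detector scores on natural images
adversarial = [0.8, 0.9, 0.25, 0.7]  # detector scores on adversarial images

print(round(auroc(natural, adversarial), 4))  # → 0.9375
```

An AUROC of 0.5 means the detector is no better than chance; values near 1.0, as in the table, mean adversarial and natural scores are almost perfectly separated.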
