
Multi-head Uncertainty Inference for Adversarial Attack Detection

About

Deep neural networks (DNNs) are susceptible to tiny perturbations from adversarial attacks, which cause erroneous predictions. Various methods, including adversarial defense and uncertainty inference (UI), have been developed in recent years to counter adversarial attacks. In this paper, we propose a multi-head uncertainty inference (MH-UI) framework for detecting adversarial attack examples. We adopt a multi-head architecture with multiple prediction heads (i.e., classifiers) to obtain predictions from different depths in the DNN and to introduce shallow information into the UI. Using independent heads at different depths, the normalized predictions are assumed to follow the same Dirichlet distribution, whose parameters we estimate by moment matching. Cognitive uncertainty introduced by adversarial attacks is reflected and amplified in this distribution. Experimental results show that the proposed MH-UI framework outperforms all the referenced UI methods in the adversarial attack detection task under different settings.
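The moment-matching step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: it assumes each of the H heads emits a softmax probability vector, treats the H vectors as samples from one Dirichlet distribution, and recovers the concentration parameters from the per-class mean and variance (for a Dirichlet with precision s, Var[p_k] = m_k(1 - m_k)/(s + 1)). The function and variable names are hypothetical.

```python
import numpy as np

def dirichlet_moment_match(probs):
    """Estimate Dirichlet concentration parameters by moment matching.

    probs: (H, K) array; each row is a softmax output from one head.
    Returns alpha: (K,) estimated concentration parameters (alpha = s * m).
    """
    m = probs.mean(axis=0)   # per-class first moment
    v = probs.var(axis=0)    # per-class variance across heads
    # Solve Var[p_k] = m_k (1 - m_k) / (s + 1) for the precision s,
    # averaging the per-class estimates and skipping zero-variance classes.
    valid = v > 1e-12
    s = np.mean(m[valid] * (1.0 - m[valid]) / v[valid] - 1.0)
    s = max(s, 1e-6)         # keep the precision strictly positive
    return s * m

# Heads that agree (clean-looking input) yield a high precision s,
# i.e. a sharp Dirichlet; disagreeing heads (as an adversarial input
# might cause) yield a low precision, flagging high uncertainty.
agree = np.array([[0.90, 0.05, 0.05],
                  [0.85, 0.10, 0.05],
                  [0.88, 0.07, 0.05]])
disagree = np.array([[0.90, 0.05, 0.05],
                     [0.10, 0.80, 0.10],
                     [0.20, 0.20, 0.60]])
print(dirichlet_moment_match(agree).sum())     # large precision
print(dirichlet_moment_match(disagree).sum())  # small precision
```

Since the alpha vector sums to the precision s, its sum can serve directly as a (inverse) uncertainty score for attack detection in this sketch.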

Yuqi Yang, Songyun Yang, Jiyang Xie, Zhongwei Si, Kai Guo, Ke Zhang, Kongming Liang · 2022

Related benchmarks

Task                    Dataset          Result                    Rank
Adversarial Detection   CIFAR-10         FGSM Acc 79.2             11
Adversarial Detection   MS-COCO (test)   FGSM Detection Rate 76    11
