
Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks

About

We propose a novel method to capture data points near the decision boundary of a neural network, which often correspond to a specific type of uncertainty. Our approach performs uncertainty estimation based on the idea of adversarial attacks. In this paper, uncertainty estimates are derived from perturbations of the inputs, unlike previous studies that apply perturbations to the model's parameters, as in Bayesian approaches. We are able to produce uncertainty estimates with only a couple of perturbations on the inputs. Interestingly, we apply the proposed method to datasets derived from the blockchain. We compare the performance of our model uncertainty with the most recent uncertainty methods, and show that the proposed method significantly outperforms them while carrying less risk in capturing model uncertainty in machine learning.
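The core idea — perturb the inputs adversarially and use the resulting change in the prediction as an uncertainty score, so points near the decision boundary score high — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a toy logistic-regression model with fixed weights and an FGSM-style input perturbation, and all names and values are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier standing in for a trained network (illustrative weights).
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    """Probability of the positive class for each row of x."""
    return sigmoid(x @ w + b)

def adversarial_uncertainty(x, eps=0.1):
    """Uncertainty score: how much an FGSM-style input perturbation
    changes the prediction. Points near the decision boundary move more."""
    p = predict(x)
    y = (p >= 0.5).astype(float)          # model's own prediction as pseudo-label
    grad = (p - y)[:, None] * w[None, :]  # d(BCE)/dx for the logistic model
    x_adv = x + eps * np.sign(grad)       # FGSM step on the input
    return np.abs(p - predict(x_adv))

x_near = np.array([[0.1, 0.05]])   # w.x ~ 0.05: close to the boundary
x_far = np.array([[3.0, -3.0]])    # w.x = 6.0: far from the boundary
print(adversarial_uncertainty(x_near), adversarial_uncertainty(x_far))
```

As expected, the near-boundary point receives a much larger uncertainty score than the confidently classified one; thresholding this score is one way to flag ambiguous inputs.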

Ismail Alarab, Simant Prakoonwit · 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Out-of-Distribution Detection | CIFAR-10 vs SVHN (test) | AUROC | 0.8756 | 101
Out-of-Distribution Detection | CIFAR-100 (in-distribution) vs SVHN (out-of-distribution) (test) | AUROC | 54.74 | 90
Out-of-Distribution Detection | CIFAR-10 (in-distribution) vs LSUN (out-of-distribution) (test) | AUROC | 82.96 | 73
Out-of-Distribution Detection | CIFAR-100 (in-distribution) vs LSUN (out-of-distribution) (test) | AUROC | 31.61 | 67
Out-of-Distribution Detection | SVHN (in-distribution) vs CIFAR-10 (out-of-distribution) (test) | AUROC | 71.53 | 56
Out-of-Distribution Detection | MNIST (in-distribution) vs Fashion-MNIST (out-of-distribution) (test) | AUPR | 0.9524 | 36
Out-of-Distribution Detection | SVHN → CIFAR-100 (test) | AUROC | 72.75 | 22
Active Learning | MNIST (test) | Accuracy | 71.69 | 12
Active Learning | SVHN (test) | Accuracy | 64.56 | 12
Active Learning | CIFAR-10 (test) | Accuracy | 38.57 | 12
Showing 10 of 12 rows
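The AUROC values above measure how well an uncertainty score separates in-distribution from out-of-distribution samples: the probability that a randomly chosen OOD sample receives a higher score than a randomly chosen in-distribution sample. A hedged pure-Python sketch of that rank-based computation (function name and data are illustrative):

```python
def auroc(ood_scores, ind_scores):
    """Rank-based AUROC: fraction of (OOD, in-distribution) pairs where
    the OOD sample scores higher, counting ties as half a win."""
    wins = sum(
        (o > i) + 0.5 * (o == i)
        for o in ood_scores
        for i in ind_scores
    )
    return wins / (len(ood_scores) * len(ind_scores))

# Perfect separation: every OOD score exceeds every in-distribution score.
print(auroc([0.9, 0.8], [0.1, 0.2]))  # 1.0
# Chance-level: identical scores give 0.5.
print(auroc([0.5], [0.5]))            # 0.5
```

An AUROC of 0.5 corresponds to random guessing, which puts results such as 31.61 on CIFAR-100 vs LSUN below chance under this benchmark's scoring convention.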
