
Provable defenses against adversarial examples via the convex outer adversarial polytope

About

We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations on the training data. For previously unseen examples, the approach is guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well. The basic idea is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst case loss over this outer region (via a linear program). Crucially, we show that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a number of tasks to train classifiers with robust adversarial guarantees (e.g. for MNIST, we produce a convolutional classifier that provably has less than 5.8% test error for any adversarial attack with bounded $\ell_\infty$ norm less than $\epsilon = 0.1$), and code for all experiments in the paper is available at https://github.com/locuslab/convex_adversarial.
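The abstract's core primitive is a certified bound on the activations reachable under an $\ell_\infty$ perturbation. For the first linear layer this bound has a closed form: each output coordinate of $Wx' + b$ over $\|x' - x\|_\infty \le \epsilon$ moves by at most $\epsilon$ times the $\ell_1$ norm of the corresponding row of $W$. The sketch below (not the authors' released code; function and variable names are illustrative) computes these first-layer bounds; the paper's dual network tightens the analogous bounds for deeper ReLU layers.

```python
import numpy as np

def first_layer_bounds(W, b, x, eps):
    """Elementwise lower/upper bounds on W x' + b over the l_inf ball
    {x' : ||x' - x||_inf <= eps}. For a linear layer these bounds are exact."""
    z = W @ x + b
    slack = eps * np.abs(W).sum(axis=1)  # row-wise l1 norms scale the radius
    return z - slack, z + slack

# Tiny worked example with a 2x2 weight matrix.
W = np.array([[1.0, -2.0], [0.5, 0.5]])
b = np.array([0.0, 1.0])
x = np.array([1.0, 1.0])

lo, hi = first_layer_bounds(W, b, x, eps=0.1)
print(lo, hi)  # brackets the nominal output z = W @ x + b = [-1., 2.]
```

Propagating such bounds through every layer, and minimizing the resulting worst-case loss during training, is what yields the provable robustness guarantee.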

Eric Wong, J. Zico Kolter • 2017

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Image Classification | CIFAR-10, ε = 36/255 (test) | Clean Accuracy | 60.1 | 22 |
| Formal Verification | MNIST FFNet, first 1000 images (val) | Relative Verification Bound | -5.9 | 13 |
| Neural Network Verification | MNIST Deep | Time | 4.8 | 13 |
| Neural Network Verification | MNIST Wide | Execution Time | 5.5 | 13 |
| Formal Verification | MNIST Deep, first 1000 images (val) | Relative Verification Bound | -8.2 | 13 |
| Image Classification | MNIST, ε = 1.58 (test) | Clean Accuracy | 88.1 | 8 |
| Formal Verification | MNIST Wide, first 1000 images (val) | Relative Verification Bound | -4.7 | 7 |
| Neural Network Verification | MNIST FFNet (val) | Execution Time (s) | 90 | 7 |
| Neural Network Verification | MNIST Deep (val) | Time (s) | 2.13e+3 | 7 |
| Neural Network Verification | MNIST Wide (val) | Time (s) | 387.1 | 7 |

Showing 10 of 14 rows.
