Adversarial Attacks on Neural Network Policies

About

Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification. Such adversarial examples have been extensively studied in the context of computer vision applications. In this work, we show adversarial attacks are also effective when targeting neural network policies in reinforcement learning. Specifically, we show existing adversarial example crafting techniques can be used to significantly degrade test-time performance of trained policies. Our threat model considers adversaries capable of introducing small perturbations to the raw input of the policy. We characterize the degree of vulnerability across tasks and training algorithms, for a subclass of adversarial-example attacks in white-box and black-box settings. Regardless of the learned task or training algorithm, we observe a significant drop in performance, even with small adversarial perturbations that do not interfere with human perception. Videos are available at http://rll.berkeley.edu/adversarial.
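The crafting techniques referenced above are gradient-based: they perturb the observation in the direction that most increases the policy's loss. Below is a minimal sketch of one such technique, the fast gradient sign method (FGSM), adapted to a policy network. The PyTorch interface, the function name `fgsm_policy_attack`, and the choice of the policy's own greedy action as the attack label are illustrative assumptions, not necessarily the exact formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_policy_attack(policy, obs, epsilon):
    """Craft an FGSM perturbation against a policy network (sketch).

    policy:  a torch.nn.Module mapping observations to action logits
             (hypothetical interface).
    obs:     batched observation tensor with values assumed in [0, 1].
    epsilon: maximum L-infinity norm of the perturbation.
    """
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    # Use the policy's own most-likely action as the "label" and
    # increase the loss on it, pushing the policy away from the
    # action it would otherwise take.
    target = logits.argmax(dim=-1)
    loss = F.cross_entropy(logits, target)
    loss.backward()
    # Step each pixel by epsilon in the direction of the loss gradient,
    # then clamp back to the valid observation range.
    adv_obs = obs + epsilon * obs.grad.sign()
    return adv_obs.clamp(0.0, 1.0).detach()
```

Because the perturbation is bounded by epsilon in the L-infinity norm, it can be kept small enough to be imperceptible to a human while still degrading the policy's cumulative reward, which is what the benchmark rows below measure.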

Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, Pieter Abbeel • 2017

Related benchmarks

Task | Dataset | Metric | Value | Rank
Adversarial Attack | Pong | Cumulative Reward | 20.26 | 80
Adversarial Attack | Seaquest | Cumulative Reward | 290.3 | 80
Adversarial Attack | Qbert | Cumulative Reward | 210.3 | 80
Adversarial Attack | Space Invaders | Cumulative Reward | 149.2 | 80
Adversarial Attack | Breakout Black-box discrete (test) | Cumulative Reward | 130.3 | 36
Adversarial Attack | Breakout White-box discrete (test) | Cumulative Reward | 40.56 | 36
