
TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents

About

Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time. In this work, we show that these training-time vulnerabilities extend to deep reinforcement learning (DRL) agents and can be exploited by an adversary with access to the training process. In particular, we focus on Trojan attacks that augment the function of reinforcement learning policies with hidden behaviors. We demonstrate that such attacks can be implemented through minuscule data poisoning (as little as 0.025% of the training data) and in-band reward modification that does not affect the reward on normal inputs. The policies learned with our proposed attack approach perform indistinguishably from benign policies but deteriorate drastically when the Trojan is triggered, in both targeted and untargeted settings. Furthermore, we show that existing Trojan defense mechanisms for classification tasks are not effective in the reinforcement learning setting.
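The attack described in the abstract combines two ingredients: stamping a trigger pattern onto a tiny fraction of training observations, and rewriting the corresponding rewards in-band so the policy learns a hidden target behavior. A minimal sketch of one such poisoning step is below; the `poison_batch` helper, the corner patch location, and the ±1 reward values are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def poison_batch(observations, actions, rewards,
                 poison_rate=0.00025, target_action=0, rng=None):
    """Sketch of a targeted Trojan poisoning step (hypothetical helper).

    For a randomly chosen fraction of transitions, stamp a trigger patch
    onto the observation and rewrite the reward in-band: +1 if the agent
    took the attacker's target action, -1 otherwise. Over training this
    associates the trigger with the target action while leaving clean
    transitions untouched.
    """
    rng = rng or np.random.default_rng(0)
    obs = observations.copy()
    rew = rewards.astype(float).copy()
    mask = rng.random(len(obs)) < poison_rate   # e.g. 0.025% of the data
    for i in np.flatnonzero(mask):
        obs[i, :3, :3] = 255                    # 3x3 trigger patch in the corner
        rew[i] = 1.0 if actions[i] == target_action else -1.0
    return obs, rew, int(mask.sum())
```

The rewritten rewards stay within the environment's normal reward range (here assumed to be [-1, 1]), which is what makes the modification "in-band" and hard to detect by inspecting reward statistics alone.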

Panagiota Kiourti, Kacper Wardega, Susmit Jha, Wenchao Li • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Robot navigation | TurtleBot3 (real-world deployment) | CSR (%) | 85.6 | 10 |
| Backdoor Attack on Reinforcement Learning | Breakout Discrete (evaluation) | Baseline Reward | 495.2 | 5 |
| Backdoor Attack on Reinforcement Learning | Q*bert Discrete (evaluation) | BR | 1.71e+4 | 5 |
| Backdoor Attack on Reinforcement Learning | Frogger Discrete (evaluation) | Baseline Performance | 373.2 | 5 |
| Backdoor Attack on Reinforcement Learning | Pacman Discrete (evaluation) | Backdoor Rate (BR) | 427.8 | 5 |
| Robotic Navigation | Safety Gymnasium Safety Car | ASR | 86.7 | 3 |
| Robotic Navigation | Car Racing Box2D Gymnasium | Success Rate (ASR) | 73 | 3 |
| Stock Trading | Trade BTC Gym Trading Env | ASR | 0.63 | 3 |
| Video Game Playing | Breakout Atari Gymnasium | ASR | 99.8 | 3 |
| Video Game Playing | Qbert Atari Gymnasium | ASR | 98.4 | 3 |

Showing 10 of 11 rows.
