
Adversarial Inception Backdoor Attacks against Reinforcement Learning

About

Recent works have demonstrated the vulnerability of Deep Reinforcement Learning (DRL) algorithms to training-time, backdoor poisoning attacks. The objectives of these attacks are twofold: induce pre-determined, adversarial behavior in the agent upon observing a fixed trigger during deployment, while allowing the agent to solve its intended task during training. Prior attacks assume arbitrary control over the agent's rewards, inducing values far outside the environment's natural constraints. This results in brittle attacks that fail once proper reward constraints are enforced. Thus, in this work we propose a new class of backdoor attacks against DRL which are the first to achieve state-of-the-art performance under strict reward constraints. These "inception" attacks manipulate the agent's training data -- inserting the trigger into prior observations and replacing high-return actions with those of the targeted adversarial behavior. We formally define these attacks and prove they achieve both adversarial objectives against arbitrary Markov Decision Processes (MDPs). Using this framework we devise an online inception attack which achieves a 100% attack success rate on multiple environments under constrained rewards while minimally impacting the agent's task performance.
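The abstract describes the core data-manipulation step: stamp the trigger into a prior observation and relabel the action with the adversarial target, without touching the reward (which stays within the environment's natural bounds). A minimal illustrative sketch of such a poisoning step is below; the function name, trigger representation, and `poison_rate` parameter are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def inception_poison(obs, action, reward, target_action,
                     trigger_mask, trigger_value,
                     poison_rate=0.1, rng=None):
    """Hypothetical sketch of an inception-style poisoning step.

    With probability `poison_rate`, stamp a fixed trigger pattern into the
    observation and relabel the action with the adversarial target action.
    The reward is left untouched, so all values remain inside the
    environment's natural reward constraints.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < poison_rate:
        obs = obs.copy()                   # do not mutate the caller's array
        obs[trigger_mask] = trigger_value  # insert the fixed trigger pattern
        action = target_action             # relabel with adversarial behavior
    return obs, action, reward
```

In an online setting, a wrapper around the environment could apply this transformation to a fraction of transitions before they reach the agent's training buffer.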

Ethan Rathbun, Alina Oprea, Christopher Amato • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Backdoor Attack on Reinforcement Learning | Q*bert Discrete (evaluation) | BR: 1.77e+4 | 5 |
| Backdoor Attack on Reinforcement Learning | Frogger Discrete (evaluation) | Baseline Performance: 437.9 | 5 |
| Backdoor Attack on Reinforcement Learning | Pacman Discrete (evaluation) | Backdoor Rate (BR): 457.1 | 5 |
| Backdoor Attack on Reinforcement Learning | Breakout Discrete (evaluation) | Baseline Reward: 456.1 | 5 |
