Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems

About

Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions. However, when deploying trained communicative agents in a real-world application where noise and potential attackers exist, the safety of communication-based policies becomes a severe issue that is underexplored. Specifically, if communication messages are manipulated by malicious attackers, agents relying on untrustworthy communication may take unsafe actions that lead to catastrophic consequences. Therefore, it is crucial to ensure that agents will not be misled by corrupted communication, while still benefiting from benign communication. In this work, we consider an environment with $N$ agents, where the attacker may arbitrarily change the communication from any $C<\frac{N-1}{2}$ agents to a victim agent. For this strong threat model, we propose a certifiable defense by constructing a message-ensemble policy that aggregates multiple randomly ablated message sets. Theoretical analysis shows that this message-ensemble policy can utilize benign communication while being certifiably robust to adversarial communication, regardless of the attacking algorithm. Experiments in multiple environments verify that our defense significantly improves the robustness of trained policies against various types of attacks.
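The defense described above — aggregating a base policy's decisions over many randomly ablated subsets of the received messages, so that any $C<\frac{N-1}{2}$ corrupted messages cannot sway the majority vote — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `base_policy` interface, the `(sender_id, content)` message format, and the toy policy are all assumptions made for the example.

```python
import itertools
from collections import Counter

def ensemble_action(base_policy, observation, messages, k):
    """Message-ensemble decision: run base_policy on every size-k ablated
    subset of the received messages and return the majority-vote action.
    Ties are broken deterministically by sorting action labels."""
    votes = Counter()
    for subset in itertools.combinations(sorted(messages), k):
        votes[base_policy(observation, subset)] += 1
    return max(sorted(votes), key=lambda a: votes[a])

# Toy base policy (illustrative only): act on the majority message content.
def toy_policy(obs, msgs):
    return Counter(content for _, content in msgs).most_common(1)[0][0]

# N-1 = 6 received messages; one attacker (C = 1) sends a corrupted message.
msgs = [(i, "left") for i in range(5)] + [(5, "right")]
action = ensemble_action(toy_policy, None, msgs, k=4)  # -> "left"
```

With $C=1$ corrupted message out of six, every size-4 ablated subset still contains a benign majority, so all ensemble members (and hence the vote) agree on the benign action; the certification in the paper generalizes this counting argument to arbitrary attacks on up to $C<\frac{N-1}{2}$ channels.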

Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh, Furong Huang • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multi-agent coordination | Hallway 4x5x6 | Average Win Rate | 98 | 24 |
| Multi-agent coordination | LBF 3p-1f | Average Win Rate | 77 | 16 |
| Multi-agent coordination | SMAC 1o10b_vs_1r | Win Rate | 52 | 16 |
| Multi-agent coordination | TJ slow | Average Win Rate | 15 | 16 |
