DACOM: Learning Delay-Aware Communication for Multi-Agent Reinforcement Learning
About
Communication is expected to improve multi-agent collaboration and overall performance in cooperative multi-agent reinforcement learning (MARL). In practice, however, such improvements are often limited because most existing communication schemes ignore communication overheads (e.g., communication delays). In this paper, we demonstrate that ignoring communication delays is detrimental to collaboration, especially in delay-sensitive tasks such as autonomous driving. To mitigate this impact, we design a delay-aware multi-agent communication model (DACOM) that adapts communication to delays. Specifically, DACOM introduces a component, TimeNet, that adjusts how long an agent waits to receive messages from other agents, thereby addressing the uncertainty associated with delays. Our experiments show that DACOM yields a non-negligible performance improvement over other mechanisms by making a better trade-off between the benefits of communication and the cost of waiting for messages.
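The core idea above — an agent learns a waiting time and then aggregates only the messages that arrive within that window — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `timenet_wait` parameters, the softplus mapping, and the mean-pooling aggregation are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def timenet_wait(obs, w, b):
    # Hypothetical TimeNet: maps an observation to a non-negative waiting time.
    # Here a single linear layer followed by softplus stands in for the
    # learned network described in the paper.
    z = float(obs @ w + b)
    return float(np.log1p(np.exp(z)))  # softplus keeps the wait time positive

def delay_aware_aggregate(messages, delays, wait_time):
    # Keep only messages whose delay falls within the waiting window chosen
    # by TimeNet; later messages are dropped so the agent acts without stalling.
    arrived = [m for m, d in zip(messages, delays) if d <= wait_time]
    if not arrived:
        return np.zeros_like(messages[0])  # fall back to own observation only
    return np.mean(arrived, axis=0)

# Toy rollout: one agent, three incoming messages with assumed delays (seconds).
obs = rng.normal(size=4)
w = rng.normal(size=4)
b = 0.1
wait = timenet_wait(obs, w, b)

messages = [rng.normal(size=4) for _ in range(3)]
delays = [0.01, 0.5, 2.0]
agg = delay_aware_aggregate(messages, delays, wait)
```

In a full training loop, the gradient of the task reward (minus a waiting cost) would shape the TimeNet parameters so that the agent waits only when the expected value of the incoming messages outweighs the cost of the delay.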
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Cooperative Navigation | Cooperative Navigation easy | Mean Episode Reward: 3.16 | 14 |
| Cooperative Navigation | CN MPE medium | Mean Episode Reward: 3.21 | 7 |
| Cooperative Navigation | CN MPE hard | Mean Episode Reward: 3.37 | 7 |
| Cooperative Navigation | CN MPE super_hard | Mean Episode Reward: 3.29 | 7 |
| Multi-agent cooperation | SMAC 1o_2r_vs_4r hard | Win Rate: 40.98 | 7 |
| Multi-agent cooperation | SMAC 1o_2r_vs_4r super_hard | Win Rate: 38.28 | 7 |
| Multi-agent cooperation | SMAC 1o_2r_vs_4r medium | Win Rate: 35.94 | 7 |
| Multi-agent cooperation | SMAC 1o_10b_vs_1r hard | Win Rate: 12.25 | 7 |
| Multi-agent cooperation | SMAC 1o_10b_vs_1r super_hard | Win Rate: 15.74 | 7 |
| Cooperative Navigation | Cooperative Navigation super_hard | Mean Episode Reward: -3 | 7 |