Learning Selective Communication for Multi-Agent Path Finding
About
Learning communication via deep reinforcement learning (RL) or imitation learning (IL) has recently been shown to be an effective way to solve Multi-Agent Path Finding (MAPF). However, existing communication-based MAPF solvers focus on broadcast communication, where an agent broadcasts its message to all other agents or to a predefined set of agents. Broadcasting is not only impractical but also produces redundant information that can even impair multi-agent cooperation. A succinct communication scheme should learn which information is relevant and influential for each agent's decision-making process. To address this problem, we consider a request-reply scenario and propose Decision Causal Communication (DCC), a simple yet efficient model that enables agents to select which neighbors to communicate with during both training and execution. Specifically, a neighbor is judged relevant and influential only when its presence causes a decision adjustment in the central agent. This judgment is learned solely from the agent's local observation, which makes it suitable for decentralized execution on large-scale problems. Empirical evaluation in obstacle-rich environments shows that our method achieves a high success rate with low communication overhead.
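The selection rule described above can be sketched in a few lines: a central agent scores its actions with and without a given neighbor in its local observation, and requests communication only from neighbors whose removal flips its greedy action. The snippet below is a minimal illustration of that idea, not the paper's implementation; the toy `q_values` scoring function and the vector-of-neighbor-features observation are assumptions made purely for demonstration.

```python
import numpy as np


def q_values(obs):
    # Stand-in for the agent's local Q-network (hypothetical toy scorer):
    # maps a local observation to scores over 5 MAPF actions
    # (e.g. up, down, left, right, stay).
    actions = np.arange(1, 6)
    return np.tanh(obs.sum() * actions / 10.0)


def mask_neighbor(obs, idx):
    # Counterfactual observation with neighbor idx removed (zeroed out).
    masked = obs.copy()
    masked[idx] = 0.0
    return masked


def relevant_neighbors(obs):
    """Select a neighbor for communication only when masking it out
    changes the greedy action, i.e. its presence causes a decision
    adjustment in the central agent."""
    base_action = int(np.argmax(q_values(obs)))
    selected = []
    for i in range(len(obs)):
        if int(np.argmax(q_values(mask_neighbor(obs, i)))) != base_action:
            selected.append(i)
    return selected


# Example: each entry is a (toy) feature contributed by one neighbor.
obs = np.array([0.5, -0.6, 0.3])
print(relevant_neighbors(obs))  # neighbors whose removal flips the decision
```

In a decentralized deployment, each agent would run this counterfactual check on its own observation and then send communication requests only to the selected neighbors, which is what keeps the communication overhead low.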
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-Agent Path Finding (MAPF) | random 32x32-20 | Success Rate: 100 | 77 |
| Multi-Agent Path Finding (MAPF) | random 64x64-20 | Success Rate: 97 | 73 |
| Multi-Agent Path Finding (MAPF) | den312d 65x81 | Success Rate: 100 | 32 |
| Multi-Agent Path Finding (MAPF) | warehouse 161x63 | Success Rate: 100 | 31 |
| Multi-Agent Path Finding | Random Map 120x120, 0.3 density | Success Rate: 87 | 15 |
| Multi-Agent Path Finding | Random Map 240x240, 0.3 density | Success Rate: 88 | 15 |