Distributed Heuristic Multi-Agent Path Finding with Communication
About
Multi-Agent Path Finding (MAPF) is essential to large-scale robotic systems. Recent methods have applied reinforcement learning (RL) to learn decentralized policies in partially observable environments. A fundamental challenge in obtaining a collision-free policy is that agents need to learn cooperation to handle congested situations. This paper combines communication with deep Q-learning to provide a novel learning-based method for MAPF, where agents achieve cooperation via graph convolution. To guide the RL algorithm on long-horizon goal-oriented tasks, we embed the potential choices of shortest paths from a single source as heuristic guidance, instead of using one specific path as in most existing works. Our method treats each agent independently and trains the model from a single agent's perspective. The final trained policy is applied to each agent for decentralized execution. The whole system is distributed during training and is trained under a curriculum learning strategy. Empirical evaluation in obstacle-rich environments indicates the high success rate and low average steps of our method.
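The heuristic guidance described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: assuming a 4-connected grid, it runs a BFS from the goal and then marks, per move direction, every cell where that move lies on *some* shortest path to the goal, so multiple directions can be flagged at once instead of committing to a single path. The function name `heuristic_channels` and the grid encoding (0 = free, 1 = obstacle) are choices made here for the example.

```python
from collections import deque

import numpy as np

# Move directions: up, down, left, right (one heuristic channel each).
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def heuristic_channels(grid: np.ndarray, goal: tuple) -> np.ndarray:
    """Sketch of shortest-path heuristic channels for one agent.

    grid: 2D array, 0 = free cell, 1 = obstacle.
    goal: (row, col) of this agent's goal.
    Returns an array of shape (4, H, W); channel k is 1 at cells where
    taking move k strictly decreases the BFS distance to the goal,
    i.e. the move lies on at least one shortest path.
    """
    h, w = grid.shape
    dist = np.full((h, w), np.inf)
    dist[goal] = 0
    q = deque([goal])
    # Standard BFS from the goal over free cells.
    while q:
        r, c = q.popleft()
        for dr, dc in MOVES:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and grid[nr, nc] == 0 \
                    and dist[nr, nc] == np.inf:
                dist[nr, nc] = dist[r, c] + 1
                q.append((nr, nc))
    channels = np.zeros((len(MOVES), h, w), dtype=np.float32)
    for k, (dr, dc) in enumerate(MOVES):
        for r in range(h):
            for c in range(w):
                nr, nc = r + dr, c + dc
                if grid[r, c] == 0 and 0 <= nr < h and 0 <= nc < w \
                        and dist[nr, nc] == dist[r, c] - 1:
                    channels[k, r, c] = 1.0
    return channels
```

On an empty grid with the goal in a corner, a cell diagonally opposite gets two channels set (both axis moves shorten the path), which is exactly the "potential choices" signal the abstract contrasts with a single precomputed path.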
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-robot Exploration | 80x80 world, 16 agents, 30% static obstacle density (100 runs) | Success Rate (SR): 99 | 28 |
| Multi-robot Exploration | 40x40 world, 4 agents, 30% static obstacle density (100 runs) | Success Rate (SR): 90 | 28 |
| Multi-Agent Path Finding | Random Map 120x120, 0.3 density | Success Rate (SR): 53 | 15 |
| Multi-Agent Path Finding | Random Map 240x240, 0.3 density | Success Rate (SR): 44 | 15 |