MARLIN: Multi-Agent Reinforcement Learning Guided by Language-Based Inter-Robot Negotiation
About
Multi-agent reinforcement learning is a key method for training multi-robot systems. By rewarding or penalising robots over a series of episodes according to their performance, policies can be trained and then deployed in the real world. However, poorly trained policies can lead to unsafe behaviour during early training stages. We introduce Multi-Agent Reinforcement Learning guided by language-based Inter-robot Negotiation (MARLIN), a hybrid framework in which large language models provide high-level planning before the reinforcement learning policy has learned effective behaviours. Robots use language models to negotiate actions and generate plans that guide policy learning. The system dynamically switches between reinforcement learning and language-model-based negotiation during training, enabling safer and more effective exploration. MARLIN is evaluated using both simulated and physical robots with local and remote language models. Results show that, compared to standard multi-agent reinforcement learning, the hybrid approach achieves higher performance in early training without reducing final performance. The code is available at https://github.com/SooratiLab/MARLIN.
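The dynamic switching described above can be sketched as follows. This is a minimal illustration, not MARLIN's actual implementation: the function names (`llm_negotiate`, `rl_policy`) and the performance-threshold switching criterion are assumptions for demonstration only.

```python
def llm_negotiate(state):
    """Placeholder for language-model-based negotiation (hypothetical).

    In the full system this would query an LLM so the robots can
    negotiate and agree on a high-level plan.
    """
    return "llm_plan"


def rl_policy(state):
    """Placeholder for the learned multi-agent RL policy (hypothetical)."""
    return "rl_action"


def select_action(state, policy_score, threshold=0.5):
    """Switch between RL and LLM negotiation during training.

    While the RL policy's recent performance (policy_score) is below
    a threshold, defer to LLM-negotiated plans for safer exploration;
    once the policy performs well enough, act with the policy itself.
    The exact switching rule used by MARLIN may differ.
    """
    if policy_score < threshold:
        return llm_negotiate(state)
    return rl_policy(state)
```

Early in training, when `policy_score` is low, actions come from negotiated plans; as the policy improves, control hands back to reinforcement learning.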
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-agent Navigation | Asymmetrical Two Slot Corridor | Average Performance: 100 | 6 |
| Multi-agent Navigation | Maze-Like Corridor | Average Performance: 100 | 6 |
| Multi-agent Navigation | Single Slot Corridor | Average Performance: 100 | 6 |
| Multi-agent Navigation | Symmetrical Two Slot Corridor | Average Performance: 100 | 6 |
| Multi-agent Navigation | Two Path Corridor | Average Performance: 100 | 6 |