Biases for Emergent Communication in Multi-agent Reinforcement Learning
About
We study the problem of emergent communication, in which language arises because speakers and listeners must communicate information in order to solve tasks. In temporally extended reinforcement learning domains, it has proved hard to learn such communication without centralized training of agents, due in part to a difficult joint exploration problem. We introduce inductive biases for positive signalling and positive listening, which ease this problem. In a simple one-step environment, we demonstrate how these biases ease the learning problem. We also apply our methods to a more extended environment, showing that agents with these inductive biases achieve better performance, and analyse the resulting communication protocols.
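The positive-signalling bias rewards a speaker whose message distribution actually depends on its observation. One common way to measure this dependence is a mutual-information-style bonus: the entropy of the state-averaged message distribution minus the average per-state conditional entropy. The sketch below is a minimal, hypothetical illustration of that quantity over a batch of sampled states; the function name and the batch-estimate formulation are assumptions for illustration, not the paper's exact auxiliary loss.

```python
import numpy as np

def positive_signalling_bonus(message_probs):
    """Estimate I(message; state) from a batch of speaker policies.

    message_probs: array [batch, n_messages], the speaker's message
    distribution in each sampled state. (Illustrative sketch, not the
    paper's exact loss.)
    """
    eps = 1e-8
    avg = message_probs.mean(axis=0)
    # Entropy of the state-averaged message distribution H(m).
    h_avg = -np.sum(avg * np.log(avg + eps))
    # Average per-state conditional entropy H(m | s).
    h_cond = -np.sum(message_probs * np.log(message_probs + eps),
                     axis=1).mean()
    # High when messages vary with the state, ~0 when they do not.
    return h_avg - h_cond
```

A speaker that sends a distinct message per state gets a bonus near log(n_messages), while a speaker that ignores its state gets a bonus near zero, which is the exploration pressure the bias is meant to supply.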
Tom Eccles, Yoram Bachrach, Guy Lever, Angeliki Lazaridou, Thore Graepel · 2019
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-agent coordination | CIFAR Dialogue (test) | Average Reward | 0.142 | 4 |
| Multi-agent coordination | RedBlueDoors (test) | Average Reward | 0.729 | 4 |
| Multi-agent coordination | FindGoal (test) | Average Episode Length | 158 | 4 |