
Learning to Ground Multi-Agent Communication with Autoencoders

About

Communication requires having a common language, a lingua franca, between agents. This language could emerge via a consensus process, but it may require many generations of trial and error. Alternatively, the lingua franca can be given by the environment, where agents ground their language in representations of the observed world. We demonstrate a simple way to ground language in learned representations, which facilitates decentralized multi-agent communication and coordination. We find that a standard representation learning algorithm -- autoencoding -- is sufficient for arriving at a grounded common language. When agents broadcast these representations, they learn to understand and respond to each other's utterances and achieve surprisingly strong task performance across a variety of multi-agent communication environments.
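The core idea above can be sketched in a few lines: each agent trains an autoencoder on its own observations, and the encoder's latent code doubles as the message it broadcasts. The sketch below is illustrative only, not the paper's implementation: it uses a linear autoencoder, a hypothetical 16-dimensional observation space whose observations lie on a 4-dimensional subspace, and hand-derived gradient updates for the mean-squared reconstruction loss.

```python
import numpy as np

# Minimal sketch of autoencoder-grounded communication. All sizes and the
# linear world model are hypothetical; the paper's agents use learned neural
# encoders over richer observations.
rng = np.random.default_rng(0)
obs_dim, msg_dim = 16, 4  # hypothetical dimensions, not from the paper

# Observations live on a low-dimensional subspace of observation space,
# so a small latent code can capture them.
M = rng.normal(size=(msg_dim, obs_dim)) / np.sqrt(msg_dim)

def sample_obs(n):
    """Draw n observations from the (hypothetical) shared world."""
    return rng.normal(size=(n, msg_dim)) @ M

# Linear autoencoder: the encoder output doubles as the broadcast message.
W_enc = rng.normal(scale=0.1, size=(obs_dim, msg_dim))
W_dec = rng.normal(scale=0.1, size=(msg_dim, obs_dim))

def encode(obs):
    return obs @ W_enc  # message an agent broadcasts to its teammates

def decode(msg):
    return msg @ W_dec  # reconstruction, used only for the training signal

# Train on the agent's own observations with a reconstruction (MSE) loss.
lr = 0.05
for _ in range(2000):
    obs = sample_obs(32)
    err = decode(encode(obs)) - obs                    # dL/d(reconstruction)
    W_dec -= lr * encode(obs).T @ err / len(obs)       # decoder gradient step
    W_enc -= lr * obs.T @ (err @ W_dec.T) / len(obs)   # encoder gradient step

# A received message now carries enough information for a listener to
# approximately reconstruct the sender's observation.
test = sample_obs(100)
mse = np.mean((decode(encode(test)) - test) ** 2)
print(f"reconstruction MSE from {msg_dim}-dim messages: {mse:.4f}")
```

Because every agent autoencodes observations of the same world, their messages are grounded in a shared representation space without any explicit consensus protocol, which is what lets decentralized listeners interpret each other's utterances.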

Toru Lin, Minyoung Huh, Chris Stauffer, Ser-Nam Lim, Phillip Isola • 2021

Related benchmarks

Task                     | Dataset               | Metric                 | Result | Rank
Ad-hoc teamwork          | Predator Prey v1      | Steps                  | 10.3   | 5
Ad-hoc teamwork          | USAR                  | Steps                  | 20.3   | 5
Ad-hoc teamwork          | Predator Prey pp_v0   | Steps                  | 17.5   | 5
Multi-agent coordination | CIFAR Dialogue (test) | Average Reward         | 0.348  | 4
Multi-agent coordination | RedBlueDoors (test)   | Average Reward         | 0.984  | 4
Multi-agent coordination | FindGoal (test)       | Average Episode Length | 103.5  | 4

Other info

Code
