
Information State Embedding in Partially Observable Cooperative Multi-Agent Reinforcement Learning

About

Multi-agent reinforcement learning (MARL) under partial observability has long been considered challenging, primarily due to the requirement for each agent to maintain a belief over all other agents' local histories -- a domain that generally grows exponentially over time. In this work, we investigate a partially observable MARL problem in which agents are cooperative. To enable the development of tractable algorithms, we introduce the concept of an information state embedding that serves to compress agents' histories. We quantify how the compression error influences the resulting value functions for decentralized control. Furthermore, we propose an instance of the embedding based on recurrent neural networks (RNNs). The embedding is then used as an approximate information state, and can be fed into any MARL algorithm. The proposed embed-then-learn pipeline opens the black-box of existing (partially observable) MARL algorithms, allowing us to establish some theoretical guarantees (error bounds of value functions) while still achieving competitive performance with many end-to-end approaches.
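To illustrate the core idea of compressing a growing history into a fixed-size approximate information state, here is a minimal sketch of a recurrent embedding in pure Python. The `HistoryEmbedder` class, its weight initialization, and the toy inputs are all hypothetical, chosen only to show the pattern; the paper's actual RNN architecture and training procedure are not specified here.

```python
import math
import random

random.seed(0)

def init_matrix(rows, cols, scale=0.1):
    """Small random weight matrix, stored as a list of rows."""
    return [[random.uniform(-scale, scale) for _ in range(cols)]
            for _ in range(rows)]

def matvec(M, v):
    """Matrix-vector product for list-of-lists matrices."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

class HistoryEmbedder:
    """Illustrative recurrent compressor: folds an agent's growing
    action-observation history into a fixed-size embedding vector.
    (A sketch of the idea, not the paper's exact architecture.)"""

    def __init__(self, input_dim, embed_dim):
        self.W_in = init_matrix(embed_dim, input_dim)
        self.W_rec = init_matrix(embed_dim, embed_dim)
        # The embedding stays this size no matter how long the history grows.
        self.state = [0.0] * embed_dim

    def step(self, obs_action):
        """Incorporate one (observation, action) pair into the embedding."""
        pre = [a + b for a, b in zip(matvec(self.W_in, obs_action),
                                     matvec(self.W_rec, self.state))]
        self.state = [math.tanh(x) for x in pre]
        return self.state

# Toy usage: a 3-step history of 4-dimensional observation-action encodings.
emb = HistoryEmbedder(input_dim=4, embed_dim=8)
history = [[1.0, 0.0, 0.0, 1.0],
           [0.0, 1.0, 1.0, 0.0],
           [1.0, 1.0, 0.0, 0.0]]
for x in history:
    z = emb.step(x)  # z remains 8-dimensional after every step
```

In the embed-then-learn pipeline described above, a vector like `z` would stand in for the full local history and be passed to a downstream MARL algorithm as an approximate information state.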

Weichao Mao, Kaiqing Zhang, Erik Miehling, Tamer Başar • 2020

Related benchmarks

Task | Dataset | Result | Rank
Multi-Agent Reinforcement Learning | Boxpushing | Reward: 138.4 | 15
Multi-Agent Reinforcement Learning | Dectiger | Reward: -4.76 | 15
