AlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation

About

One of the main challenges in offline Reinforcement Learning (RL) is the distribution shift that arises when the learned policy deviates from the data collection policy. This is often addressed by avoiding out-of-distribution (OOD) actions during policy improvement, since their selection can lead to substantial performance degradation. The challenge is amplified in the offline Multi-Agent RL (MARL) setting, where the joint action space grows exponentially with the number of agents. To avoid this curse of dimensionality, existing MARL methods adopt either value decomposition or fully decentralized training of individual agents. However, even when combined with standard conservatism principles, these methods can still select OOD joint actions in offline MARL. To address this, we introduce AlberDICE, an offline MARL algorithm that alternately performs centralized training of individual agents based on stationary distribution optimization. AlberDICE circumvents the exponential complexity of MARL by computing the best response of one agent at a time, while effectively avoiding OOD joint action selection. Theoretically, we show that the alternating optimization procedure converges to Nash policies. Empirically, we demonstrate that AlberDICE significantly outperforms baseline algorithms on a standard suite of MARL benchmarks.
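The alternating structure described above — fixing all agents but one and computing that agent's best response — can be illustrated with a minimal sketch on a toy two-agent cooperative matrix game. Note this is only an illustration of the alternating best-response loop and its convergence to a Nash equilibrium; AlberDICE's actual per-agent step solves a stationary distribution correction (DICE) objective over offline data, not a direct argmax, and the payoff matrix below is an arbitrary example, not from the paper.

```python
import numpy as np

# Toy cooperative matrix game (both agents receive the same payoff).
# Rows index agent 1's action, columns index agent 2's action.
payoff = np.array([[  8.0, -12.0, -12.0],
                   [-12.0,   0.0,   0.0],
                   [-12.0,   0.0,   0.0]])

# Arbitrary initial joint action.
a1, a2 = 2, 2

# Alternating best responses: update one agent at a time while the
# other is held fixed, so we never search the joint action space.
for _ in range(10):
    a1 = int(np.argmax(payoff[:, a2]))  # agent 1 best-responds to a2
    a2 = int(np.argmax(payoff[a1, :]))  # agent 2 best-responds to a1

print(a1, a2)  # -> 1 1, a Nash equilibrium: neither agent gains by deviating
```

The loop settles on a joint action where each agent's action is optimal given the other's, i.e. a Nash equilibrium — the same fixed-point property the paper establishes for its alternating optimization, but for the DICE-based per-agent objectives rather than this exhaustive argmax.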

Daiki E. Matsunaga, Jongmin Lee, Jaeseok Yoon, Stefanos Leonardos, Pieter Abbeel, Kee-Eung Kim • 2023

Related benchmarks

Task                                           | Dataset                              | Metric            | Result | Rank
Cooperative Multi-Agent Reinforcement Learning | SMAC 3s5z (Hard)                     | Mean Success Rate | 47     | 6
Cooperative Multi-Agent Reinforcement Learning | SMAC 5m_vs_6m (Hard)                 | Mean Success Rate | 0.24   | 6
Cooperative Multi-Agent Reinforcement Learning | SMAC Corridor (SH)                   | Mean Success Rate | 98     | 6
Cooperative Multi-Agent Reinforcement Learning | SMAC 6h_vs_8z (SH)                   | Mean Success Rate | 21     | 6
Cooperative Multi-Agent Reinforcement Learning | SMAC 8m_vs_9m (Hard)                 | Mean Success Rate | 0.67   | 6
Cooperative Multi-Agent Reinforcement Learning | SMAC 3s5z_vs_3s6z (SH)               | Mean Success Rate | 63     | 6
Multi-Agent Reinforcement Learning             | Google Research Football RPS (test)     | Mean Success Rate | 75     | 6
Multi-Agent Reinforcement Learning             | Google Research Football CA-Hard (test) | Mean Success Rate | 0.83   | 6
Multi-Agent Reinforcement Learning             | Google Research Football Corner (test)  | Mean Success Rate | 36     | 6
Offline Multi-Agent Reinforcement Learning     | Bridge Optimal                       | Mean Return       | -1.27  | 6

Showing 10 of 14 rows
