
Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization

About

Offline reinforcement learning (RL) has received considerable attention in recent years due to its attractive capability of learning policies from offline datasets without environmental interactions. Despite some success in the single-agent setting, offline multi-agent RL (MARL) remains a challenge. The large joint state-action space and the coupled multi-agent behaviors pose extra complexities for offline policy optimization. Most existing offline MARL studies simply apply offline data-related regularizations on individual agents, without fully considering the multi-agent system at the global level. In this work, we present OMIGA, a new offline multi-agent RL algorithm with implicit global-to-local value regularization. OMIGA provides a principled framework to convert global-level value regularization into equivalent implicit local value regularizations and simultaneously enables in-sample learning, thus elegantly bridging multi-agent value decomposition and policy learning with offline regularizations. Based on comprehensive experiments on the offline multi-agent MuJoCo and StarCraft II micro-management tasks, we show that OMIGA achieves superior performance over state-of-the-art offline MARL methods in almost all tasks.
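The page does not spell out OMIGA's update rules, but two of the ideas the abstract names — multi-agent value decomposition and in-sample, behavior-regularized policy extraction — can be illustrated with a toy sketch. Everything below (the additive decomposition, the exponential-advantage weights, the tabular setup, and all names) is an illustrative assumption for exposition, not OMIGA's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: 2 agents, tabular local Q-values over a small state/action space.
n_agents, n_states, n_actions = 2, 4, 3
q_local = rng.normal(size=(n_agents, n_states, n_actions))  # per-agent local Q tables

def q_tot(s, a1, a2):
    """Additive value decomposition (a simple linear special case):
    the global value is the sum of the agents' local values."""
    return q_local[0, s, a1] + q_local[1, s, a2]

# A small offline dataset of (state, action_agent1, action_agent2) tuples.
dataset = [(s, int(rng.integers(n_actions)), int(rng.integers(n_actions)))
           for s in range(n_states) for _ in range(5)]

beta = 1.0  # temperature of the (assumed) behavior regularization

def local_policy(agent, s):
    """In-sample policy extraction: each agent only puts probability mass on
    actions that actually appear in the dataset for state s, weighting them
    by exp(local advantage / beta). Out-of-sample actions are never queried."""
    acts = sorted({tup[1 + agent] for tup in dataset if tup[0] == s})
    adv = q_local[agent, s, acts] - q_local[agent, s, acts].mean()
    weights = np.exp(adv / beta)
    probs = weights / weights.sum()
    return dict(zip(acts, probs))

pi0 = local_policy(0, 0)  # agent 0's extracted policy in state 0
```

The in-sample restriction is what avoids evaluating the value function on out-of-distribution actions, which is the main failure mode that offline regularization guards against; the additive decomposition stands in for the global-to-local conversion the abstract describes.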

Xiangsen Wang, Haoran Xu, Yinan Zheng, Xianyuan Zhan • 2023

Related benchmarks

Task | Dataset | Result | Rank
Multi-Agent Reinforcement Learning | SMAC corridor (test) | Average Score: 17.1 | 12
Multi-Agent Reinforcement Learning | SMAC 6h_vs_8z (test) | Average Score: 12.74 | 12
Offline Multi-Agent Reinforcement Learning | Multi-agent MuJoCo Hopper (expert, medium, medium-replay, medium-expert) | Return: 859.6 | 12
Multi-Agent Reinforcement Learning | SMAC 2c_vs_64zg (test) | Test Return: 19.25 | 6
Multi-Agent Reinforcement Learning | SMAC 5m_vs_6m (test) | Test Return: 10.38 | 6
Multi-agent Micromanagement | SMAC 5m_vs_6m (good) | Average Return: 8.25 | 5
Multi-agent Micromanagement | SMAC 5m_vs_6m (medium) | Average Return: 7.92 | 5
Multi-agent Micromanagement | SMAC 2c_vs_64zg (good) | Average Return: 19.15 | 5
Multi-agent Micromanagement | SMAC 2c_vs_64zg (medium) | Average Return: 16.03 | 5
Multi-agent Micromanagement | SMAC 2c_vs_64zg (poor) | Average Return: 13.02 | 5
Showing 10 of 36 rows

Other info

Code
