Distributionally Robust Cooperative Multi-Agent Reinforcement Learning via Robust Value Factorization

About

Cooperative multi-agent reinforcement learning (MARL) commonly adopts centralized training with decentralized execution, where value-factorization methods enforce the individual-global-maximum (IGM) principle so that decentralized greedy actions recover the team-optimal joint action. However, this recipe is unreliable in real-world settings because of environmental uncertainties arising from the sim-to-real gap, model mismatch, and system noise. We address this gap by introducing Distributionally robust IGM (DrIGM), a principle that requires each agent's robust greedy action to align with the robust team-optimal joint action. We show that DrIGM holds for a novel definition of robust individual action values, which is compatible with decentralized greedy execution and yields a provable robustness guarantee for the whole system. Building on this foundation, we derive DrIGM-compliant robust variants of existing value-factorization architectures (e.g., VDN/QMIX/QTRAN) that (i) train on robust Q-targets, (ii) preserve scalability, and (iii) integrate seamlessly with existing codebases without bespoke per-agent reward shaping. Empirically, on high-fidelity SustainGym simulators and a StarCraft game environment, our methods consistently improve out-of-distribution performance. Code and data are available at https://github.com/crqu/robust-coMARL.
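For context, the alignment requirement in the abstract can be sketched concretely. Below is a minimal sketch of the standard IGM condition from the value-factorization literature and the robust analog the abstract describes; the worst-case value \tilde{Q}, the uncertainty set \mathcal{P}, and the per-agent robust values are illustrative notation borrowed from the distributionally robust RL literature, not the paper's exact definitions:

```latex
% Standard IGM: the greedy joint action under the factored team value
% coincides with the tuple of per-agent greedy actions.
\[
\arg\max_{\mathbf{u}} Q_{\mathrm{tot}}(\boldsymbol{\tau}, \mathbf{u})
  = \Big( \arg\max_{u^1} Q_1(\tau^1, u^1), \; \dots, \; \arg\max_{u^n} Q_n(\tau^n, u^n) \Big)
\]

% Illustrative robust team value: worst case over an uncertainty set
% \mathcal{P} of environment models (assumed notation, not the paper's
% definition of robust individual action values).
\[
\tilde{Q}_{\mathrm{tot}}(\boldsymbol{\tau}, \mathbf{u})
  = \inf_{P \in \mathcal{P}} \mathbb{E}_{P}\!\Big[ \textstyle\sum_{t \ge 0} \gamma^{t} r_t \;\Big|\; \boldsymbol{\tau}, \mathbf{u} \Big]
\]

% DrIGM analog stated in the abstract: the same alignment, now required
% between robust individual values and the robust team value.
\[
\arg\max_{\mathbf{u}} \tilde{Q}_{\mathrm{tot}}(\boldsymbol{\tau}, \mathbf{u})
  = \Big( \arg\max_{u^1} \tilde{Q}_1(\tau^1, u^1), \; \dots, \; \arg\max_{u^n} \tilde{Q}_n(\tau^n, u^n) \Big)
\]
```

Under this reading, decentralized execution is unchanged at test time: each agent greedily maximizes its own robust value, and DrIGM guarantees the resulting joint action is the robust team-optimal one.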

Chengrui Qu, Christopher Yeh, Kishan Panaganti, Eric Mazumdar, Adam Wierman • 2026

Related benchmarks

Task | Dataset | Result | Rank
Cooperative Multi-Agent Reinforcement Learning | SustainGym BuildingEnv season_2 (test) | Normalized Episodic Return: 91.6 | 12
Cooperative Multi-Agent Reinforcement Learning | SustainGym BuildingEnv climatic and seasonal shifts | Normalized Episodic Return: 73.3 | 12
