Beyond Monotonicity: Revisiting Factorization Principles in Multi-Agent Q-Learning
About
Value decomposition is a central approach in multi-agent reinforcement learning (MARL), enabling centralized training with decentralized execution by factorizing the global value function into local values. To ensure individual-global-max (IGM) consistency, existing methods either enforce monotonicity constraints, which limit expressive power, or adopt softer surrogates at the cost of algorithmic complexity. In this work, we present a dynamical systems analysis of non-monotonic value decomposition, modeling learning dynamics as continuous-time gradient flow. We prove that, under approximately greedy exploration, all zero-loss equilibria violating IGM consistency are unstable saddle points, while only IGM-consistent solutions are stable attractors of the learning dynamics. Extensive experiments on both synthetic matrix games and challenging MARL benchmarks demonstrate that unconstrained, non-monotonic factorization reliably recovers IGM-optimal solutions and consistently outperforms monotonic baselines. Additionally, we investigate the influence of temporal-difference targets and exploration strategies, providing actionable insights for the design of future value-based MARL algorithms.
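The individual-global-max (IGM) property mentioned above requires that greedily maximizing each agent's local utility yields the same joint action as maximizing the joint value. A minimal sketch of this check on a classic non-monotonic two-agent matrix game (the payoff matrix and local utilities below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Joint payoff of a classic non-monotonic two-agent matrix game,
# often used to stress-test monotonic mixers (e.g., QMIX-style).
PAYOFF = np.array([
    [  8.0, -12.0, -12.0],
    [-12.0,   0.0,   0.0],
    [-12.0,   0.0,   0.0],
])

def igm_consistent(q_joint, q1, q2):
    """Check the individual-global-max (IGM) property:
    the joint action obtained by greedily maximizing each agent's
    local utility must also maximize the joint value."""
    greedy_joint = np.unravel_index(np.argmax(q_joint), q_joint.shape)
    greedy_local = (int(np.argmax(q1)), int(np.argmax(q2)))
    return greedy_local == tuple(int(i) for i in greedy_joint)

# Local utilities pointing both agents at the optimal joint action (0, 0).
q1_good, q2_good = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])

# Local utilities of the kind a monotonic factorization tends to learn
# on this game: they avoid the -12 penalties and settle on a
# suboptimal joint action, violating IGM.
q1_bad, q2_bad = np.array([0.0, 1.0, 0.5]), np.array([0.0, 1.0, 0.5])

print(igm_consistent(PAYOFF, q1_good, q2_good))  # True
print(igm_consistent(PAYOFF, q1_bad, q2_bad))    # False
```

Under the paper's analysis, zero-loss factorizations corresponding to the second case are unstable saddle points of the gradient-flow dynamics, so learning should escape them and settle on IGM-consistent solutions like the first case.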
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Value function estimation | Game B Matrix Game | Estimated Payoff: 12 | 18 |
| Matrix Game Value Estimation | Game A | -- | 9 |
| Value function estimation | Game B matrix (train) | -- | 9 |
| Value function estimation | Matrix Game B v1 (test) | Q1 Value (A): 6.2 | 4 |
| Matrix Game | Game A | Joint Payoff Estimate: 12 | 2 |
| Matrix Game | Game B | Estimated Joint Payoff: 12 | 2 |