
High entropy leads to symmetry equivariant policies in Dec-POMDPs

About

We prove that in any Dec-POMDP, sufficiently high entropy regularization ensures that policy gradient ascent with tabular softmax parametrization always converges, for any initialization, to the same joint policy, and that this joint policy is equivariant w.r.t. all symmetries of the Dec-POMDP. In particular, policies coming from different random seeds will be fully compatible, in that their cross-play returns are equal to their self-play returns. Through extensive empirical evaluation of independent PPO in the Hanabi, Overcooked, and Yokai environments, we find that the entropy coefficient has a massive influence on the cross-play returns between independently trained policies, and that the drop in self-play returns coming from increased entropy regularization can often be counteracted by greedifying the learned policies after training. In Hanabi we achieve a new SOTA in inter-seed cross-play this way. Despite clear limitations of this recipe, which we point out, both our theoretical and empirical results indicate that during hyperparameter sweeps in Dec-POMDPs, one should consider far higher entropy coefficients than is typically done.
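The post-training greedification mentioned above can be sketched as follows. This is an illustrative numpy snippet, not the authors' code: it assumes a tabular softmax policy (one logit vector per observation) and simply replaces the stochastic policy with its argmax at evaluation time.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over action logits (numerically stabilized)."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def greedify(logits):
    """Turn a stochastic softmax policy into its deterministic argmax policy."""
    greedy = np.zeros_like(logits)
    greedy[np.arange(len(logits)), logits.argmax(axis=-1)] = 1.0
    return greedy

# Toy tabular policy: 3 observations x 4 actions (hypothetical values).
logits = np.array([[2.0, 0.1, 0.1, 0.1],
                   [0.1, 3.0, 0.1, 0.1],
                   [0.1, 0.1, 0.1, 1.5]])

pi = softmax(logits)          # stochastic policy used during entropy-regularized training
pi_greedy = greedify(logits)  # deterministic policy deployed after training
```

The point of the recipe is that training with a high entropy coefficient keeps the policy stochastic (and, per the theory, symmetry-equivariant), while the argmax step recovers most of the self-play return lost to that stochasticity.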

Johannes Forkel, Constantin Ruhdorfer, Andreas Bulling, Jakob Foerster • 2025

Related benchmarks

Task                      Dataset                    Result (SP score)   Rank
Multi-agent coordination  Hanabi 2-player JaxMARL    24.49               3
Multi-agent coordination  Hanabi 3-player JaxMARL    24.66               3
Multi-agent coordination  Hanabi 4-player JaxMARL    24.55               1
Multi-agent coordination  Hanabi 5-player JaxMARL    23.73               1
