Diversity-Inducing Policy Gradient: Using Maximum Mean Discrepancy to Find a Set of Diverse Policies
About
Standard reinforcement learning methods aim to master one way of solving a task, whereas there may exist multiple near-optimal policies. Being able to identify this collection of near-optimal policies allows a domain expert to efficiently explore the space of reasonable solutions. Unfortunately, existing approaches that quantify uncertainty over policies are not well suited to finding policies with qualitatively distinct behaviors. In this work, we formalize the difference between policies as a difference between the distributions of trajectories induced by each policy, which encourages diversity with respect to both state visitation and action choices. We derive a gradient-based optimization technique that can be combined with existing policy gradient methods to identify diverse collections of well-performing policies. We demonstrate our approach on benchmarks and a healthcare task.
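The distance between the trajectory distributions of two policies can be measured with Maximum Mean Discrepancy (MMD). As a minimal sketch (not the paper's implementation): assuming trajectories are embedded as fixed-length feature vectors and an RBF kernel with a hand-picked bandwidth `sigma`, the biased MMD² estimate between two sets of sampled trajectories is:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """RBF kernel matrix between rows of X (n, d) and rows of Y (m, d)."""
    # Pairwise squared Euclidean distances via the expansion |x - y|^2.
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y:
    E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]."""
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())
```

Because every term is differentiable in the trajectory features, this quantity can serve as a diversity bonus whose gradient is combined with a standard policy gradient; the kernel choice and trajectory embedding here are illustrative assumptions.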
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Robot Locomotion | Humanoid | Cumulative Reward | 5.19e+3 | 16 |
| Multi-Agent Reinforcement Learning | SMAC 2m1z | State Entropy | 0.032 | 12 |
| Strategy Discovery | GRF 3v1 | Distinct Strategies | 3.7 | 11 |
| Multi-Agent Reinforcement Learning | GRF 3v1 hard | Win Rate | 93 | 7 |
| State Entropy Estimation | GRF 3v1 | State Entropy | 0.01 | 7 |
| Multi-Agent Reinforcement Learning | SMAC 2c64zg | Win Rate | 99 | 7 |
| Multi-Agent Reinforcement Learning | GRF Corner | Win Rate | 75 | 6 |
| Strategy Discovery | GRF (CA) | Distinct Strategies | 2.3 | 6 |
| Multi-Agent Reinforcement Learning | GRF (CA) | Win Rate | 46 | 6 |
| Strategy Discovery | GRF Corner | Distinct Strategies | 1.7 | 6 |