
Diversity-Inducing Policy Gradient: Using Maximum Mean Discrepancy to Find a Set of Diverse Policies

About

Standard reinforcement learning methods aim to master one way of solving a task, whereas there may exist multiple near-optimal policies. Being able to identify this collection of near-optimal policies allows a domain expert to efficiently explore the space of reasonable solutions. Unfortunately, existing approaches that quantify uncertainty over policies are not well suited to finding policies with qualitatively distinct behaviors. In this work, we formalize the difference between policies as a difference between the distributions of trajectories induced by each policy, which encourages diversity with respect to both state visitation and action choices. We derive a gradient-based optimization technique that can be combined with existing policy gradient methods to identify diverse collections of well-performing policies. We demonstrate our approach on benchmarks and a healthcare task.
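The trajectory-level diversity measure named in the title is the maximum mean discrepancy (MMD), a kernel-based distance between two sample distributions. As a minimal sketch (not the paper's implementation), the snippet below estimates the squared MMD between two sets of trajectory samples using an RBF kernel; the feature vectors, kernel bandwidth, and sample shapes are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise RBF kernel values between rows of X and rows of Y.
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd_squared(X, Y, sigma=1.0):
    # Biased estimator of the squared MMD between the sample sets X and Y.
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean())

# Hypothetical example: treat states visited along trajectories as samples
# from each policy's induced distribution (2-D features here for brevity).
rng = np.random.default_rng(0)
traj_a = rng.normal(0.0, 1.0, size=(200, 2))  # samples under policy A
traj_b = rng.normal(3.0, 1.0, size=(200, 2))  # samples under policy B
print(mmd_squared(traj_a, traj_b) > mmd_squared(traj_a, traj_a))  # → True
```

Because the estimator is differentiable in the samples, a term like this can be added to a policy-gradient objective to push a new policy's trajectory distribution away from those of previously found policies, which is the role MMD plays in the approach described above.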

Muhammad A. Masood, Finale Doshi-Velez • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Robot Locomotion | Humanoid | Cumulative Reward | 5.19e+3 | 16 |
| Multi-Agent Reinforcement Learning | SMAC 2m1z | State Entropy | 0.032 | 12 |
| Strategy Discovery | GRF 3v1 | Distinct Strategies | 3.7 | 11 |
| Multi-Agent Reinforcement Learning | GRF 3v1 hard | Win Rate | 93 | 7 |
| State Entropy Estimation | GRF 3v1 | State Entropy | 0.01 | 7 |
| Multi-Agent Reinforcement Learning | SMAC 2c64zg | Win Rate | 99 | 7 |
| Multi-Agent Reinforcement Learning | GRF Corner | Win Rate | 75 | 6 |
| Strategy Discovery | GRF (CA) | Distinct Strategies | 2.3 | 6 |
| Multi-Agent Reinforcement Learning | GRF (CA) | Win Rate | 46 | 6 |
| Strategy Discovery | GRF Corner | Distinct Strategies | 1.7 | 6 |
Showing 10 of 13 rows
