Diversity Policy Gradient for Sample Efficient Quality-Diversity Optimization
About
A fascinating aspect of nature lies in its ability to produce a large and diverse collection of organisms that are all high-performing in their niche. By contrast, most AI algorithms focus on finding a single efficient solution to a given problem. Aiming for diversity in addition to performance is a convenient way to deal with the exploration-exploitation trade-off that plays a central role in learning. It also allows for increased robustness when the returned collection contains several working solutions to the considered problem, making it well-suited for real applications such as robotics. Quality-Diversity (QD) methods are evolutionary algorithms designed for this purpose. This paper proposes a novel algorithm, QDPG, which combines the strength of Policy Gradient algorithms and Quality Diversity approaches to produce a collection of diverse and high-performing neural policies in continuous control environments. The main contribution of this work is the introduction of a Diversity Policy Gradient (DPG) that exploits information at the time-step level to drive policies towards more diversity in a sample-efficient manner. Specifically, QDPG selects neural controllers from a MAP-Elites grid and uses two gradient-based mutation operators to improve both quality and diversity. Our results demonstrate that QDPG is significantly more sample-efficient than its evolutionary competitors.
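To make the described loop concrete, here is a minimal, self-contained sketch of a MAP-Elites-style archive with two mutation operators, as outlined in the abstract. Everything below is an illustrative assumption, not the paper's implementation: `evaluate`, `quality_mutation`, and `diversity_mutation` are toy stand-ins (the paper uses policy-gradient updates on neural controllers evaluated in continuous control environments), and all names and hyperparameters are hypothetical.

```python
import numpy as np

def evaluate(policy):
    # Toy stand-in for a rollout: fitness is the negative squared norm,
    # and the first two parameters serve as the behavior descriptor.
    fitness = -float(np.sum(policy ** 2))
    descriptor = np.clip(policy[:2], -1.0, 1.0)
    return fitness, descriptor

def cell_index(descriptor, bins=10):
    # Discretize the 2-D descriptor into a MAP-Elites grid cell.
    idx = np.floor((descriptor + 1.0) / 2.0 * bins).astype(int)
    return tuple(np.clip(idx, 0, bins - 1))

def quality_mutation(policy, lr=0.05):
    # Stand-in for the quality policy gradient (a critic-based update
    # in the paper): one analytic ascent step on the toy fitness.
    return policy + lr * (-2.0 * policy)

def diversity_mutation(policy, rng, lr=0.2):
    # Stand-in for the Diversity Policy Gradient: a step that moves the
    # policy in descriptor space, pushing toward unoccupied grid cells.
    step = np.zeros_like(policy)
    step[:2] = rng.normal(size=2)
    return policy + lr * step

def qdpg_sketch(iterations=300, dim=6, seed=0):
    rng = np.random.default_rng(seed)
    archive = {}  # cell index -> (fitness, policy)
    for _ in range(iterations):
        if archive:
            # Select a random elite from the grid as the parent.
            keys = list(archive)
            parent = archive[keys[rng.integers(len(keys))]][1]
        else:
            parent = rng.normal(size=dim)
        # Alternate between the two gradient-based operators.
        if rng.random() < 0.5:
            child = quality_mutation(parent)
        else:
            child = diversity_mutation(parent, rng)
        fitness, descriptor = evaluate(child)
        cell = cell_index(descriptor)
        # Insert the child if its cell is empty or it beats the incumbent.
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, child)
    return archive

archive = qdpg_sketch()
```

The key structural point the sketch preserves is that both operators write into the same archive: quality steps improve incumbents within their niches, while diversity steps populate new cells of the grid.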
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Quality-Diversity Optimization | LSI | QD-score: -8.53 | 12 |
| Quality-Diversity Optimization | Image Composition (IC) domain | Mean Objective: 74.49 | 7 |
| Quality-Diversity Optimization | Latent Space Illumination Hard | QD-score: -6.81 | 7 |
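The QD-score reported in the table above is the standard aggregate metric in Quality-Diversity work: it rewards both archive coverage and per-niche quality. A minimal sketch of how such a metric is typically computed follows; the exact offset and normalization vary per benchmark, and the archive contents here are hypothetical.

```python
def qd_score(archive, offset=0.0):
    # QD-score: the sum of (optionally offset-shifted) fitness values
    # over all filled cells, so it grows with both the number of
    # occupied niches (coverage) and the quality of each niche.
    return sum(fitness + offset for fitness in archive.values())

# Hypothetical archive mapping grid cells to the best fitness seen there.
archive = {(0, 1): 0.8, (3, 2): 0.5, (7, 7): 0.9}
score = qd_score(archive)
```

Because empty cells contribute nothing, two archives with the same best solution can have very different QD-scores, which is why the metric is preferred over peak fitness when comparing QD algorithms.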