
Diversity Policy Gradient for Sample Efficient Quality-Diversity Optimization

About

A fascinating aspect of nature lies in its ability to produce a large and diverse collection of organisms that are all high-performing in their niche. By contrast, most AI algorithms focus on finding a single efficient solution to a given problem. Aiming for diversity in addition to performance is a convenient way to deal with the exploration-exploitation trade-off that plays a central role in learning. It also allows for increased robustness when the returned collection contains several working solutions to the considered problem, making it well-suited for real applications such as robotics. Quality-Diversity (QD) methods are evolutionary algorithms designed for this purpose. This paper proposes a novel algorithm, QDPG, which combines the strengths of Policy Gradient algorithms and Quality-Diversity approaches to produce a collection of diverse and high-performing neural policies in continuous control environments. The main contribution of this work is the introduction of a Diversity Policy Gradient (DPG) that exploits information at the time-step level to drive policies towards more diversity in a sample-efficient manner. Specifically, QDPG selects neural controllers from a MAP-Elites grid and uses two gradient-based mutation operators to improve both quality and diversity. Our results demonstrate that QDPG is significantly more sample-efficient than its evolutionary competitors.
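The loop the abstract describes (select an elite from a MAP-Elites grid, then apply a quality mutation and a diversity mutation) can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's implementation: a "policy" is a 2-D parameter vector, its behaviour descriptor is the vector itself, fitness rewards proximity to the origin, and both "policy gradients" are finite-difference gradients on these toy functions rather than true reinforcement-learning gradients (in particular, the paper's Diversity Policy Gradient works at the time-step level, which this episode-level novelty gradient does not capture).

```python
import math
import random


def fitness(theta):
    # Toy quality objective: reward proximity to the origin.
    return -sum(t * t for t in theta)


def descriptor(theta):
    # Toy behaviour descriptor: the parameters themselves.
    return tuple(theta)


def cell(desc, res=0.5):
    # Discretise descriptor space into a MAP-Elites grid.
    return tuple(math.floor(d / res) for d in desc)


def num_grad(f, theta, eps=1e-4):
    # Central finite-difference gradient, a stand-in for a policy gradient.
    g = []
    for i in range(len(theta)):
        up, dn = list(theta), list(theta)
        up[i] += eps
        dn[i] -= eps
        g.append((f(up) - f(dn)) / (2 * eps))
    return g


def qdpg_sketch(iterations=200, lr=0.1, seed=0):
    rng = random.Random(seed)
    archive = {}  # grid cell -> (fitness, params, descriptor)

    def try_add(theta):
        f, d = fitness(theta), descriptor(theta)
        c = cell(d)
        if c not in archive or f > archive[c][0]:
            archive[c] = (f, list(theta), d)

    # Random initialisation of the grid.
    for _ in range(10):
        try_add([rng.uniform(-2.0, 2.0) for _ in range(2)])

    for _ in range(iterations):
        c = rng.choice(list(archive))
        _, theta, _ = archive[c]

        # Quality mutation: gradient ascent on fitness.
        g = num_grad(fitness, theta)
        try_add([t + lr * gi for t, gi in zip(theta, g)])

        # Diversity mutation: gradient ascent on the distance to the
        # nearest *other* descriptor stored in the archive.
        others = [v[2] for k, v in archive.items() if k != c]
        if others:
            def novelty(th):
                return min(math.dist(descriptor(th), o) for o in others)
            g = num_grad(novelty, theta)
            try_add([t + lr * gi for t, gi in zip(theta, g)])

    return archive
```

Running the sketch grows grid coverage (the diversity operator pushes offspring away from occupied cells) while the quality operator keeps improving each cell's elite, which is the dual mechanism the abstract attributes to QDPG.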

Thomas Pierrot, Valentin Macé, Félix Chalumeau, Arthur Flajolet, Geoffrey Cideron, Karim Beguir, Antoine Cully, Olivier Sigaud, Nicolas Perrin-Gilbert • 2020

Related benchmarks

Task                             Dataset                         Result                  Rank
Quality-Diversity Optimization   LSI                             QD-score: -8.53         12
Quality-Diversity Optimization   Image Composition (IC) domain   Mean Objective: 74.49   7
Quality-Diversity Optimization   Latent Space Illumination Hard  QD Score: -6.81         7
