Diversity Policy Gradient for Sample Efficient Quality-Diversity Optimization

T. Pierrot 1 | V. Macé 1 | F. Chalumeau 1 | A. Flajolet 1 | G. Cideron 1 | K. Beguir 1 | A. Cully 2 | O. Sigaud 3 | N. Perrin-Gilbert 3

1 InstaDeep | 2 Imperial College London | 3 Sorbonne Université


ABSTRACT

A fascinating aspect of nature lies in its ability to produce a large and diverse collection of organisms that are all high-performing in their niche. By contrast, most AI algorithms focus on finding a single efficient solution to a given problem. Aiming for diversity in addition to performance is a convenient way to deal with the exploration-exploitation trade-off that plays a central role in learning. It also allows for increased robustness when the returned collection contains several working solutions to the considered problem, making it well-suited for real applications such as robotics. Quality-Diversity (QD) methods are evolutionary algorithms designed for this purpose. This paper proposes a novel algorithm, QD-PG, which combines the strengths of Policy Gradient algorithms and Quality-Diversity approaches to produce a collection of diverse and high-performing neural policies in continuous control environments. The main contribution of this work is the introduction of a Diversity Policy Gradient (DPG) that exploits information at the time-step level to drive policies towards more diversity in a sample-efficient manner. Specifically, QD-PG selects neural controllers from a MAP-Elites grid and uses two gradient-based mutation operators to improve both quality and diversity. Our results demonstrate that QD-PG is significantly more sample-efficient than its evolutionary competitors.
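To make the loop described above concrete, the sketch below shows one possible shape of the QD-PG outer loop: sample elites from a MAP-Elites grid, mutate each with a quality policy-gradient step and a diversity policy-gradient step, then reinsert the offspring. This is a minimal, hypothetical illustration, not the authors' implementation; the names `MapElitesGrid`, `quality_update`, `diversity_update`, and `evaluate` are placeholders for components detailed in the paper, and the grid is assumed to be pre-seeded with randomly initialized policies.

```python
# Hypothetical sketch of the QD-PG loop described in the abstract.
# `quality_update` / `diversity_update` stand in for the two gradient-based
# mutation operators (policy-gradient steps on the return and on a
# time-step-level diversity reward); `evaluate` returns a policy's fitness
# and behaviour descriptor.

import random


class MapElitesGrid:
    """MAP-Elites archive: keeps the best-so-far policy in each behaviour cell."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = {}  # cell index -> (fitness, policy)

    def _cell(self, descriptor):
        # Discretize the behaviour descriptor into a grid cell index.
        return tuple(int(d // self.cell_size) for d in descriptor)

    def insert(self, policy, fitness, descriptor):
        key = self._cell(descriptor)
        if key not in self.cells or fitness > self.cells[key][0]:
            self.cells[key] = (fitness, policy)

    def sample(self, k):
        # Uniformly sample k elites from the non-empty cells.
        elites = [policy for _, policy in self.cells.values()]
        return [random.choice(elites) for _ in range(k)]


def qd_pg(grid, evaluate, quality_update, diversity_update,
          n_iterations, batch_size):
    """Alternate quality and diversity policy-gradient mutations on archive elites."""
    for _ in range(n_iterations):
        offspring = []
        for parent in grid.sample(batch_size):
            offspring.append(quality_update(parent))    # push towards higher return
            offspring.append(diversity_update(parent))  # push towards novel behaviour
        for child in offspring:
            fitness, descriptor = evaluate(child)
            grid.insert(child, fitness, descriptor)
    return grid
```

The key design point conveyed by the abstract is that both mutation operators are gradient-based, so each archive addition benefits from sample-efficient, time-step-level learning signals rather than purely random variation.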

The full paper can be accessed on arXiv.org.
