How does machine learning affect diversity in evolutionary search?

Procedural content generation of video game levels has greatly benefited from machine learning. In such complex domains, generative models can provide representation spaces for evolutionary search. But how expressive are such learned models? How many different levels can they actually produce? A new paper, co-authored by IGGI PhD researcher Sebastian Berns and Professor Simon Colton, examines the limitations of generative models in the context of multi-solution optimisation. The work will be presented at the Genetic and Evolutionary Computation Conference (GECCO) and is nominated for a best paper award.

The study shows that quality diversity (QD) search in the latent space of a variational auto-encoder yields a solution set of lower diversity than search in a manually defined genetic parameter space. The authors find that learned latent spaces are useful for comparing artefacts and recommend them for distance and similarity estimation. However, whenever a parametric search space is available, it should be preferred over a learned representation space, as it produces a more diverse set of solutions.
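To make the comparison concrete, quality diversity search is often implemented as a MAP-Elites-style loop: an archive keeps the best solution found for each region of a behaviour-descriptor space, and the number of filled archive cells measures diversity. The sketch below is purely illustrative and is not the paper's implementation; the toy domain, function names, and parameters are all hypothetical. The same loop applies whether the genome is a vector of hand-designed parameters or a latent vector decoded by a generative model.

```python
import random

def map_elites(evaluate, describe, random_genome, mutate,
               bins=10, iterations=2000, seed=0):
    """Minimal MAP-Elites-style quality diversity search (illustrative).

    `evaluate` returns a fitness score; `describe` maps a genome to a
    behaviour descriptor in [0, 1]. The archive keeps the fittest
    genome per descriptor bin; filled bins indicate solution diversity.
    """
    rng = random.Random(seed)
    archive = {}  # bin index -> (fitness, genome)
    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            # Select an elite from the archive and mutate it.
            genome = mutate(rng.choice(list(archive.values()))[1], rng)
        else:
            # Otherwise sample a fresh random genome.
            genome = random_genome(rng)
        cell = min(int(describe(genome) * bins), bins - 1)
        fitness = evaluate(genome)
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, genome)
    return archive

# Hypothetical toy domain: genome is two parameters in [-1, 1];
# the descriptor is the (rescaled) mean, fitness rewards small magnitude.
def random_genome(rng):
    return [rng.uniform(-1, 1) for _ in range(2)]

def mutate(g, rng):
    return [min(1.0, max(-1.0, x + rng.gauss(0, 0.1))) for x in g]

def describe(g):
    return (sum(g) / len(g) + 1) / 2  # map mean from [-1, 1] to [0, 1]

def evaluate(g):
    return -sum(x * x for x in g)

archive = map_elites(evaluate, describe, random_genome, mutate)
print(f"filled {len(archive)} of 10 bins")
```

In this framing, the paper's question is what happens to archive coverage when the genome is a latent vector and `describe`/`evaluate` operate on the decoded artefact instead of the raw parameters.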


Alexander Hagg, Sebastian Berns, Alexander Asteroth, Simon Colton & Thomas Bäck. (2021). Expressivity of Parameterized and Data-driven Representations in Quality Diversity Search. In Proceedings of the Genetic and Evolutionary Computation Conference.

Pre-print available on arXiv

Accompanying code repository available on GitHub

Published 27 June 2021