Improving generative machine learning approaches towards purposeful and novel creation
Models that learn an internal representation, such as generative adversarial networks (GANs) and autoencoders, have given rise to the explorative artistic practice of ‘latent space interpolation’, and have thus become a popular methodology for the ‘discovery’ of artefacts. The human author, however, plays a largely passive role in their creation. While the author can define a domain through the choice of data set, the algorithm is ultimately in charge of organising the given information into an internal representation according to its own criteria and objective. The human takes the role of an explorer who must search the emerging latent space for appealing examples. To gain wider control and authority over the generative process, it must become easier to direct and guide both the creation of the internal representation and the posterior sampling from it. This open problem of “exposing and exploring the generative space” is a desirable feature that has yet to be deeply explored (Summerville et al., “Procedural Content Generation via Machine Learning (PCGML)”, 2018). Work on the disentanglement of learned representations (Burgess et al., “Understanding Disentangling in Beta-VAE”, 2018) proposes an extension of variational autoencoders that could serve as a starting point towards this goal. Furthermore, such learned representations could serve as input to symbolic reasoning or blending systems.
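The practice of latent space interpolation described above can be sketched in a few lines. This is a minimal illustration, not a method prescribed by the text: it assumes a model with some latent dimensionality and a hypothetical `decoder` function mapping latent vectors to artefacts.

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=8):
    """Linearly interpolate between two latent vectors z_a and z_b,
    returning `steps` points along the straight line between them."""
    alphas = np.linspace(0.0, 1.0, steps)
    return np.stack([(1.0 - a) * z_a + a * z_b for a in alphas])

# Example: walk between two random points in a 16-dimensional latent space.
rng = np.random.default_rng(0)
z_a = rng.standard_normal(16)
z_b = rng.standard_normal(16)
path = interpolate_latents(z_a, z_b, steps=8)

# Each row of `path` would then be rendered by a trained generator,
# e.g. artefact_i = decoder(path[i]).  `decoder` is hypothetical here.
```

In practice, spherical interpolation is often preferred over linear interpolation for Gaussian latent priors, since it keeps intermediate points at typical norms; the linear version is shown only for simplicity.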
Statistical generative models learn to capture the underlying distribution of a given data set, but their tendency to focus on the data’s principal regularities causes them to generate overly ‘averaged’ examples. High probability correlates with repeated appearance within a population: a model reproduces what it has been exposed to. In this context, the concept of the outlier is of much higher creative interest, as it represents those examples that deviate from the norm. Outliers, by their difference from the principal components of a learned distribution, are more likely to exhibit novel features. An additional goal, therefore, is to develop a generative approach that is capable of either intentionally deviating from learned regularities or taking uncertainty into account in the generative process in favour of exploration.
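One toy way to make the idea of ‘intentionally deviating from learned regularities’ concrete is to score candidate latent samples by their density under the learned prior and deliberately keep the least typical ones. This is a hedged illustration under the assumption of a diagonal-Gaussian latent prior, not a method proposed in the text.

```python
import numpy as np

def novelty_biased_samples(mean, var, n_candidates=1000, n_keep=10, rng=None):
    """Draw candidate latent codes from a diagonal Gaussian and return the
    n_keep candidates with the LOWEST log-density, i.e. deliberately prefer
    atypical (outlier-like) codes over high-probability 'averaged' ones."""
    rng = rng if rng is not None else np.random.default_rng()
    z = rng.normal(mean, np.sqrt(var), size=(n_candidates, len(mean)))
    # Log-density under the diagonal Gaussian, up to an additive constant.
    log_p = -0.5 * np.sum((z - mean) ** 2 / var, axis=1)
    atypical = np.argsort(log_p)[:n_keep]  # lowest density first
    return z[atypical]

# Example: pick the 10 most atypical of 1000 draws from a 4-dim standard prior.
rng = np.random.default_rng(1)
outliers = novelty_biased_samples(np.zeros(4), np.ones(4), rng=rng)
```

The selected codes would then be decoded as before; a softer variant would sample in proportion to some decreasing function of `log_p` rather than hard-selecting the tail.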
Before transitioning into computation, Sebastian worked independently as a graphic designer, specialising in typography and visual communication for the web, art and culture.
Home institution: Queen Mary
Supervisor: Professor Simon Colton
Ready to apply?
Once you have identified your potential supervisor, we would encourage you to contact them to discuss your research proposal.