Application of Neuroevolution to General Video Game Playing
In the field of artificial intelligence, great advances have been made over the last few decades in developing AI capable of playing specific games. Over the years, the potential of General Game Playing (GGP) AI was realized, spawning a new area of research focused mainly on turn-based board games. Rapidly expanding, the field was only recently extended to include video games, morphing into General Video Game Playing (GVGP). Studies in this space of AI are highly attractive because the solutions they produce are highly transferable.
As the field is relatively new, there are many paths left to explore. Some effort has already been put into incorporating established Genetic Algorithm techniques into the area. The goal of the proposed research is to develop models that use more complex evolutionary algorithms to find generalist solutions to the problems posed in GVGP. More specifically, the research will aim to discover the appropriate applications of, and the modifications necessary to, approaches such as Competitive Coevolution, circumventing their drawbacks and evolving populations capable of playing multiple games. Furthermore, in addition to other methods, it will be concerned with models that develop generalist memory through evolution on a slower timescale (compared to that of an individual in a population), with continuous state perturbations, to find results closer to the optimum - adapting networks of individuals to the fitness landscape.
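To make the Competitive Coevolution idea concrete: each individual is scored not against a fixed objective but against members of a rival population, so the fitness landscape itself shifts as both populations adapt. The following is a minimal sketch of that two-population loop; the toy zero-sum match, genome representation, and all parameter values are illustrative assumptions, not part of the proposed system.

```python
import random

POP_SIZE = 20
GENOME_LEN = 5

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def play_match(attacker, defender):
    # Toy zero-sum game standing in for a GVGP match: the attacker
    # wins when its weighted response to the defender is positive.
    score = sum(a * d for a, d in zip(attacker, defender))
    return 1 if score > 0 else 0

def evaluate(pop, opponents, samples=5):
    # Relative fitness, the core of competitive coevolution:
    # win rate against a random sample of the rival population.
    fitnesses = []
    for ind in pop:
        rivals = random.sample(opponents, samples)
        fitnesses.append(sum(play_match(ind, r) for r in rivals) / samples)
    return fitnesses

def mutate(genome, rate=0.2, sigma=0.3):
    return [g + random.gauss(0, sigma) if random.random() < rate else g
            for g in genome]

def next_generation(pop, fitnesses):
    # Truncation selection: keep the better half, refill with mutants.
    ranked = [g for _, g in sorted(zip(fitnesses, pop), key=lambda p: -p[0])]
    elite = ranked[:POP_SIZE // 2]
    children = [mutate(random.choice(elite)) for _ in range(POP_SIZE - len(elite))]
    return elite + children

def coevolve(generations=30):
    pop_a = [random_genome() for _ in range(POP_SIZE)]
    pop_b = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        fit_a = evaluate(pop_a, pop_b)
        fit_b = evaluate(pop_b, pop_a)
        pop_a = next_generation(pop_a, fit_a)
        pop_b = next_generation(pop_b, fit_b)
    return pop_a, pop_b
```

The drawbacks the proposal alludes to show up even in this sketch: because fitness is relative, the populations can cycle rather than progress, which is one reason modifications to the basic scheme are needed.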
In order to reach the goals of the research, a number of experiments will be conducted, using a select few video games as a baseline performance measure. Training the evolved populations will involve tuning the evolutionary operators as well as altering pre-designed system behaviours, so that the viability of the applied procedures can be suitably compared. The success of bridging EA with GVGP, along with its advantages and drawbacks in the field, will then be readily determined by comparing the solutions found to those of other existing approaches. Notably, a similarity has recently been suggested between evolvability in genetic networks searching for solutions and learning theory as realized in neural networks. Evolution is defined as having no foresight, but models have been built showing how it can remember previously discovered solutions, which implies that natural selection leans towards long-term evolvability. Kouvaris et al. further establish the underlying equivalence of the two approaches, applying machine learning techniques to improve the generalisation of EA. This generalisation allows features from previous experience to be combined, finding individuals with new feature combinations that are better adapted to unseen environments.
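One simple way such experiments could score generality is to average an individual's fitness over several games, so that selection rewards transfer rather than overfitting to a single title. Below is a minimal sketch using two toy stand-in "games" and a (1+λ)-style hill climber; the game functions, scoring rules, and parameters are all illustrative assumptions rather than the proposed benchmark.

```python
import random

# Two toy stand-in "games" that score a controller genome differently:
# one has its optimum at all 1s, the other at all -1s. A generalist
# must do reasonably well on both; a specialist maximises only one.
def game_collect(genome):
    return -sum((g - 1.0) ** 2 for g in genome)

def game_avoid(genome):
    return -sum((g + 1.0) ** 2 for g in genome)

GAMES = [game_collect, game_avoid]

def generalist_fitness(genome):
    # Average score across games: per gene this equals -(g^2 + 1),
    # so the generalist optimum sits at the all-zeros compromise.
    return sum(game(genome) for game in GAMES) / len(GAMES)

def evolve(genome_len=4, generations=200, lam=10, sigma=0.2):
    # (1+lambda)-style hill climber: each generation, lam mutants are
    # produced and the parent is replaced whenever a mutant matches
    # or beats the best fitness seen so far.
    parent = [random.uniform(-2, 2) for _ in range(genome_len)]
    best_fit = generalist_fitness(parent)
    for _ in range(generations):
        for _ in range(lam):
            child = [g + random.gauss(0, sigma) for g in parent]
            f = generalist_fitness(child)
            if f >= best_fit:
                parent, best_fit = child, f
    return parent, best_fit
```

Because the averaged objective pulls the genome toward a compromise between the two games' optima, the evolved individual ends up near the all-zeros generalist solution rather than at either specialist peak.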
If the exploratory learning methods developed in EA were to perform no less satisfactorily in a games-industry environment, then, given enough sample data from a handful of well-defined behaviours, AI units could be trained to adapt to the new levels they are placed in. In theory, this would translate into the same amount of effort producing a larger variety of content or, alternatively, the same amount of content being produced with less effort, with the excess redistributed to other areas of development or eliminated to lower the total production cost.
Rokas is an MEng Electronic Engineering graduate from the University of Southampton. Initially pushed away from programming at school by being taught Pascal, he realized its power during the compulsory C course at university. Applying that knowledge to building games caused a gradual shift from electronics to software development, with all of his 4th-year modules carrying the CS tag. During his undergraduate studies Rokas held a UKESF scholarship and completed two summer internships at Imagination Technologies. His interests in game and software development led him to research neuroevolutionary machine learning for video games.