With the games industry as his target, Daniel Hernandez's main research objective is to design and implement algorithms that, without any prior knowledge, generate strong game-playing agents for a wide variety of games. To tackle this "from scratch" learning, he uses, and contributes to, the fields of Multiagent Reinforcement Learning, Game Theory and Deep Learning.
Self-play is the main object of study in his research. Self-play is a training scheme for multiagent systems in which AIs are trained by acting on an environment against themselves or previous versions of themselves. Such a scheme bypasses obstacles faced by many other training approaches, which rely on existing datasets of expert moves or on human or AI agents to train against. Daniel's hope is that further development in self-play will allow game studios of all sizes to generate strong AI agents for their games in an affordable manner.
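The self-play idea can be sketched with a toy example: fictitious play in Rock-Paper-Scissors, where a learner repeatedly best-responds to a frozen snapshot of its own past behaviour, needing no external dataset or human opponent. The game, the best-response rule and all names below are illustrative assumptions, not Daniel's actual method.

```python
# Toy self-play sketch (illustrative assumption, not Daniel's algorithm):
# each generation, the learner best-responds to a frozen copy of its own
# empirical play so far, then folds that response back into its policy.

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # a beats BEATS[a]

def best_response(counts):
    """Action that beats the opponent's historically most frequent action."""
    most_common = max(ACTIONS, key=lambda a: counts[a])
    return next(a for a in ACTIONS if BEATS[a] == most_common)

def self_play(generations=300):
    counts = {a: 1 for a in ACTIONS}  # empirical action counts act as the "policy"
    for _ in range(generations):
        frozen = dict(counts)               # previous version of itself
        counts[best_response(frozen)] += 1  # improve against that snapshot
    total = sum(counts.values())
    return {a: counts[a] / total for a in ACTIONS}  # empirical mixed strategy

policy = self_play()
# The empirical mixture drifts toward the uniform Nash equilibrium of the game.
```

Even this tiny loop shows the appeal: the opponent pool comes for free from the agent's own history, which is what makes self-play attractive when no expert data exists.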
A storyteller by nature, Daniel has a strong track record of outreach through talks and workshops both in the UK and internationally. By sharing his journey, insights and discoveries, he hopes to both inspire and instruct students, researchers and developers, helping them realise the potential that Reinforcement Learning has to improve the games industry.
His passionate work on machine learning goes beyond crafting strong game-playing agents. He sees the potential of using AI to simplify and automate a wide range of tasks in the games industry. He has led successful projects that applied machine learning to automate multiagent game balancing, alleviating the burden of tuning games by hand.
Daniel received an MEng in Computing: Games, Vision & Interaction from Imperial College London. Wanting to combine the power of AI with the creativity of video games, Daniel began a PhD journey to explore the misty lands of Multi-Agent Reinforcement Learning (MARL).