IGGI PhD Projects 2021

PhD projects offered for 2021:  If you are interested in any of the projects listed and would like further details and/or to discuss, please email the project supervisor. Thank you.

(List updated on 23 November 2020)

 

Dr Claudio Guarnera, University of York.  Email claudio.guarnera@york.ac.uk

Project 1: Investigating human colour constancy mechanisms by means of games and AR/VR

Colour constancy is the property of the Human Visual System that allows us to adapt to the illumination in a scene, and perceive surface colours as a constant, intrinsic property of real-world objects, even when the illumination changes. Current understanding of such mechanisms is limited, since it has been derived by means of experiments on flat, isolated patches on uniform background, very different from real-world scenarios. This project will make use of games and VR/AR to extend the understanding of colour constancy, by investigating at once the effect of several real-world cues used by the HVS, while observers are immersed in natural environments.

 

Dr Claudio Guarnera, University of York. Email claudio.guarnera@york.ac.uk

Project 2: Machine-learning based skin rendering for games

Simulating the photo-realistic appearance of the human face and skin plays a fundamental role in videogames. Human perception produces a strong revulsion towards things that appear almost, but not quite, human: a phenomenon known as the “uncanny valley”.
Human skin has a complex structure, with several layers containing blood vessels, connective tissue, etc. Furthermore, skin colour changes over time and depends on both emotional and physical state. It is therefore difficult to simulate realistic skin appearance in real time.
This project will extend the state of the art on real-time facial skin rendering for games, in both static and dynamic conditions, relying on machine-learning techniques.

Links: https://pure.york.ac.uk/portal/en/publications/practical-measurement-and-reconstruction-of-spectral-skin-reflectance(74f105b8-c195-45da-b1fa-b55e3b0129b6).html

 

Dr Claudio Guarnera, University of York. Email claudio.guarnera@york.ac.uk

Project 3: Consistent material appearance in game development tools

Material appearance of virtual objects depends on the underlying material model implementation in rendering software and game development packages. Digital 3D assets for games evolve through collaboration among several teams and it is common to use many different 3D tools. A lack of standards to exchange material parameters and data between rendering tools means that artists in digital 3D prototyping for games often have to manually match the appearance of materials to a reference, by tweaking available parameters. This process is time consuming and error prone. This project will focus on automatic solutions to enhance digital creativity, by providing consistent material appearance across different rendering tools and material models.

Links: https://pure.york.ac.uk/portal/en/publications/perceptually-validated-crossrenderer-analytical-brdf-parameter-remapping(3add34b8-5962-40d0-8106-e1e8d37659aa).html

 

Dr Patrik Huber, University of York. Email patrik.huber@york.ac.uk

Project 1:  Creating 3D face avatars of players for immersive playing and social experiences

This project aims to use computer vision and machine learning techniques to automatically create 3D face avatars of players from images or a video of the person. The student is expected to have a strong background and interest in computer vision, deep learning, computer graphics, and linear algebra.
Further reading:
- https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.13382
- https://dl.acm.org/doi/abs/10.1145/3395208

 

Dr Patrik Huber, University of York. Email patrik.huber@york.ac.uk

Project 2: Using automatic face analytics for professional e-sports

This project aims to use and develop automatic face analysis techniques to analyse professional e-sports players during live matches and training. The data will be used to analyse players’ emotional and physiological behaviour and then to suggest how a player’s performance could be improved. The student is expected to have a strong background and interest in computer vision, deep learning, and linear algebra.

 

Dr Anne Hsu, Queen Mary University of London. Email anne.hsu@qmul.ac.uk

Project 1: AI for Coaching Difficult Communication and Conflict Resolution

This project builds on an existing AI that coaches people on language that is useful for handling difficult conversations, giving feedback, and conflict resolution. Currently, it is incorporated into an online training course, but there are many opportunities to expand this in larger, game-related contexts.

 

Dr Jo Iacovides, University of York. Email jo.iacovides@york.ac.uk

Project 1: Persuasive games: the role of emotion

Games and gamified techniques are increasingly being used for persuasive purposes such as changing people’s attitudes and behaviours. Though many interventions indicate short-term benefits, questions remain about how to design playful approaches that have long-lasting effects. In particular, the role of emotion in these experiences is not well understood. While recent work in HCI and games has highlighted how gameplay can involve a range of complex emotions, less is known about which sorts of emotional response can stimulate and sustain persuasive effects over the long term. The research could involve the use of digital technologies, augmented reality and/or virtual reality. Potential domains include environmental or health-related behaviour change.

 

Dr Lorenzo Jamone, Queen Mary University of London.  Email l.jamone@qmul.ac.uk

Project 1: Tactile interaction with Virtual Reality content. (Project added 23/11/20).

Most feedback in current VR applications is visual. But what if you could "touch" and "feel" everything you see in VR? Tactile videogames, tactile internet, tactile TV! Feeling the texture of virtual objects, understanding whether they are hard or soft, making it easier to pick them up and move them around. In this project the student will explore the use of vibrating motors distributed over the human hand (e.g. using a wearable glove) to give tactile feedback about the physical interactions happening in Virtual Reality. The project will require a very basic knowledge of electronics and good programming skills.

 

Dr Lorenzo Jamone, Queen Mary University of London.  Email l.jamone@qmul.ac.uk

Project 2: Artificial creativity: the creation and use of new tools. (Project added 23/11/20).

The ability to create and use tools is one of the most striking manifestations of animal intelligence. Indeed, the creation of new tools has marked the evolution of the human species over history, being one of the most important factors helping humans to become the dominant species on earth. But what are the main cognitive processes underlying this special kind of creativity? And how can they be reproduced in an artificial agent? In this project the student will explore the psychology literature on animal and human tool use, and will develop a computational model that supports "tool innovation": the creation and use of a novel tool for a given task (e.g. creating the wheel!). The project will require very good programming skills, and possibly some background in machine learning and AI, in addition to a keen interest in human and animal cognition.

 

Dr Lorenzo Jamone, Queen Mary University of London.  Email l.jamone@qmul.ac.uk

Project 3: Serious games: a tactile Rubik's cube. (Project added 23/11/20).

Imagine the classic Rubik's cube. But this time to solve the cube you do not need to rotate the faces; you "just" need to touch the coloured squares. But you need to do it in the right way! To win the game, the user should discover the correct "tactile pattern" (i.e. how to touch the cube); when the correct pattern is discovered, the user must "remember it" for a few more trials; then, a new pattern must be discovered. In this project the student will develop the AI of this tactile game: what novel patterns to provide to the user in order to train specific cognitive abilities, e.g. memory, attention, problem solving. Existing brain games are mostly visual... but this one will be different! The "hardware" will be a 3D printed sensorized cube that can collect tactile and motion data and send them to a PC/smartphone app via a wireless connection (e.g. Bluetooth). The project requires basic electronics skills and very good programming skills, ideally with some background in machine learning and AI.

 

Professor Simon Lucas, Queen Mary University of London. Email simon.lucas@qmul.ac.uk

Project 1: Game AI for Real-World Decision Making

Recent progress in Game AI has demonstrated that given enough data from human gameplay, or experience gained via simulations, machines can rival or surpass the most skilled human players in classic games such as Go, or commercial computer games such as Starcraft.

The aim of this project is to understand how game AI could be applied to improve real-world decision making, both by building better simulation models and developing AI that’s better suited to messy real-world situations.

For more details see Goodman, Risi and Lucas (2020) https://arxiv.org/abs/2009.08922

 

Professor Simon Lucas, Queen Mary University of London. Email simon.lucas@qmul.ac.uk

Project 2: Hierarchical Statistical Planning for Game AI

Statistical Forward Planning algorithms such as Monte Carlo Tree Search and Rolling Horizon Evolution often perform amazingly well across a range of games. However, in some cases the action space of a game is low-level and requires long action sequences to achieve meaningful effects, causing particular difficulties when the reward landscape is flat. A possible solution is to form plans in higher-level or macro action spaces. The aim of this project is to further the state of the art in this area and demonstrate progress on a range of challenging games.
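For illustration only, the macro-action idea can be sketched in a few lines: each macro action repeats a primitive action several times, so a short plan covers a long horizon. Everything below (the hill-climbing planner, the toy line-walking task) is an assumption of this sketch, not part of the project.

```python
import random

def rollout(env_step, state, plan, macro_len):
    # Evaluate a plan of macro actions: each macro repeats one primitive
    # action macro_len times in the forward model.
    total = 0.0
    for action in plan:
        for _ in range(macro_len):
            state, reward = env_step(state, action)
            total += reward
    return total

def rolling_horizon(env_step, state, horizon, macro_len, n_iters=200, actions=(-1, 1)):
    # Random-mutation hill climber over macro-action sequences
    # (a minimal stand-in for Rolling Horizon Evolution).
    best = [random.choice(actions) for _ in range(horizon)]
    best_value = rollout(env_step, state, best, macro_len)
    for _ in range(n_iters):
        cand = list(best)
        cand[random.randrange(horizon)] = random.choice(actions)
        value = rollout(env_step, state, cand, macro_len)
        if value >= best_value:
            best, best_value = cand, value
    return best, best_value

# Toy sparse-reward task: walk along a line; reward only at position 20.
def step(pos, action):
    pos += action
    return pos, (1.0 if pos == 20 else 0.0)
```

With primitive actions (macro_len=1) a 4-step plan can never reach the reward at position 20, so the landscape is flat; with macro_len=5 the all-forward plan scores 1.0, showing why macro actions help.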

 

Dr Fiona McNab, University of York.  Email fiona.mcnab@york.ac.uk

Project 1: Understanding age-related changes in cognition using games

With data collected with smartphone games we have uncovered interesting changes in the way we hold information in mind as we age: https://www.pnas.org/content/pnas/112/20/6515.full.pdf  This raises many interesting questions about the nature of these changes, why they seem to be greater for some individuals compared to others, and how these changes might affect cognition.

 

Dr Fiona McNab, University of York.  Email fiona.mcnab@york.ac.uk

Project 2: Cognitive training using games

Cognitive training is a controversial topic, but some properly controlled scientific studies have given promising results. However, positive findings in older adults have been particularly limited. Using insights about the nature of cognitive change associated with healthy ageing to develop scientifically-informed training may be the answer.

 

Dr Fiona McNab, University of York.  Email fiona.mcnab@york.ac.uk

Project 3: Understanding the limitations of working memory and the role of attention using games

Our ability to hold information in mind for a short time (working memory) is vital for daily life. Working memory capacity is limited, and varies between individuals. Our recent work, using data collected with smartphone games, has identified two potential bases for our limited working memory capacity, which appear to involve separate mechanisms:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4035130/pdf/xhp_40_3_960.pdf
https://www.pnas.org/content/pnas/112/20/6515.full.pdf

This raises many questions about the nature of these mechanisms and their contribution to cognition.

 

Professor Ioannis Patras, Queen Mary University of London. Email i.patras@qmul.ac.uk

Project 1: Multi-modal non-verbal Behaviour Analysis in the context of games

This project aims at developing Machine Learning methodologies for the analysis, at various levels, of human behaviour in the context of computer gaming. This involves the recognition of the player’s affective state, engagement, perception of difficulty and performance based on the analysis of facial expressions, body gestures and possibly physiological signals from wearable sensors. The focus will be on Deep Learning architectures, and in particular on ways of learning in the presence of noise, uncertainty and limited data, and on inter-personal, quickly adaptable models.

Please, look at related publications at:
https://scholar.google.co.uk/citations?user=OBYLxRkAAAAJ&hl=en (in particular works with Wenxuan Mou, Yang, Koelstra).

 

Dr Philip Quinlan, University of York. Email philip.quinlan@york.ac.uk

Project 1: Reading and gaming

The aim of the project is to harness the power of electronic hand-held devices (i.e., smartphones, tablets) to develop evidence-based interactive software (an app) to facilitate reading in boys who are struggling to learn to read. The basic idea is to embody simple reading tasks in the context of an interactive game. Part of the novelty will be in using spoken word recognition tasks.

 

Dr Paulo Rauber, Queen Mary University of London. Email p.rauber@qmul.ac.uk

Project 1: Principled and Scalable Exploration Techniques for Reinforcement Learning (Project added on 23/11/20).

Reinforcement learning has received significant attention due to its success in training agents that play popular games such as Go, Starcraft II, Dota 2, and others. Inefficient exploration, one of the earliest problems recognized in the field, still limits the success of reinforcement learning approaches that do not require domain knowledge. Although techniques like posterior sampling convincingly solve hard exploration problems in simple domains (https://searchworks.stanford.edu/view/11891201), scalable exploration techniques remain elusive. In this project, you will develop principled and scalable exploration techniques based on reducing model uncertainty (https://arxiv.org/abs/1609.04436).
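The posterior sampling mentioned above can be illustrated on the simplest setting, a Bernoulli multi-armed bandit (this toy sketch is for illustration only and is much simpler than the project's intended scope):

```python
import random

def thompson_sampling(arm_probs, n_steps=2000, seed=0):
    # Posterior (Thompson) sampling for a Bernoulli bandit: keep a Beta(a, b)
    # posterior per arm, draw one sample from each posterior, and pull the arm
    # whose sample is highest. Exploration falls away as posteriors sharpen.
    rng = random.Random(seed)
    k = len(arm_probs)
    a = [1.0] * k  # posterior "successes + 1" for each arm
    b = [1.0] * k  # posterior "failures + 1" for each arm
    pulls = [0] * k
    for _ in range(n_steps):
        arm = max(range(k), key=lambda i: rng.betavariate(a[i], b[i]))
        reward = 1 if rng.random() < arm_probs[arm] else 0
        a[arm] += reward
        b[arm] += 1 - reward
        pulls[arm] += 1
    return pulls
```

Run on two arms with success probabilities 0.2 and 0.8, the agent quickly concentrates its pulls on the better arm; the open problem the project targets is making this kind of uncertainty-driven exploration scale beyond such tiny state spaces.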

 

Professor Josh Reiss, Queen Mary University of London.  Email Joshua.reiss@qmul.ac.uk

Project 1: Machine learning of procedural audio

Game sound design relies heavily on pre-recorded samples, but this approach is inflexible, repetitive and uncreative. An alternative is procedural audio, where sounds are created in real-time using software algorithms. But many procedural audio techniques are low quality, or tailored only to a narrow class of sounds. Machine learning from sample libraries to select, optimise and improve the procedural models, could be the key to transforming the industry and creating procedural auditory worlds. This work will build on recent high impact research from the team to investigate whether procedural audio can fully replace the use of pre-recorded sound effects.
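As a flavour of what "sounds created in real-time using software algorithms" means, here is a deliberately minimal procedural sound: white noise through a one-pole low-pass filter with a slow gust envelope, a classic starting point for wind-like effects. The function name and parameters are illustrative, not from the project.

```python
import math
import random

def wind_gust(duration_s=1.0, sample_rate=44100, cutoff_hz=400.0, seed=0):
    # Minimal procedural "wind": uniform white noise filtered by a one-pole
    # low-pass, shaped by a 0.5 Hz sinusoidal gust envelope.
    # Returns raw float samples in [-1, 1].
    rng = random.Random(seed)
    alpha = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)  # filter pole
    y = 0.0
    samples = []
    n = int(duration_s * sample_rate)
    for i in range(n):
        noise = rng.uniform(-1.0, 1.0)
        y = alpha * y + (1.0 - alpha) * noise  # one-pole low-pass step
        envelope = 0.5 * (1.0 + math.sin(2.0 * math.pi * 0.5 * i / sample_rate))
        samples.append(y * envelope)
    return samples
```

Because every parameter (cutoff, gust rate, duration) is exposed to code, the sound can vary endlessly at run time, which is exactly the flexibility that pre-recorded samples lack; the research question is how machine learning can fit such model parameters from sample libraries automatically.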

See http://fxive.com for examples of procedural sound effects

 

Professor Josh Reiss, Queen Mary University of London.  Email Joshua.reiss@qmul.ac.uk

Project 2: Exploiting game graphics rendering for sound generation

Procedural content generation supports creation of rich and varied games, but sound design has not kept pace with such innovation. Often the visual aspects of every object in the scene may be procedurally rendered, yet sound designers still rely on pre-recorded sample libraries. However, much of the information required to determine the sounds is already there. The size, shape, material and density of objects has been set in order to determine how they are rendered. This topic explores how existing animation information, available in the Game Engine, may be used to generate the sounds produced when objects interact.

 

Professor Josh Reiss, Queen Mary University of London.  Email Joshua.reiss@qmul.ac.uk

Project 3: Impossible physical models

Games often create and simulate worlds where interaction in the game is driven by physics. But what if the rules of physics were different? Imagine if the speed of light was slowed, gravity was not constant, liquids had different viscosities and materials had different elasticities. This topic will explore how to create authentic simulations of worlds with unreal physical properties, and use them in a game context.

 

Professor Josh Reiss, Queen Mary University of London.  Email Joshua.reiss@qmul.ac.uk

Project 4: Automatic mixing for game audio

Recent years have seen tremendous growth in intelligent systems that can mix and produce multitrack music content without the need for human intervention. Game content, with a huge potential number of audio sources, suffers from masking, poor intelligibility, and a lack of clarity and focus. It would clearly benefit from a ‘robot sound engineer’ inside the games console, manipulating content based on the interaction between sound assets. But the rules for game content are quite different from music, e.g. spatial positioning is dictated by the game play. This topic will explore and evaluate intelligent systems to automatically mix game audio.

See B. De Man, J. D. Reiss and R. Stables, 'Ten years of automatic mixing,' 3rd Workshop on Intelligent Music Production, Salford, UK, 15 September 2017.

https://www.eecs.qmul.ac.uk/~josh/documents/2017/WIMP2017_DeManEtAl.pdf

 

Dr Søren Riis, Queen Mary University of London.  Email s.riis@qmul.ac.uk

Project 1: Information theory and combinatorics for deep-learning

Several breakthrough developments in deep-learning have created a surge in applied AI. The theoretical frameworks supporting these advances are lagging behind. The aim is to use practical implementation of experimental models to develop further the theory of one of the following topics:

- Entropy for autoregressive language models
- Information transfer and transfer learning
- Joint information in multi-agent systems
- Active learning and adaptive versus non-adaptive learning models
- Information theory and generative adversarial networks
- Entropy and games AI
- Machine learning and compression measures
- Cross-entropy measures
- Entropy measures for decision forests
- Combinatorics and learnability

 

Dr Søren Riis, Queen Mary University of London.  Email s.riis@qmul.ac.uk

Project 2: Autoregressive language models for automated theorem proving

One long-term challenge is to apply deep learning to mathematics.
The project is to explore the application of transformer-based language models to theorem proving. The successful student needs to have a strong background in mathematics, and should expect to spend around three months learning suitable software for automated theorem proving and learning to run state-of-the-art libraries for transformer-based language models. The project is expected to focus on a specific sub-area of mathematics, e.g. within Algebra, Analysis or Combinatorics. For pointers to recent relevant work see https://arxiv.org/abs/2009.03393

 

Dr Søren Riis, Queen Mary University of London.  Email s.riis@qmul.ac.uk

Project 3: Development of artificial language through cooperative multi-agent reinforcement learning

The proposed research focuses on the development of new Multi-agent reinforcement learning algorithms for solving cooperative tasks via communication. It aims at combining ideas from recent research on the topic with hierarchical reinforcement learning to create a framework for artificial language learning. This research is partly motivated by the challenge of developing and investigating the interaction between humans and intelligent agents.

 

Dr Søren Riis, Queen Mary University of London.  Email s.riis@qmul.ac.uk

Project 4: Deep learning in Chemoinformatics

Graph transformation forms a natural model for chemical reaction systems and provides a sufficient level of detail to track individual atoms. This project is expected to be a collaboration with Professor Jotun Hein (Oxford) and me. RDKit is a powerful tool for investigating chemoinformatics, and this research aims at combining deep learning models with mathematical models of chemistry. Some interest in Chemistry is desirable but not essential. Good programming skills (Python) are imperative.

 

Professor Mark Sandler, Queen Mary University of London.  Email mark.sandler@qmul.ac.uk

Project 1: Intelligent Virtual Acoustics

In game play, it is important for the realism of the auditory experience to match the realism of the visual experience. This project will explore ways to create convincing acoustic environments with low computational resource, to be able to render acoustic scenes in consumer-grade devices.

Following recent work by a PhD student, this research will use Scattering Delay Networks to efficiently model acoustic spaces [1], and a combination of VBAP (Vector-Based Amplitude Panning) [2] and binaural audio rendering to deliver the experience to the player. Currently, the technology can only deliver static experiences, so this project will explore enhancing performance to cover 6 degrees of freedom (6DoF) [3], which means the player can move not only their head but relocate the body, either physically or virtually, to move from room to room. What’s really important is to create fast, smooth, realistic movement between rooms and from indoor to outdoor scenarios.
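To give a concrete feel for one of the building blocks, 2-D VBAP (Pulkki 1997, reference [2] below) expresses the source direction as a positive combination of two loudspeaker direction vectors and normalises the gains to constant power. The sketch below is an illustrative toy, not the project's implementation.

```python
import math

def vbap_2d(source_deg, spk1_deg, spk2_deg):
    # 2-D VBAP: solve p = g1*l1 + g2*l2 for the gains g1, g2, where p is the
    # source direction unit vector and l1, l2 are loudspeaker unit vectors,
    # then normalise to constant power (g1^2 + g2^2 = 1).
    def unit(deg):
        rad = math.radians(deg)
        return (math.cos(rad), math.sin(rad))
    (l11, l12), (l21, l22) = unit(spk1_deg), unit(spk2_deg)
    px, py = unit(source_deg)
    det = l11 * l22 - l21 * l12          # determinant of the speaker matrix
    g1 = (px * l22 - py * l21) / det     # 2x2 matrix inversion by hand
    g2 = (py * l11 - px * l12) / det
    norm = math.sqrt(g1 * g1 + g2 * g2)
    return g1 / norm, g2 / norm
```

For speakers at ±45° and a source straight ahead, both gains come out equal (1/√2 each); a source exactly at a speaker gets gain 1 on that speaker and 0 on the other.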

Students would need a strong background in one or more of the following: Acoustics, Digital Signal Processing, games engines, user interaction design.

[1] E. De Sena, H. Hacιhabiboğlu, Z. Cvetković and J. O. Smith, "Efficient Synthesis of Room Acoustics via Scattering Delay Networks," in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 23, no. 9, pp. 1478-1492, Sept. 2015, doi: 10.1109/TASLP.2015.2438547.
[2] V. Pulkki, "Virtual Sound Source Positioning Using Vector Base Amplitude Panning," J. Audio Eng. Soc., vol. 45, no. 6, pp. 456-466, (1997 June.).
[3] Llewellyn, Gareth and Paterson, Justin (2020) Towards 6DOF: 3-D Audio for virtual, augmented and mixed realities. In: 3D Audio. Perspectives on Music Production. Routledge, NYC, USA. (In Press)

 

Professor Mark Sandler, Queen Mary University of London.  Email mark.sandler@qmul.ac.uk

Project 2: Virtual Placement of Objects in Acoustic Scenes

As Augmented Reality experiences are growing in importance, and the cost of the technology falls, it is increasingly of interest to develop advanced ways to insert “auditory objects” within mixed virtual-real scenes so that they interact acoustically with their environment exactly as if they were physically present. Examples of auditory objects include musical instruments, humans speaking, gun shots, and so on. This approach has the potential to increase immersion in films, games and music, wherever and however they are consumed – headphones, earbuds, stereo, 5.1 etc.

A significant difference compared to current approaches is that the virtual objects will have realistic dispersion characteristics and will interact acoustically (think: reverberation) as if they were really present in the physical space they are being rendered into. This will increase the engagement of players in Augmented Reality gaming by making the auditory experience harder to distinguish from physical reality. An additional exciting possibility is for new ways to enjoy live music concerts streamed to the home, particularly where they use games engines. Another possibility is to increase the feeling of presence in virtual meetings, so that remote participants sound as if they are in the room with you.

 Students would need a strong background in one or more of the following: Acoustics, Digital Signal Processing, AI and Deep/Machine Learning, Audio Engineering.

 

Dr William Smith, University of York.  Email William.smith@york.ac.uk

Project 1: Places that don't exist

Imagine playing a video game inside your favourite movie, with scenes from the movie exactly recreated in all their detail. Or playing a game at a historical site, building or city that has since been destroyed, with photorealistic appearance as it would have appeared. The goal of this project is to combine state-of-the-art 3D computer vision and procedural content generation to create game-ready scene models and assets from movies, contemporary photos, plans or works of art. 3D reconstruction techniques such as structure-from-motion or deep monocular depth estimation can be used to reconstruct raw models of the observed part of the scene. Deep learning based methods will then be used to extrapolate and clean the models to produce complete scene layouts with photoreal textures.

Sample References:
https://github.com/skanti/scenecad
https://github.com/nianticlabs/monodepth2

 

Dr John R. Woodward, Queen Mary University of London.  Email j.woodward@qmul.ac.uk

Project 1: Automatically Designing Algorithms

We will automatically generate algorithms to be used in the context of games.

 

Dr John R. Woodward, Queen Mary University of London.  Email j.woodward@qmul.ac.uk

Project 2: AI to design gaming agents

We will design agents using ML to be used with games.

 

Dr John R. Woodward, Queen Mary University of London.  Email j.woodward@qmul.ac.uk

Project 3: Game Design

We will take a broad look at games to develop an architecture for constructing new games.