IGGI PhD Projects 2021

PhD projects offered for 2021. If you are interested in any of the listed projects and would like further details, or to discuss a project, please email the project supervisor. Thank you.

(List updated on 29 April 2021)

Professors Pat Healey and Greg Slabaugh, Queen Mary University of London. Email g.slabaugh@qmul.ac.uk

Project 1: Augmented Reality (AR)-Mediated Conversation (Project added 29/04/21).

This research project will explore the potential of Augmented Reality to mediate richer and more engaging forms of human-human interaction. Using state-of-the-art augmented reality headsets, the research will explore how technology can enhance specific aspects of human conversation by augmenting verbal and non-verbal signals with automatically generated audio-visual cues derived from real-time sensing. The research will explore the asymmetric case (one "augmented" speaker) as well as the symmetric case, where both speakers have AI prompts provided by headsets. The system will be tested in various settings, such as conversational games (e.g. twenty questions) and multi-player AR computer games. You will do basic research on human interaction combined with computer vision and AI.

 

Professor Sebastian Deterding, University of York. Email sebastian.deterding@york.ac.uk

Project 2: Designing for Curiosity in and Beyond Games (Project added 3/12/20).

Puzzles, mazes, half-hidden maps, mysterious characters, suspenseful stories: these are but some of the things that pique a player's curiosity and pull them deeper into a good game. Psychologists recognise that curiosity is a powerful human motive that drives learning and exploration. Yet researchers have paid surprisingly little attention to how games evoke curiosity, and how we can design games or other interactive media, such as museum exhibits, to optimally support it. This project aims to systematically explore how design evokes curiosity and how we can identify and support this, and is open to a wide range of methodological backgrounds.

 

Professor Sebastian Deterding, University of York. Email sebastian.deterding@york.ac.uk

Project 3: Novelty Optimisation (Project added 3/12/20).

New levels, new characters, new items, new opponents: Novelty is a major game feature stoking sustained player curiosity and interest. Too much repetition, and players get bored. But is there such a thing as too much novelty? Games already do automatic difficulty balancing – finding just the right level of challenge. Can we do the same for novelty – identify and automatically balance the right amount of novel content we serve to players? This project would benefit from a computational methods background, such as computational psychology, cognitive science, machine learning, or procedural content generation, and an interest in player psychology.
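As a toy sketch of what "novelty balancing" could mean computationally (the engagement signal, target value and controller below are illustrative assumptions, not the project's method), a simple proportional controller can adjust the rate at which novel content is served:

    # Illustrative sketch: balancing novelty like difficulty, via a
    # proportional controller. The engagement signal and all names are
    # hypothetical assumptions for illustration only.
    import random

    def update_novelty_rate(novelty_rate, engagement, target=0.7, gain=0.1):
        """Nudge the probability of serving novel content toward the
        level that keeps measured engagement near the target."""
        error = target - engagement
        return min(1.0, max(0.0, novelty_rate + gain * error))

    def next_item(novelty_rate, seen_items, novel_items):
        """Serve a novel item with probability novelty_rate, else repeat."""
        pool = novel_items if random.random() < novelty_rate else seen_items
        return random.choice(pool)

    # A bored player (engagement below target) gets more novelty:
    print(update_novelty_rate(0.5, engagement=0.4))  # -> 0.53

A real system would of course need a richer model than a single scalar, since low engagement can signal too little novelty or too much; that ambiguity is exactly what the project would investigate.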

 

Professor Sebastian Deterding, University of York. Email sebastian.deterding@york.ac.uk

Project 4: Theories of change in applied games (Project added 3/12/20).

Applied games, serious games, and gamified applications all try to change individuals and communities for the better, be it teaching a new language (as in Duolingo) or encouraging physical activity (as in Pokémon Go). However, much work in the field focuses on whether a game or gamified app would be effective under ideal laboratory conditions. In doing so, it ignores important questions like: How do people even learn your game exists? And what would motivate them to try it out? In other words, many applied games are missing what development workers and activists call a "theory of change": a fleshed-out, end-to-end plan for how your intervention will actually produce change. This project is about systematically exploring what theories of change (if any) currently exist in applied gaming, and how we might make applied games more effective and efficient by using theories of change. It would benefit from a background in social science methods and design research.

 

Dr Ildar Farkhatdinov, Lecturer in Robotics, Queen Mary University of London. Email i.farkhatdinov@qmul.ac.uk

Project 1: Intelligent human-computer interaction for virtual reality games (Project added 1/12/20).

Virtual reality-based games and applications require immersive human-computer interaction methods. In this project, you will explore the development of novel interaction methods based on wearable sensing, robotic systems and human perception, for applications in virtual reality games and training simulators. In particular, you will explore how new ways of navigating (walking/moving) virtual scenes can be efficiently combined with multi-modal sensory feedback (visual, haptic, audio). The project will involve VR scene development, integration with hardware (sensors and haptic interfaces), experimental validation with human participants, and application to a game-based VR training simulator.

 

Dr Ildar Farkhatdinov, Lecturer in Robotics, Queen Mary University of London. Email i.farkhatdinov@qmul.ac.uk

Project 2: Gamification for human-operator training in telerobotics (Project added 1/12/20).

Telerobotics (remotely controlled robots) is widely used in applications in which humans cannot perform manipulation tasks directly (hazardous and remote environments). Training operators to use such robots can be difficult. In this project you will develop novel training methodologies based on gamification techniques and virtual reality simulators, and demonstrate their efficacy compared to conventional training methods. The application areas may include telerobotic surgery or nuclear waste decommissioning.

 

Dr Ildar Farkhatdinov, Lecturer in Robotics, Queen Mary University of London. Email i.farkhatdinov@qmul.ac.uk

Project 3: Motivating exercising for wheelchair users with smartphone games (Project added 1/12/20).

There are currently no solutions that allow wheelchair users to monitor their physical activity and that support physical exercise for maintaining fitness and wellbeing. You will work on developing a game-based wearable and smartphone solution that tracks the mobility of an individual wheelchair user and prompts physical exercise when necessary. The developed system will provide means to share data with other wheelchair users and their families, motivating physical activity and providing wellbeing support through gamification techniques. This project is a collaboration with the medical school.

 

Dr Ildar Farkhatdinov, Lecturer in Robotics, Queen Mary University of London. Email i.farkhatdinov@qmul.ac.uk

Project 4: Game theory to study animal-robot interaction and behaviour (Project added 1/12/20).

In this project you will use game-theoretic approaches to design and analyse experiments to study animal behaviour and development. In particular, you will learn how to use gamification to build interactive robotic systems to run behavioural experiments, and how to use game theory to model and understand the results. The outcome of the project will lead to a better understanding of human and animal psychology. This project is a collaboration with the experimental psychology department.
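Purely as a toy illustration of the game-theoretic modelling involved (the game, payoffs and function below are illustrative assumptions, not part of the project), this sketch computes the mixed-strategy equilibrium of a 2x2 game:

    # Toy illustration: mixed-strategy Nash equilibrium of a 2x2 game,
    # e.g. a matching-pennies-like interaction between an animal ("row")
    # and a robot ("column"). Payoffs here are made up.
    def mixed_nash_2x2(A, B):
        """A[i][j]: row player's payoff; B[i][j]: column player's payoff.
        Returns (p, q): probability row plays action 0, column plays action 0.
        Assumes an interior (fully mixed) equilibrium exists."""
        # Column player's mix q makes the row player indifferent.
        q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
        # Row player's mix p makes the column player indifferent.
        p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
        return p, q

    # Matching pennies: the only equilibrium is to randomise 50/50.
    A = [[1, -1], [-1, 1]]   # row wins on a match
    B = [[-1, 1], [1, -1]]   # column wins on a mismatch
    print(mixed_nash_2x2(A, B))  # -> (0.5, 0.5)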

 

Dr Claudio Guarnera, University of York.  Email claudio.guarnera@york.ac.uk

Project 1: Investigating human colour constancy mechanisms by means of games and AR/VR

Colour constancy is the property of the Human Visual System that allows us to adapt to the illumination in a scene and perceive surface colours as a constant, intrinsic property of real-world objects, even when the illumination changes. Current understanding of such mechanisms is limited, since it has been derived from experiments on flat, isolated patches on uniform backgrounds, very different from real-world scenarios. This project will make use of games and VR/AR to extend the understanding of colour constancy by investigating the combined effect of several real-world cues used by the HVS while observers are immersed in natural environments.

 

Dr Claudio Guarnera, University of York. Email claudio.guarnera@york.ac.uk

Project 2: Machine-learning based skin rendering for games

Simulating the photo-realistic appearance of the human face and skin plays a fundamental role in videogames. In fact, human perception creates a strong revulsion toward things that appear almost human, but not quite: a phenomenon named the "uncanny valley".
Human skin has a complex structure, with several layers containing blood vessels, connective tissues, etc. Furthermore, skin colour changes with time, and depends on both emotional and physical state. Therefore, it is difficult to simulate realistic skin appearance in real time.
This project will extend the state of the art on real-time facial skin rendering for games, in both static and dynamic conditions, relying on machine-learning techniques.

Links: https://pure.york.ac.uk/portal/en/publications/practical-measurement-and-reconstruction-of-spectral-skin-reflectance(74f105b8-c195-45da-b1fa-b55e3b0129b6).html

 

Dr Claudio Guarnera, University of York. Email claudio.guarnera@york.ac.uk

Project 3: Consistent material appearance in game development tools

Material appearance of virtual objects depends on the underlying material model implementation in rendering software and game development packages. Digital 3D assets for games evolve through collaboration among several teams, and it is common to use many different 3D tools. A lack of standards to exchange material parameters and data between rendering tools means that artists in digital 3D prototyping for games often have to manually match the appearance of materials to a reference by tweaking available parameters. This process is time-consuming and error-prone. This project will focus on automatic solutions to enhance digital creativity, by providing consistent material appearance across different rendering tools and material models.
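As a toy instance of cross-model appearance matching (the angle parameterisation and brute-force search are deliberate simplifications for illustration), the sketch below remaps a Phong specular exponent to the Blinn-Phong exponent whose lobe matches it best:

    # Toy instance of cross-model parameter remapping: find the
    # Blinn-Phong exponent whose specular lobe best matches a given
    # Phong lobe, by brute-force least squares over sampled angles.
    # A real system would match full BRDFs perceptually (see link below).
    import numpy as np

    def remap_phong_to_blinn(n_phong):
        theta = np.linspace(0.0, np.pi / 2, 500)      # reflection/view angle
        phong = np.cos(theta) ** n_phong              # Phong lobe
        candidates = np.arange(1.0, 8.0 * n_phong, 0.5)
        # Blinn-Phong uses the half-angle, roughly theta / 2.
        losses = [np.sum((np.cos(theta / 2) ** n - phong) ** 2)
                  for n in candidates]
        return candidates[int(np.argmin(losses))]

    print(remap_phong_to_blinn(32.0))  # roughly the 4x rule of thumb (~128)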

Links: https://pure.york.ac.uk/portal/en/publications/perceptually-validated-crossrenderer-analytical-brdf-parameter-remapping(3add34b8-5962-40d0-8106-e1e8d37659aa).html

 

Dr Patrik Huber, University of York. Email patrik.huber@york.ac.uk

Project 1:  Creating 3D face avatars of players for immersive playing and social experiences

This project aims to use computer vision and machine learning techniques to automatically create 3D face avatars of players from images or a video of the person. The student is expected to have a strong background and interest in computer vision, deep learning, computer graphics, and linear algebra.
Further reading:
- https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.13382
- https://dl.acm.org/doi/abs/10.1145/3395208

 

Dr Patrik Huber, University of York. Email patrik.huber@york.ac.uk

Project 2: Using automatic face analytics for professional e-sports

This project aims to use and develop automatic face analysis techniques to analyse professional e-sports players while they are performing live matches or training. The aim is to use this data to analyse players’ emotional and physiological behaviour to then suggest how a player’s performance could be improved. The student is expected to have a strong background and interest in computer vision, deep learning, and linear algebra.

 

Dr Anne Hsu, Queen Mary University of London. Email anne.hsu@qmul.ac.uk

Project 1: AI for Coaching Difficult Communication and Conflict Resolution

This project builds on an existing AI that coaches people on language that is useful for handling difficult conversations, giving feedback, and conflict resolution. Currently, it is incorporated into an online training course, but there are many opportunities to expand this in larger, game-related contexts.

 

Dr Jo Iacovides, University of York. Email jo.iacovides@york.ac.uk

Project 1: Persuasive games: the role of emotion

Games and gamified techniques are increasingly being used for persuasive purposes, such as changing people's attitudes and behaviours. Though many interventions indicate short-term benefits, questions remain about how to design playful approaches that have long-lasting effects. In particular, the role of emotion in these experiences is not well understood. While recent work in HCI and games has highlighted how gameplay can involve a range of complex emotions, less is known about what sorts of emotional response can stimulate and sustain persuasive effects over the long term. The research could involve the use of digital technologies, augmented reality and/or virtual reality. Potential domains include environmental or health-related behaviour change.

 

Dr Lorenzo Jamone, Queen Mary University of London.  Email l.jamone@qmul.ac.uk

Project 1: Tactile interaction with Virtual Reality content. (Project added 23/11/20).

Most feedback in current VR applications is visual. But what if you could "touch" and "feel" everything you see in VR? Tactile videogames, tactile internet, tactile TV! Feeling the texture of virtual objects, understanding whether they are hard or soft, making it easier to pick them up and move them around. In this project the student will explore the use of vibrating motors distributed over the human hand (e.g. in a wearable glove) to give tactile feedback about the physical interactions happening in virtual reality. The project will require very basic knowledge of electronics and good programming skills.
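As a minimal sketch of the feedback mapping involved (all names and the linear mapping are illustrative assumptions, not a specific device's API), per-finger contact depths reported by a VR physics engine can be mapped to motor amplitudes:

    # Illustrative sketch: map per-finger contact depths reported by a
    # VR physics engine to vibration amplitudes for glove motors.
    MAX_DEPTH_M = 0.01   # contact depths at/above 1 cm give full vibration

    def contact_to_amplitudes(contact_depths_m):
        """contact_depths_m: dict finger_name -> penetration depth (metres).
        Returns dict finger_name -> motor amplitude in [0, 1]."""
        return {finger: min(1.0, depth / MAX_DEPTH_M)
                for finger, depth in contact_depths_m.items()}

    # e.g. index finger pressing 4 mm into a virtual surface:
    print(contact_to_amplitudes({"index": 0.004, "thumb": 0.0}))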

 

Dr Lorenzo Jamone, Queen Mary University of London.  Email l.jamone@qmul.ac.uk

Project 2: Artificial creativity: the creation and use of new tools. (Project added 23/11/20).

The ability to create and use tools is one of the most striking manifestations of animal intelligence. Indeed, the creation of new tools has marked the evolution of the human species throughout history, being one of the most important factors in humans becoming the dominant species on Earth. But what are the main cognitive processes underlying this special kind of creativity? And how can they be reproduced in an artificial agent? In this project the student will explore the psychology literature on animal and human tool use, and will develop a computational model that supports "tool innovation": the creation and use of a novel tool for a given task (e.g. inventing the wheel!). The project will require very good programming skills, and possibly some background in machine learning and AI, in addition to a keen interest in human and animal cognition.

 

Dr Lorenzo Jamone, Queen Mary University of London.  Email l.jamone@qmul.ac.uk

Project 3: Serious games: a tactile Rubik's cube. (Project added 23/11/20).

Imagine the classic Rubik's cube, but this time, to solve the cube you do not need to rotate the faces; you "just" need to touch the coloured squares. But you need to do it in the right way! To win the game, the user should discover the correct "tactile pattern" (i.e. how to touch the cube); when the correct pattern is discovered, the user must remember it for a few more trials; then, a new pattern must be discovered. In this project the student will develop the AI of this tactile game: what novel patterns to present to the user in order to train specific cognitive abilities, e.g. memory, attention, problem solving. Existing brain games are mostly visual... but this one will be different! The hardware will be a 3D-printed sensorised cube that can collect tactile and motion data and send them to a PC/smartphone app via a wireless connection (e.g. Bluetooth). The project requires basic electronics skills and very good programming skills, ideally with some background in machine learning and AI.
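A minimal sketch of the pattern logic described above (all details are illustrative assumptions, not the project's design):

    # Minimal sketch of the game loop's pattern logic: a "tactile
    # pattern" is an ordered list of face indices, and the player must
    # reproduce it to score.
    import random

    def new_pattern(length, n_faces=6):
        return [random.randrange(n_faces) for _ in range(length)]

    def play_round(pattern, touches):
        """touches: face indices reported by the sensorised cube."""
        return touches == pattern

    pattern = new_pattern(length=4)
    # After a correct reproduction, an adaptive AI could lengthen the
    # pattern to train memory, or shorten it after repeated failures.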

 

Dr Pengcheng Liu, University of York.  Email pengcheng.liu@york.ac.uk

Project 1: Intelligent Human-Adaptation for Mixed-Reality Rehabilitation Games (Project added 7/12/20).

Learning can be enabled and reinforced in serious games by building in rewards, including goals, narratives, rules, multisensory cues and interactivity. In rehabilitation, computational intelligence can provide significant benefits to patients when applied in virtual reality. However, adaptation to the changing requirements of patients and therapists, a key requirement in rehabilitation games, is still a research area that needs much attention, especially for mixed-reality rehabilitation games. This project aims to design an intelligent human-adaptation system which adapts to the patient's capability, exercise difficulty level and functional status, and supplements the therapist's input with a virtual therapist.
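As a minimal illustration of the kind of adaptation rule involved (an assumption for illustration, not the project's prescribed method), a classical 2-down/1-up staircase adjusts exercise difficulty to the patient's current ability:

    # Minimal sketch of one classical adaptation rule: a 2-down/1-up
    # staircase that raises difficulty after two consecutive successes
    # and lowers it after any failure.
    def adapt_difficulty(level, recent_successes, step=1,
                         min_level=1, max_level=10):
        if len(recent_successes) >= 2 and all(recent_successes[-2:]):
            level += step
        elif recent_successes and not recent_successes[-1]:
            level -= step
        return max(min_level, min(max_level, level))

    print(adapt_difficulty(5, [True, True]))   # -> 6
    print(adapt_difficulty(5, [True, False]))  # -> 4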

 

Dr Pengcheng Liu, University of York.  Email pengcheng.liu@york.ac.uk

Project 2: Transferring from Games to Real Robot (Project added 7/12/20).

Deep reinforcement learning is a prevailing approach for a robot to learn tasks by training the policy in simulated environments; transfer learning can then be used to learn the final policy on the real-world robot. However, accurately simulating the real world is hard, and data obtained from simulation may not be directly applicable to the real robot: the "reality gap" problem. Video games can be a feasible solution. This project aims to explore how general video games can be used directly, instead of fine-tuned simulations, for sim-to-real transfer and, more interestingly, how the agent can autonomously learn a new action space if a mismatch exists between game and robot actions.

 

Dr Pengcheng Liu, University of York.  Email pengcheng.liu@york.ac.uk

Project 3: Sim-to-Real: Deep Reinforcement Learning for Autonomous Driving in Mixed/Virtual Reality Setting (Project added 7/12/20).

Autonomous driving promises a safe, comfortable and efficient driving experience. There are many challenges, such as navigation in constrained environments and unpredictable interactions among vehicles. Deep reinforcement learning is one of the promising approaches that can autonomously generate intelligent driving policies to cope with these challenges. Nevertheless, learning safely becomes difficult in dynamic and crowded scenarios, especially when collision avoidance must be included in the learning process. This project aims to develop an intelligent system in a mixed/virtual reality setting that enables the learning of driving policies for autonomous vehicles operating in dynamic, shared scenarios.

 

Dr Pengcheng Liu, University of York.  Email pengcheng.liu@york.ac.uk

Project 4: Virtual/Mixed Reality Training System (Games) for Intelligent Human-Robot Collaboration (Project added 7/12/20).

This project aims to design a highly interactive and immersive Virtual/Mixed Reality Training System, in the form of serious games, that simulates in real time the cooperation between industrial robotic manipulators and humans executing simple manufacturing tasks. The project will involve exploring the interaction techniques used to facilitate the implementation of virtual human-robot collaboration. In the virtual setting, physical safety issues such as contacts and collisions will be considered; more importantly, mental safety is crucial, as the human operator should be given augmented situational awareness and an enhanced perception of the robot's motion. The developed system can be used to investigate the acceptability of human-robot collaboration.

 

Dr Pengcheng Liu, University of York.  Email pengcheng.liu@york.ac.uk

Project 5: Designing for Health and Safety in VR/MR Games (Project added 7/12/20).

VR/MR treatment (games) is expected to be used when prescribed by an appropriate clinical expert. However, as an emerging technology, VR/MR rehabilitation games can place patients at risk during self-diagnosis, self-assistance and self-treatment. Moreover, VR/MR games may cause problems with cognition, experience, memory and judgement, and with distinguishing between oneself and the environment. This project aims to explore ways in which we can define, implement and evaluate health and safety parameters in the design of VR/MR games.

 

Professor Simon Lucas, Queen Mary University of London. Email simon.lucas@qmul.ac.uk

Project 1: Game AI for Real-World Decision Making

Recent progress in Game AI has demonstrated that given enough data from human gameplay, or experience gained via simulations, machines can rival or surpass the most skilled human players in classic games such as Go, or commercial computer games such as Starcraft.

The aim of this project is to understand how game AI could be applied to improve real-world decision making, both by building better simulation models and developing AI that’s better suited to messy real-world situations.

For more details see Goodman, Risi and Lucas (2020) https://arxiv.org/abs/2009.08922

 

Professor Simon Lucas, Queen Mary University of London. Email simon.lucas@qmul.ac.uk

Project 2: Hierarchical Statistical Planning for Game AI

Statistical Forward Planning algorithms such as Monte Carlo Tree Search and Rolling Horizon Evolution often perform remarkably well across a range of games. However, in some cases the action space of a game is low-level and requires long action sequences to achieve meaningful effects, causing particular difficulties when the reward landscape is flat. A possible solution is to form plans in a higher-level or macro action space. The aim of this project is to further the state of the art in this area and demonstrate progress on a range of challenging games.
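One simple instance of planning in a macro action space is Rolling Horizon Evolution whose genes are atomic actions held for several ticks. The sketch below is illustrative only; the forward-model interface (copy/step/score) is an assumed abstraction, not any particular framework's API:

    # Sketch of Rolling Horizon Evolution over macro actions: each gene
    # is an atomic action repeated macro_len times, so plans cover
    # macro_len times the nominal horizon.
    import random

    def rhea_macro(state, actions, horizon=10, macro_len=5,
                   pop=20, gens=30, mutate_p=0.3):
        def rollout(plan):
            s = state.copy()
            for a in plan:
                for _ in range(macro_len):
                    s.step(a)
            return s.score()

        population = [[random.choice(actions) for _ in range(horizon)]
                      for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=rollout, reverse=True)
            elite = population[: pop // 2]
            population = elite + [
                [random.choice(actions) if random.random() < mutate_p else a
                 for a in random.choice(elite)]
                for _ in range(pop - len(elite))]
        return max(population, key=rollout)[0]  # first macro action; replan next tick

    class Walk1D:
        """Tiny demo state: score is the position reached."""
        def __init__(self, x=0): self.x = x
        def copy(self): return Walk1D(self.x)
        def step(self, a): self.x += a
        def score(self): return self.x

    print(rhea_macro(Walk1D(), actions=[-1, +1]))   # -> 1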

 

Dr Fiona McNab, University of York.  Email fiona.mcnab@york.ac.uk

Project 1: Understanding age-related changes in cognition using games

Using data collected with smartphone games, we have uncovered interesting changes in the way we hold information in mind as we age: https://www.pnas.org/content/pnas/112/20/6515.full.pdf This raises many interesting questions about the nature of these changes, why they seem to be greater for some individuals than for others, and how these changes might affect cognition.

 

Dr Fiona McNab, University of York.  Email fiona.mcnab@york.ac.uk

Project 2: Cognitive training using games

Cognitive training is a controversial topic, but some properly controlled scientific studies have given promising results. However, positive findings in older adults have been particularly limited. Using insights about the nature of cognitive change associated with healthy ageing to develop scientifically-informed training may be the answer.

 

Dr Fiona McNab, University of York.  Email fiona.mcnab@york.ac.uk

Project 3: Understanding the limitations of working memory and the role of attention using games

Our ability to hold information in mind for a short time (working memory) is vital for daily life. Working memory capacity is limited, and varies between individuals. Our recent work, using data collected with smartphone games, has identified two potential bases for our limited working memory capacity, which appear to involve separate mechanisms:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4035130/pdf/xhp_40_3_960.pdf
https://www.pnas.org/content/pnas/112/20/6515.full.pdf

This raises many questions about the nature of these mechanisms and their contribution to cognition.

 

Professor Ioannis Patras, Queen Mary University of London. Email i.patras@qmul.ac.uk

Project 1: Multi-modal non-verbal Behaviour Analysis in the context of games

This project aims at developing machine learning methodologies for the analysis of human behaviour at various levels in the context of computer gaming. This involves the recognition of the player's affective state, engagement, perception of difficulty, and performance, based on the analysis of facial expressions, body gestures and possibly physiological signals from wearable sensors. The focus will be on deep learning architectures, and in particular on ways of learning in the presence of noise, uncertainty and limited data, and on inter-personal, quickly adaptable models.

Please look at related publications at:
https://scholar.google.co.uk/citations?user=OBYLxRkAAAAJ&hl=en (in particular works with Wenxuan Mou, Yang, and Koelstra).

 

Dr Philip Quinlan, University of York. Email philip.quinlan@york.ac.uk

Project 1: Reading and gaming

The aim of the project is to harness the power of electronic hand-held devices (i.e., smartphones, tablets) to develop evidence-based interactive software (an app) to facilitate reading in boys who are struggling to learn to read. The basic idea is to embed simple reading tasks in the context of an interactive game. Part of the novelty will be in using spoken word recognition tasks.

 

Dr Paulo Rauber, Queen Mary University of London. Email p.rauber@qmul.ac.uk

Project 1: Principled and Scalable Exploration Techniques for Reinforcement Learning (Project added on 23/11/20).

Reinforcement learning has received significant attention due to its success in training agents that play popular games such as Go, Starcraft II, Dota 2, and others. Inefficient exploration, one of the earliest problems recognized in the field, still limits the success of reinforcement learning approaches that do not require domain knowledge. Although techniques like posterior sampling convincingly solve hard exploration problems in simple domains (https://searchworks.stanford.edu/view/11891201), scalable exploration techniques remain elusive. In this project, you will develop principled and scalable exploration techniques based on reducing model uncertainty (https://arxiv.org/abs/1609.04436).
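As a concrete point of departure, posterior sampling in its simplest setting (a Bernoulli bandit with Beta posteriors) can be sketched in a few lines; this toy example is illustrative and not the project's prescribed method:

    # Posterior (Thompson) sampling for a Bernoulli bandit: act greedily
    # with respect to a sample from the posterior, so uncertain arms
    # keep getting explored.
    import random

    def thompson_step(successes, failures):
        """successes/failures: per-arm counts. Returns the arm to pull."""
        samples = [random.betavariate(s + 1, f + 1)
                   for s, f in zip(successes, failures)]
        return samples.index(max(samples))

    # Two arms with true reward probabilities 0.3 and 0.6:
    probs, succ, fail = [0.3, 0.6], [0, 0], [0, 0]
    for _ in range(1000):
        arm = thompson_step(succ, fail)
        if random.random() < probs[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    print(succ, fail)   # pulls concentrate on the better arm

The open question the project targets is how to keep this kind of principled uncertainty-driven exploration while scaling to deep function approximation.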

 

Professor Josh Reiss, Queen Mary University of London.  Email Joshua.reiss@qmul.ac.uk

Project 1: Machine learning of procedural audio

Game sound design relies heavily on pre-recorded samples, but this approach is inflexible, repetitive and uncreative. An alternative is procedural audio, where sounds are created in real time using software algorithms. But many procedural audio techniques are low quality, or tailored only to a narrow class of sounds. Machine learning from sample libraries to select, optimise and improve the procedural models could be the key to transforming the industry and creating procedural auditory worlds. This work will build on recent high-impact research from the team to investigate whether procedural audio can fully replace the use of pre-recorded sound effects.
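To make the idea concrete, here is a hedged sketch of fitting a procedural model to a recorded sample by minimising a log-spectral distance; the one-parameter "model" (exponentially decaying noise) is a toy stand-in for a real procedural effect:

    # Hedged sketch: random search over a procedural model's parameters,
    # minimising a log-spectral distance to a target sample.
    import numpy as np

    SR = 16000
    def toy_model(decay, seconds=0.5, seed=0):
        rng = np.random.default_rng(seed)
        t = np.arange(int(SR * seconds)) / SR
        return rng.standard_normal(t.size) * np.exp(-decay * t)

    def spectral_loss(a, b):
        A = np.abs(np.fft.rfft(a)) + 1e-9
        B = np.abs(np.fft.rfft(b)) + 1e-9
        return float(np.mean((np.log(A) - np.log(B)) ** 2))

    target = toy_model(decay=12.0, seed=1)   # stands in for a recording
    best = min(np.linspace(1.0, 30.0, 60),
               key=lambda d: spectral_loss(toy_model(d), target))
    print(best)   # recovers a decay near 12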

See http://fxive.com for examples of procedural sound effects

 

Professor Josh Reiss, Queen Mary University of London.  Email Joshua.reiss@qmul.ac.uk

Project 2: Exploiting game graphics rendering for sound generation

Procedural content generation supports the creation of rich and varied games, but sound design has not kept pace with such innovation. Often the visual aspects of every object in the scene may be procedurally rendered, yet sound designers still rely on pre-recorded sample libraries. However, much of the information required to determine the sounds is already there: the size, shape, material and density of objects have been set in order to determine how they are rendered. This topic explores how existing animation information, available in the game engine, may be used to generate the sounds produced when objects interact.
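As an illustration of the mapping involved, the sketch below uses modal synthesis with mode frequencies and dampings derived from object properties a game engine already stores; the scaling laws here are deliberately simplified assumptions:

    # Sketch of the idea: modal synthesis driven by physical properties.
    import numpy as np

    SR = 44100
    def impact_sound(size_m, stiffness, damping, n_modes=8, seconds=0.4):
        t = np.arange(int(SR * seconds)) / SR
        base_hz = stiffness / size_m          # smaller/stiffer -> higher pitch
        out = np.zeros_like(t)
        for k in range(1, n_modes + 1):
            freq = base_hz * k
            decay = damping * k               # higher modes die out faster
            out += np.sin(2 * np.pi * freq * t) * np.exp(-decay * t) / k
        return out / np.max(np.abs(out))

    wood = impact_sound(size_m=0.3, stiffness=60.0, damping=18.0)
    metal = impact_sound(size_m=0.3, stiffness=400.0, damping=3.0)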

 

Professor Josh Reiss, Queen Mary University of London.  Email Joshua.reiss@qmul.ac.uk

Project 3: Impossible physical models

Games often create and simulate worlds where interaction in the game is driven by physics. But what if the rules of physics were different? Imagine if the speed of light were slower, gravity were not constant, liquids had different viscosities and materials had different elasticities. This topic will explore how to create authentic simulations of worlds with unreal physical properties, and use them in a game context.
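As a toy illustration (the height-dependent gravity law is an invented example), a custom integrator makes such "impossible" rules straightforward to prototype:

    # Toy illustration: a projectile under gravity that fades with
    # height, something no stock physics engine provides directly.
    def simulate(y0, vy0, dt=0.01, steps=3000):
        y, vy, path = y0, vy0, []
        for _ in range(steps):
            g = 9.81 / (1.0 + 0.1 * max(y, 0.0))   # gravity fades with height
            vy -= g * dt
            y += vy * dt
            path.append(y)
            if y < 0.0:
                break
        return path

    # A 20 m/s launch climbs to roughly 67 m here, versus about 20 m
    # under constant gravity:
    print(max(simulate(0.0, 20.0)))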

 

Professor Josh Reiss, Queen Mary University of London.  Email Joshua.reiss@qmul.ac.uk

Project 4: Automatic mixing for game audio

Recent years have seen tremendous growth in intelligent systems that can mix and produce multitrack music content without the need for human intervention. Game content, with a huge potential number of audio sources, suffers from masking, poor intelligibility, and a lack of clarity and focus. It would clearly benefit from a "robot sound engineer" inside the games console, manipulating content based on the interaction between sound assets. But the rules for game content are quite different from music, e.g. spatial positioning is dictated by the gameplay. This topic will explore and evaluate intelligent systems to automatically mix game audio.
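As a minimal illustration of one ingredient of such a system (the function and the use of RMS in place of a perceptual loudness model are illustrative assumptions), stems can be scaled toward loudness shares weighted by a game-supplied importance:

    # Minimal sketch: scale each stem toward a target loudness share,
    # weighted by an importance the game supplies (e.g. dialogue over
    # ambience). RMS stands in for a proper perceptual loudness model.
    import numpy as np

    def auto_gains(stems, importance, target_rms=0.1):
        """stems: list of 1-D float arrays; importance: weights >= 0.
        Returns one linear gain per stem."""
        w = np.asarray(importance, dtype=float)
        w = w / w.sum()
        gains = []
        for stem, share in zip(stems, w):
            rms = np.sqrt(np.mean(stem ** 2)) + 1e-12
            gains.append(target_rms * share * len(stems) / rms)
        return gains

    # Dialogue twice as important as ambience:
    stems = [np.random.randn(1000), 0.1 * np.random.randn(1000)]
    print(auto_gains(stems, importance=[2.0, 1.0]))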

See B. De Man, J. D. Reiss and R. Stables, 'Ten years of automatic mixing,' 3rd Workshop on Intelligent Music Production, Salford, UK, 15 September 2017.

https://www.eecs.qmul.ac.uk/~josh/documents/2017/WIMP2017_DeManEtAl.pdf

 

Dr Søren Riis, Queen Mary University of London.  Email s.riis@qmul.ac.uk

Project 1: Information theory and combinatorics for deep-learning

Several breakthrough developments in deep learning have created a surge in applied AI, but the theoretical frameworks supporting these advances are lagging behind. The aim is to use practical implementations of experimental models to further develop the theory of one of the following topics (a small worked example follows the list):

- Entropy for autoregressive language models
- Information transfer and transfer learning
- Joint information in multi-agent systems
- Active learning and adaptive versus non-adaptive learning models
- Information theory and generative adversarial networks
- Entropy and games AI
- Machine learning and compression measures
- Cross-entropy measures
- Entropy measures for decision forests
- Combinatorics and learnability
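For instance, the first topic in concrete terms (illustrative only): the per-token cross-entropy, and hence perplexity, of an autoregressive model's predicted distributions against observed tokens:

    # Per-token cross-entropy (nats) and perplexity of an autoregressive
    # model's predictions; distributions here are made-up toy values.
    import math

    def cross_entropy(pred_dists, tokens):
        """pred_dists[i][t]: model probability of token t at step i."""
        nll = [-math.log(dist[tok]) for dist, tok in zip(pred_dists, tokens)]
        return sum(nll) / len(nll)

    dists = [{"a": 0.7, "b": 0.3}, {"a": 0.2, "b": 0.8}]
    observed = ["a", "b"]
    h = cross_entropy(dists, observed)
    print(h, math.exp(h))   # cross-entropy, and the corresponding perplexity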

 

Dr Søren Riis, Queen Mary University of London.  Email s.riis@qmul.ac.uk

Project 2: Autoregressive language models for automated theorem proving

One long-term challenge is to apply deep learning to mathematics. The project is to explore the application of transformer-based language models to theorem proving. The successful student needs a strong background in mathematics, and should expect to spend around three months learning suitable software for automated theorem proving and learning to run state-of-the-art libraries for transformer-based language models. The project is expected to focus on a specific sub-area of mathematics, e.g. within algebra, analysis or combinatorics. For pointers to recent relevant work see https://arxiv.org/abs/2009.03393

 

Dr Søren Riis, Queen Mary University of London.  Email s.riis@qmul.ac.uk

Project 3: Development of artificial language through cooperative multi-agent reinforcement learning

The proposed research focuses on the development of new multi-agent reinforcement learning algorithms for solving cooperative tasks via communication. It aims to combine ideas from recent research on the topic with hierarchical reinforcement learning to create a framework for artificial language learning. This research is partly motivated by the challenge of developing and investigating the interaction between humans and intelligent agents.

 

Dr Søren Riis, Queen Mary University of London.  Email s.riis@qmul.ac.uk

Project 4: Deep learning in Chemoinformatics

Graph transformation forms a natural model for chemical reaction systems and provides a sufficient level of detail to track individual atoms. This project is expected to be in collaboration with Professor Jotun Hein (Oxford) and me. RDKit is a powerful tool for chemoinformatics, and this research aims at combining deep learning models with mathematical models of chemistry. Some interest in chemistry is desirable but not essential. Good programming skills (Python) are imperative.
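As a minimal sketch of the bridge involved (the feature choices are illustrative assumptions), RDKit can turn a SMILES string into the node and edge lists a graph neural network consumes:

    # Minimal sketch: molecule -> graph for deep learning, via RDKit.
    from rdkit import Chem

    def mol_to_graph(smiles):
        mol = Chem.MolFromSmiles(smiles)
        nodes = [atom.GetAtomicNum() for atom in mol.GetAtoms()]
        edges = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx())
                 for b in mol.GetBonds()]
        return nodes, edges

    print(mol_to_graph("CCO"))   # ethanol: ([6, 6, 8], [(0, 1), (1, 2)])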

 

Professor Mark Sandler, Queen Mary University of London.  Email mark.sandler@qmul.ac.uk

Project 1: Intelligent Virtual Acoustics

In game play, it is important for the realism of the auditory experience to match the realism of the visual experience. This project will explore ways to create convincing acoustic environments with low computational resource, to be able to render acoustic scenes in consumer-grade devices.

Following recent work by a PhD student, this research will use Scattering Delay Networks to efficiently model acoustic spaces [1], and a combination of VBAP (Vector Base Amplitude Panning) [2] and binaural audio rendering to deliver the experience to the player. Currently, the technology can only deliver static experiences, so this project will explore extending it to six degrees of freedom (6DoF) [3], meaning the player can not only move their head but also relocate their body, physically or virtually, to move from room to room. What is really important is to create fast, smooth, realistic movement between rooms and from indoor to outdoor scenarios. A minimal sketch of the VBAP gain computation is given after the references.

Students would need a strong background in one or more of the following: Acoustics, Digital Signal Processing, game engines, user interaction design.

[1] E. De Sena, H. Hacιhabiboğlu, Z. Cvetković and J. O. Smith, "Efficient Synthesis of Room Acoustics via Scattering Delay Networks," in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 23, no. 9, pp. 1478-1492, Sept. 2015, doi: 10.1109/TASLP.2015.2438547.
[2] V. Pulkki, "Virtual Sound Source Positioning Using Vector Base Amplitude Panning," J. Audio Eng. Soc., vol. 45, no. 6, pp. 456-466, (1997 June.).
[3] Llewellyn, Gareth and Paterson, Justin (2020) Towards 6DOF: 3-D Audio for virtual, augmented and mixed realities. In: 3D Audio. Perspectives on Music Production. Routledge, NYC, USA. (In Press)
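The promised sketch of 2-D pairwise VBAP [2] (illustrative only; the function name and NumPy formulation are my own, and a real renderer would also handle speaker-pair selection, 3-D panning and spreading): the gains for the two loudspeakers flanking the source direction solve p = g1*l1 + g2*l2 and are then normalised to preserve loudness.

    # 2-D pairwise VBAP: solve for the two speaker gains, then normalise.
    import numpy as np

    def vbap_2d(source_deg, spk1_deg, spk2_deg):
        p = np.array([np.cos(np.radians(source_deg)),
                      np.sin(np.radians(source_deg))])
        L = np.array([[np.cos(np.radians(spk1_deg)), np.sin(np.radians(spk1_deg))],
                      [np.cos(np.radians(spk2_deg)), np.sin(np.radians(spk2_deg))]])
        g = p @ np.linalg.inv(L)          # unnormalised gains
        return g / np.linalg.norm(g)      # constant overall loudness

    # Source at 20 degrees between speakers at 0 and 45 degrees:
    print(vbap_2d(20.0, 0.0, 45.0))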

 

Professor Mark Sandler, Queen Mary University of London.  Email mark.sandler@qmul.ac.uk

Project 2: Virtual Placement of Objects in Acoustic Scenes

As Augmented Reality experiences are growing in importance, and the cost of the technology falls, it is increasingly of interest to develop advanced ways to insert “auditory objects” within mixed virtual-real scenes so that they interact acoustically with their environment exactly as if they were physically present. Examples of auditory objects include musical instruments, humans speaking, gun shots, and so on. This approach has the potential to increase immersion in films, games and music, wherever and however they are consumed – headphones, earbuds, stereo, 5.1 etc.

A significant difference compared to current approaches is that the virtual objects will have realistic dispersion characteristics and will interact acoustically (think: reverberation) as if they were really present in the physical space into which they are rendered. This will increase the engagement of players in Augmented Reality gaming by making the auditory experience all but indistinguishable from physical reality. An additional exciting possibility is new ways to enjoy live music concerts streamed to the home, particularly where they use games engines. Another is to increase the feeling of presence in virtual meetings, so that remote participants sound as if they are in the room with you.

 Students would need a strong background in one or more of the following: Acoustics, Digital Signal Processing, AI and Deep/Machine Learning, Audio Engineering.

 

Dr William Smith, University of York.  Email William.smith@york.ac.uk

Project 1: Places that don't exist

Imagine playing a video game inside your favourite movie, with scenes from the movie exactly recreated in all their detail. Or playing a game at a historical site, building or city that has since been destroyed, rendered photorealistically as it would have appeared. The goal of this project is to combine state-of-the-art 3D computer vision and procedural content generation to create game-ready scene models and assets from movies, contemporary photos, plans or works of art. 3D reconstruction techniques such as structure-from-motion or deep monocular depth estimation can be used to reconstruct raw models of the observed part of the scene. Deep learning based methods will then be used to extrapolate and clean the models to produce complete scene layouts with photoreal textures.
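As a flavour of the geometry involved, the following minimal sketch (a standard pinhole back-projection; the function and parameter names are illustrative) turns a monocular depth map into a raw 3-D point cloud:

    # Sketch of one standard pipeline step: back-project a depth map
    # into 3-D points using pinhole intrinsics (fx, fy, cx, cy), which
    # are assumed known or estimated.
    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """depth: HxW array of metric depths. Returns (H*W, 3) points."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    pts = depth_to_points(np.ones((480, 640)), fx=525, fy=525, cx=320, cy=240)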

Sample References:
https://github.com/skanti/scenecad
https://github.com/nianticlabs/monodepth2

 

Dr John R. Woodward, Queen Mary University of London.  Email j.woodward@qmul.ac.uk

Project 1: Automatically Designing Algorithms

We will automatically generate algorithms to be used in the context of games.

 

Dr John R. Woodward, Queen Mary University of London.  Email j.woodward@qmul.ac.uk

Project 2: AI to design gaming agents

We will use machine learning to design agents for games.

 

Dr John R. Woodward, Queen Mary University of London.  Email j.woodward@qmul.ac.uk

Project 3: Game Design

We will take a broad look at games to develop an architecture for constructing new games.