Although applicants are welcome to submit their own proposal ideas, our academics have also provided a list of targeted projects for which they are seeking PhD students. If you are interested, you can contact the relevant supervisor directly to discuss these opportunities:
Semantic understanding of TV programme content and structure to enable automatic enhancement and adjustment [Supervisors: Prof Mark Sandler & Prof Andrea Cavallaro]. The move from broadcast television to IP-delivered television offers an opportunity to move from predefined programmes that are experienced the same way by all viewers to more dynamic media experiences, where the programme can be personalised and tailored to the viewer’s environment and preferences. For example, the length of a programme could be reduced to fit the time a viewer has available (e.g. watching a 1-hour documentary on a 25-minute train journey), scenes of violence could be removed from a drama, or sound levels could be altered for viewers who are hard of hearing. To perform this personalisation we must understand the structure of the programme – the relative importance and inter-dependence of its elements, where those elements could be sounds, shots, scenes etc. – so that any alterations do not affect the viewer’s understanding and enjoyment. Building up these structures could place an extra burden on the production team – especially for archive material that has already been produced – so the aim of this project is to investigate ways to automatically extract these types of structure and use them to drive more dynamic media experiences.
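As a concrete illustration of the kind of adjustment such structures could drive, here is a minimal sketch of shortening a programme to fit the time a viewer has available. It is purely hypothetical: it assumes per-scene durations and importance scores have already been extracted automatically, and it ignores the inter-dependence constraints a real system would need.

```python
def trim_to_fit(scenes, available_secs):
    """Illustrative sketch only: each scene is a dict with 'duration'
    (seconds) and 'importance' (higher = more essential, assumed to be
    inferred automatically from programme structure). Greedily keep the
    most important scenes that fit, then restore the running order."""
    ranked = sorted(enumerate(scenes), key=lambda p: -p[1]["importance"])
    kept, total = [], 0
    for idx, scene in ranked:
        if total + scene["duration"] <= available_secs:
            kept.append(idx)
            total += scene["duration"]
    return [scenes[i] for i in sorted(kept)]
```

A production-quality system would also have to respect narrative dependencies between scenes (a scene may only make sense if an earlier one is kept), which is exactly why the structural understanding described above is needed.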
Improving forum comment moderation tools [Supervisor: Dr Gareth Tyson]. The BBC provides many forums which allow the public to comment on news stories and BBC content, covering an extremely broad range of topics. Most of these forums are reactively moderated, meaning that comments are only checked by moderators when a filter indicates a problem or when forum users report an issue. This project will explore whether Artificial Intelligence techniques could provide improved tools to support moderators, as discussed in a recent Ofcom report. The goal would be to identify issues such as bullying, racism and misogyny, in addition to more obvious cases of abuse. Moderators are also keen to identify inappropriate disclosures, mental health issues and safeguarding concerns. Reactive moderation poses a particular challenge for a supervised learning approach because the data has a significant false-negative rate: harmful comments that were never reported appear in the data as acceptable ones. State-of-the-art algorithms do not appear to address this issue.
Media engineering for hearing-impaired audiences [Supervisor: Prof Josh Reiss]. This research will explore ways in which media content can be automatically processed so that it is delivered optimally to audiences with hearing loss. It builds on prior work by the collaborator, the BBC, on effective audio mixing techniques for broadcast audio enhancement. It will develop a deeper understanding of the effects of hearing loss on the perception and enjoyment of media content, and apply this knowledge to the development of intelligent audio production techniques and applications that could improve audio quality by providing efficient and customisable compensation. It aims to advance beyond current research, which does not yet fully take into account the artistic intent of the material and requires an ‘ideal mix’ for normal-hearing listeners. A new approach is therefore required that both removes these constraints and focuses more on the meaning of the content. This approach will draw on natural language processing and audio informatics to prioritise sources and establish requirements for the preferred mix.
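A minimal sketch of the kind of compensation such a system might apply, under the assumption that content arrives as separate stems (as in object-based broadcasting; broadcast audio is often delivered pre-mixed, so this is an assumption, not the project's stated method):

```python
def remix(stems, gains_db):
    """Mix named audio stems (equal-length lists of samples) after
    applying a per-stem gain in decibels, e.g. boosting dialogue and
    attenuating background for a hearing-impaired listener.
    Illustrative only: stem names and gain values are assumptions."""
    gains = {name: 10 ** (db / 20) for name, db in gains_db.items()}
    length = len(next(iter(stems.values())))
    return [sum(stems[name][i] * gains.get(name, 1.0) for name in stems)
            for i in range(length)]
```

For example, `remix(stems, {"dialogue": 6, "music": -6})` boosts speech intelligibility at the expense of the background; the project's harder question is how to choose such gains automatically without violating the artistic intent of the original mix.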
Understanding audiences in social networks [Supervisor: Dr Ignacio Castro]. This PhD project will focus on developing Data Science and Machine Learning techniques to better understand the role that specific audiences (e.g. forums or social networks) have on the wider global Internet audience. It will, for example, focus on understanding the ecosystem effects of how social media users drive access to third-party content, and will involve correlating these behaviours with the activities of social platforms. Our social network datasets currently include both mainstream social networks (e.g. Twitter) and fringe communities (e.g. Mastodon). The PhD project will study the dynamics of content popularity within and across these datasets to identify the drivers of popularity, the spread of specific types of content (e.g. content related to online harms) within and across social networks and clusters of users within them, and the impact of these dynamics on general Internet popularity.
Predicting demographics, personalities, and global values from digital media behaviours [Supervisor: Dr Charalampos Saitis]. Digital media streaming platforms, including radio and podcasts, TV and video, offer access to behavioural signals that can be used to learn about the characteristics and preferences of individuals. The goal of the proposed research is to leverage the power of trails of digital media behaviours (most frequently listened-to genres/artists, how often one listens to new music/artist suggestions, playlists, etc.). Such knowledge can then be employed to build (i) predictive models of complex high-level psychological constructs, such as moral and human values, as well as complex demographic attributes; and (ii) multimodal personalised recommendation systems. The data-informed models developed in this project will help unlock the potential of personalised, uniquely tailored user experiences, recommendation systems and communication strategies in digital media – a strategic focus area of the UKRI-EPSRC portfolio – with applications in the creative industries and healthcare in particular (e.g. understanding well-being from musical choices). This project is in collaboration with Dr Kyriaki Kalimeri and the Data Science Lab at ISI Foundation (Turin, Italy; https://www.isi.it/en/home).
Protecting audiences from online harms on social media [Supervisor: Dr Gareth Tyson]. Content moderation for online audiences has become a major challenge for large social media organisations, such as Facebook, YouTube and Twitter. The ability for users to upload and contribute content (e.g. text, videos, audio) means that these platforms can be used for various types of harmful or even illegal activities, e.g. hate speech, cyberbullying and the sharing of illegal content. Because of this, huge content moderation teams are employed to monitor content uploads and check whether they should be removed. This PhD project will consist of two major themes: (1) collecting and analysing large-scale datasets that reveal how web platforms currently perform content moderation, to evaluate their successes and limitations; and (2) building on these datasets to develop new Machine Learning techniques for automated content moderation.
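The second theme can be illustrated with a minimal text classifier. The sketch below is a Naive Bayes toy with invented training examples, a stand-in for the far more capable techniques the project would develop, not a representation of any platform's actual moderation system:

```python
import math
from collections import Counter

class ToyModerationClassifier:
    """A minimal Naive Bayes comment classifier (illustrative only)."""

    def fit(self, texts, labels):
        self.counts = {label: Counter() for label in set(labels)}
        self.totals = Counter(labels)
        for text, label in zip(texts, labels):
            self.counts[label].update(text.lower().split())
        self.vocab = {w for c in self.counts.values() for w in c}
        return self

    def predict(self, text):
        def score(label):
            logp = math.log(self.totals[label])
            n = sum(self.counts[label].values())
            for w in text.lower().split():
                # Laplace smoothing so unseen words don't zero out a class
                logp += math.log((self.counts[label][w] + 1)
                                 / (n + len(self.vocab)))
            return logp
        return max(self.counts, key=score)
```

Real moderation models must cope with multilingual text, sarcasm, images and video, and adversarial users, which is where the project's research questions lie.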
Intelligent systems for radio drama production [Supervisor: Prof Josh Reiss]. This research topic proposes methods for assisting a human creator in producing radio dramas. Radio drama consists of both literary aspects, such as plot, story characters and environments, and production aspects, such as speech, music and sound effects. This project builds on a recent, high-impact collaboration with the BBC to greatly advance the understanding of radio drama production, with the goal of devising and assessing intelligent technologies to aid its creation. The project will first investigate rules-based systems for generating production scripts from story outlines, and for producing draft content from such scripts. It will consider existing workflows for content production and where such approaches rely on heavy manual labour. Evaluation will be carried out with expert content producers, with the goal of creating new technologies that streamline workflows and facilitate the creative process.
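A toy illustration of the rules-based direction described above, mapping story-outline keywords to draft production cues (the keyword-to-cue table is invented for illustration; real systems would need far richer linguistic analysis):

```python
# Hypothetical keyword-to-cue rules; a real system would derive these
# from linguistic analysis of the outline, not a hand-written table.
SFX_RULES = {
    "rain": "SFX: rain on window",
    "door": "SFX: door creaks open",
    "car": "SFX: engine starts",
}

def draft_cues(outline):
    """Scan each sentence of a story outline and emit draft sound-effect
    cues for a production script (illustrative rules-based sketch)."""
    cues = []
    for sentence in outline.lower().split("."):
        for keyword, cue in SFX_RULES.items():
            if keyword in sentence:
                cues.append(cue)
    return cues
```

Even this trivial pass hints at how a draft production script could be generated from an outline and then refined by the human creator, which is the workflow the project aims to streamline.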
Smartphone audio-visual sensing networks [Supervisor: Dr Lin Wang]. With the prevalence of personal devices such as smartphones and laptops, many people often record the same social event (e.g. a public talk or music concert) with their own equipment. However, due to an undesirable field of view, environmental noise and room reverberation, the audio and visual recordings made by individual devices are usually of poor quality. The microphones and cameras embedded in multiple devices can, however, be used to construct an ad-hoc audio-visual sensing network. The project aims to develop novel audio-visual signal processing and machine learning algorithms that exploit recordings from multiple smartphones to improve sound quality and generate desirable audio and video content: 1) enhanced audio content generation – exploiting the spatial information captured by distributed smartphones for acoustic scene analysis and target speech extraction in noisy and adverse environments, and rendering spatialised audio that gives listeners an immersive perception; 2) joint audio-visual content generation – spatially and temporally synchronising the audio and visual information captured by distributed smartphones and rendering enjoyable multi-view, immersive multimedia presentations.
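The temporal-synchronisation step can be sketched as a cross-correlation search for the lag that best aligns two devices' audio. This is a minimal sketch under idealised assumptions; real recordings would additionally need resampling, clock-drift compensation and robust features:

```python
def best_offset(ref, rec, max_lag):
    """Return the lag (in samples) that maximises the cross-correlation
    between a reference recording and another device's recording.
    A positive result means rec is delayed relative to ref."""
    def xcorr(lag):
        return sum(ref[i] * rec[i + lag]
                   for i in range(len(ref)) if 0 <= i + lag < len(rec))
    return max(range(-max_lag, max_lag + 1), key=xcorr)
```

Once every device's offset against a common reference is known, their audio and video streams can be aligned on a shared timeline before enhancement or multi-view rendering.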
Claim detection for fact-checking [Supervisor: Dr Arkaitz Zubiaga]. Fact-checking is an increasingly important process for preventing the diffusion of inaccurate claims by politicians and other influential people, as well as for keeping society informed on social issues including politics, the economy and health. Given the volume of data to monitor across sources, one of the key challenges for fact-checkers is the identification and prioritisation of claims to be checked. This PhD project will focus on developing automated methods, based on natural language processing, to detect and prioritise these claims from multiple sources.
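As a trivially simple baseline for the claim-detection step (a heuristic sketch, not the NLP methods the project would actually develop), one might flag sentences containing numeric or comparative cues, which often signal verifiable factual claims:

```python
import re

# Illustrative cue list, invented for this sketch: numbers, percentages
# and comparative language often mark check-worthy factual claims.
CLAIM_CUES = re.compile(
    r"\b(\d[\d,.%]*|percent|increase|decrease|more than|less than"
    r"|doubled|halved)\b", re.IGNORECASE)

def is_check_worthy(sentence):
    """Return True if the sentence contains a surface cue suggesting a
    verifiable claim worth prioritising for fact-checking."""
    return bool(CLAIM_CUES.search(sentence))
```

Such surface cues miss implicit claims and misfire on rhetorical numbers, which is precisely the gap that learned claim-detection models aim to close.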
Understanding the dynamics of live audiences [Supervisor: Prof Pat Healey]. A key part of the lived experience is the social interaction with and between audience members. These interactions are mediated by a rich variety of behaviours: cheers, jeers, booing, laughing, coughing and fidgeting. One interesting area for research is how we can sense these behaviours and what they tell us about audience engagement, ‘hotspots’ and the concept of liveness. A second is how these behaviours can be reproduced in remote interactions – how do we recreate the sense of ‘being there’ in virtual or mediated live interactions?
Vibrotactile haptic feedback for fully immersive media content [Supervisor: Dr Lorenzo Jamone]. Think about watching MasterChef while comfortably sitting on your sofa and being able to “touch” and “feel” the ingredients Gordon Ramsay is using; think about feeling the exact thickness of that chocolate mousse as Gordon Ramsay stirs it and explains how it should be done. Sweet! This kind of “Tactile TV” is the media of the future. Extending preliminary work of the CRISP team at QMUL in the context of robotic teleoperation and Virtual Reality, the student will develop solutions to provide users with vibrotactile feedback conveying information about the stiffness and texture of remote objects and environments, using affordable wearable wireless technologies.