UCL Department of Security and Crime Science

Project proposals

As part of your application to the EPSRC CDT in Cyber-Physical Risk, you will have the opportunity to select a cutting-edge research project that aligns with your interests and expertise.

Below are project outlines from leading academics in UCL Security and Crime Science, UCL Computer Science, and other key departments. These projects tackle critical challenges in cyber-physical security and crime prevention, including AI-driven risks, extremist content’s role in offline violence, and securing smart homes and industrial networks.

Unless you are applying for the AlbionVC-supported open-topic pathway [more details coming soon], you must select one of the projects shortlisted by the CDT research committee.

Before choosing a project, we encourage you to explore the authors' research and contact them directly to discuss your interest and suitability for the project. Further details on this process are provided in the application guidelines.

Each proposal below offers a unique opportunity to work on high-impact challenges with leading experts. Browse the list below, and if you have any questions, contact us at scs-admissions@ucl.ac.uk.

Social media, crime and the environment 

Environmental policies and legislation are emerging as key sources of political division, often sparking anti-social behaviour and criminal acts, such as the destruction of ULEZ cameras, vandalism of museum paintings, and conflicts between drivers and cyclists. This project investigates how digital communication tools, particularly social media platforms, may facilitate, enable, and legitimise environmentally driven crime and anti-social behaviour. The research will analyse social media content to provide insights into these dynamics and develop strategies to prevent and mitigate the risks associated with digital facilitation of crime and disruptive actions.

Cyber-physical security in Vision-Language-Action Models for autonomous systems 

Vision-Language-Action Models (VLAMs) are transforming autonomous systems, such as self-driving cars and robotic manipulators, by integrating multimodal decision-making. However, their dependence on visual and linguistic inputs makes them vulnerable to hybrid cyber-physical attacks, posing significant risks to safety-critical applications. This research will identify these vulnerabilities by exploring adversarial attacks on VLAMs and developing robust defence mechanisms. By designing hybrid attack scenarios and creating defence strategies, the project aims to enhance the security and resilience of AI-driven transportation and automation systems.
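
As a purely illustrative example of the kind of attack surface this project studies, the sketch below applies a fast gradient sign method (FGSM) perturbation to the visual input of a stand-in image classifier. The use of an untrained ResNet in place of a full vision-language-action pipeline, and the perturbation budget, are assumptions made only to keep the example self-contained.

```python
# Minimal FGSM sketch: perturb the visual input of a stand-in model.
# An untrained ResNet is used purely as a placeholder for the vision
# component of a VLAM; the attack idea carries over.
import torch
import torch.nn.functional as F
from torchvision import models

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (N, C, H, W)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, clamped to valid range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = models.resnet18(weights=None)   # untrained placeholder model
    model.eval()
    x = torch.rand(1, 3, 224, 224)          # dummy image batch
    y = torch.tensor([0])                   # dummy label
    x_adv = fgsm_attack(model, x, y)
    print("max pixel change:", (x_adv - x).abs().max().item())
```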

Enhancing crime detection and investigation in the Internet of Things era

The rise of IoT-enabled crimes poses new challenges for cybersecurity and law enforcement. This project focuses on two key threats: IoT-dependent crimes, where devices are directly targeted (e.g., ransomware, botnets, sabotage), and IoT-facilitated crimes, where vulnerabilities enable traditional offenses (e.g., cyberstalking, identity theft). Students will develop expertise in IoT security, digital forensics, and crime prevention strategies, contributing to new forensic tools, vulnerability databases, and crime simulation testbeds. They will also explore legal and ethical considerations of digital evidence, gaining hands-on experience to address complex security challenges at the intersection of technology and law enforcement. 

Transfer learning for threat detection and mitigation in cybersecurity 

Cyberattacks, including zero-day exploits and advanced persistent threats (APTs), pose increasing challenges to traditional security measures. This project will develop advanced machine learning solutions, focusing on transfer learning, to improve real-time threat detection, prediction, and mitigation in cybersecurity. Applications include intrusion detection systems (IDS), malware analysis, and proactive response frameworks to protect critical infrastructure. The research will bridge theoretical machine learning and practical cybersecurity challenges, enhancing digital resilience against emerging threats such as zero-day attacks and facilitating knowledge transfer across different attack scenarios. The goal is to develop scalable, adaptable AI solutions for real-time cybersecurity defence.

  • Supervisor: Prof Benjamin Guedj (UCL Computer Science, Centre for Artificial Intelligence) and a secondary supervisor (to be confirmed) 
  • Research themes: Cyber-Physical Systems 
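
To give a flavour of the transfer-learning idea behind this project, the sketch below pre-trains a small classifier on a large synthetic "source" dataset and fine-tunes only its final layer on a much smaller "target" dataset standing in for a new attack scenario. The synthetic data, the architecture, and the layer-freezing strategy are illustrative assumptions rather than a prescribed design.

```python
# Hedged sketch of transfer learning for intrusion detection:
# pre-train on a "source" traffic dataset, then fine-tune the last
# layer on a small "target" dataset representing a new attack type.
# All data here is synthetic; real work would use flow features, etc.
import torch
import torch.nn as nn

def make_data(n, n_features=20, seed=0):
    g = torch.Generator().manual_seed(seed)
    X = torch.randn(n, n_features, generator=g)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()   # toy decision rule
    return X, y

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

def train(model, X, y, params, epochs=50, lr=1e-2):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

# Pre-train all parameters on the large source dataset.
X_src, y_src = make_data(2000, seed=1)
train(model, X_src, y_src, model.parameters())

# Freeze the feature extractor; fine-tune only the output layer
# on the small target dataset (the "new attack scenario").
for p in model[0].parameters():
    p.requires_grad = False
X_tgt, y_tgt = make_data(100, seed=2)
train(model, X_tgt, y_tgt, model[2].parameters(), epochs=30)

acc = (model(X_tgt).argmax(1) == y_tgt).float().mean().item()
print(f"target accuracy after fine-tuning: {acc:.2f}")
```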

Risk assessment and mitigation of threats to AI-enabled devices in cyber-physical-social systems

AI and Machine Learning (ML) are increasingly used in automated decision-making and security, such as in smart homes and threat detection. However, these technologies create new vulnerabilities, exposing AI devices to “adversarial attacks” that threaten security and privacy. While AI security has been studied in digital environments, its impact on cyber-physical-social systems (CPSS), including smart homes and industrial IoT, remains underexplored. This project aims to develop risk assessment methods for AI-enabled devices in CPSS, providing insights for safer AI-driven solutions and supporting AI policymakers and regulators. 

Investigating digital supply chain attacks in digital twins and developing solutions 

Digital twins are increasingly integrated into critical sectors like utilities, energy, and transportation. They rely on complex systems like AI-driven analytics and industrial control technologies, making them vulnerable to cyberattacks that could cause physical harm. These attacks can disrupt operations, cause safety hazards, and result in economic losses. This research aims to investigate supply chain vulnerabilities and propose solutions to secure digital twins, ensuring their resilience in sectors that are critical for daily life.

Talk is cheap? Assessing how extremist content online can promote violence offline to identify countermeasures 

Extremist content online can incite offline violence targeting minority groups and government officials. Despite various countermeasures, evidence on how online exposure leads to offline violence is limited. This research will explore how such content affects real-world violence, using an interdisciplinary approach drawing on the social and computer sciences and analysing data from social media platforms and open sources. Computational linguistic tools such as GPT will be applied, alongside human-participant experiments, to investigate dynamics not easily observable in text. The research will culminate in an agent-based model to simulate how online and offline interventions reduce the risk of violence. 
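
As a minimal illustration of the agent-based modelling step, the toy sketch below tracks agents whose propensity for violence rises with exposure to extremist content, and rises more slowly when a counter-messaging intervention is active. The update rule and all parameters are invented for illustration; they are not empirical estimates.

```python
# Toy agent-based sketch: exposure to extremist content nudges an
# agent's "risk" score upward; an intervention dampens the effect.
# Parameters and update rule are illustrative assumptions only.
import random

class Agent:
    def __init__(self):
        self.risk = random.uniform(0.0, 0.2)    # baseline propensity

    def step(self, exposure_rate, intervention):
        if random.random() < exposure_rate:     # sees extremist content
            self.risk += 0.05 * (0.3 if intervention else 1.0)
        self.risk = max(0.0, min(1.0, self.risk - 0.01))   # slow decay

def run(intervention, n_agents=500, steps=100, threshold=0.6, seed=42):
    random.seed(seed)
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(steps):
        for a in agents:
            a.step(exposure_rate=0.2, intervention=intervention)
    return sum(a.risk > threshold for a in agents)

print("high-risk agents without intervention:", run(False))
print("high-risk agents with intervention:   ", run(True))
```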

Computational threat assessments: the relationship between online threats and real-world action 

Governments and law enforcement agencies rely on effective threat assessment tools to address terrorism, mass shootings, and violence, including predicting whether online threats will escalate into real-world actions. This project aims to advance AI and natural language processing (NLP) techniques, particularly in sentiment analysis and anomaly detection, to improve predictive models for threat assessment. The research includes collecting and cleaning data from various sources, including threats to industry organisations, the Royal Family, and Members of Parliament. It will also address the ethical use of these tools to prevent misuse or discrimination.
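
One plausible building block for such models, sketched below under strong simplifying assumptions, is to combine TF-IDF features over short messages with an isolation forest that flags linguistically unusual items for human review. The tiny invented corpus and the choice of anomaly detector are purely illustrative; the project would evaluate far richer NLP approaches.

```python
# Hedged sketch: flag unusual messages with TF-IDF features and an
# isolation forest, as one possible anomaly-detection building block.
# The corpus is invented and trivially small, purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import IsolationForest

messages = [
    "thanks for the update on the meeting",
    "please send the report by friday",
    "can we reschedule the call to monday",
    "great work on the presentation today",
    "you will regret this, I know where you work",
]

X = TfidfVectorizer().fit_transform(messages)
detector = IsolationForest(contamination=0.2, random_state=0)
labels = detector.fit_predict(X.toarray())   # -1 marks an outlier

for msg, label in zip(messages, labels):
    flag = "REVIEW" if label == -1 else "ok"
    print(f"[{flag}] {msg}")
```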

Attack and defence of cyber-physical systems relying on multimodal foundational models

Multimodal foundational models that integrate data from text, images, audio, and video have revolutionised various applications, including autonomous vehicles and healthcare systems. However, these models are increasingly vulnerable to sophisticated cyber-attacks that can extend into the physical world, posing risks to critical systems. This project seeks to identify vulnerabilities, develop defence mechanisms, and analyse the impact of virtual attacks on real-world systems such as robotics and other cyber-physical systems. In doing so, it aims to enhance the security and robustness of multimodal foundational models, addressing the growing concern of cyber-physical risks in critical applications. 
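
A simple defensive heuristic relevant to this project is sketched below: test whether a model's prediction is stable under small random perturbations of its visual input, since adversarial examples often flip more readily than benign ones. The untrained classifier standing in for the vision branch, the noise level, and the number of trials are all illustrative choices.

```python
# Hedged sketch of a stability check: predictions that flip under
# small random input perturbations are treated as suspicious.
# The untrained classifier here is only a stand-in for the vision
# branch of a multimodal model.
import torch
from torchvision import models

def prediction_stability(model, image, n_trials=20, noise_std=0.02):
    """Fraction of noisy copies whose predicted class matches the original."""
    with torch.no_grad():
        base = model(image).argmax(1)
        matches = 0
        for _ in range(n_trials):
            noisy = (image + noise_std * torch.randn_like(image)).clamp(0, 1)
            matches += int(model(noisy).argmax(1).eq(base).item())
    return matches / n_trials

model = models.resnet18(weights=None)
model.eval()
x = torch.rand(1, 3, 224, 224)
score = prediction_stability(model, x)
print("stability score (low values are suspicious):", score)
```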

Securing cyber-physical systems against cyber-attacks: A hybrid network modelling approach

Cyber-physical systems, which integrate digital networks with physical processes, are vital to modern infrastructure. However, their connectivity makes them vulnerable to cyber-attacks that can disrupt critical services. This research will study vulnerabilities in cyber-physical systems and develop defence strategies using advanced modelling techniques. It will involve mathematical models, hybrid network structures, and the use of machine learning, complex network theory, and game theory to simulate attacker-defender interactions, test security strategies, and optimise resource allocation to improve system resilience and security. 
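
The snippet below shows the game-theoretic ingredient in its simplest form: a toy zero-sum game in which a defender protects one of several network assets, the attacker best-responds, and the defender picks the minimax pure strategy. The assets and payoff numbers are invented; real work would use richer hybrid network models and mixed strategies.

```python
# Minimal attacker-defender sketch on a toy payoff matrix.
# Rows: asset the defender protects; columns: asset the attacker hits.
# Entries: damage to the defender (invented numbers for illustration).
import numpy as np

assets = ["substation", "control server", "field sensor"]
damage = np.array([
    [1.0, 8.0, 3.0],   # defender protects substation
    [6.0, 1.0, 3.0],   # defender protects control server
    [6.0, 8.0, 0.5],   # defender protects field sensor
])

# Attacker best-responds to each defensive choice (maximises damage);
# the defender then picks the choice with the smallest worst case.
worst_case = damage.max(axis=1)
best_defence = int(worst_case.argmin())

print("worst-case damage per defensive choice:", dict(zip(assets, worst_case)))
print("minimax pure-strategy defence:", assets[best_defence])
```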

Protecting industrial control networks by disrupting reconnaissance through traffic-analysis resistance techniques 

Industrial control networks are essential to national infrastructure, managing critical systems in energy, manufacturing, and transport. However, these networks are vulnerable to cyber-attacks, as many rely on legacy technologies that lack modern security protections. Attackers exploit these weaknesses through reconnaissance, identifying key targets even when network traffic is encrypted. Disrupting this early attack stage is crucial to preventing cyber-physical threats. This project will apply traffic-analysis resistance techniques to enhance the security of industrial control networks, safeguarding critical infrastructure from emerging cyber threats. 

  • Supervisor: Prof Steven Murdoch (UCL Computer Science). The secondary supervisor is to be confirmed.
  • Research themes: Cyber-Physical Systems (main theme) and Online Communication (secondary theme)
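
To give a flavour of traffic-analysis resistance, the sketch below pads every payload to a fixed cell size and marks dummy cells for constant-rate cover traffic, so that packet sizes reveal less about which industrial process is communicating. The cell size and framing format are illustrative assumptions, not a proposed protocol.

```python
# Hedged sketch of two classic traffic-analysis countermeasures:
# (1) pad every payload to a fixed cell size, (2) send cover traffic
# so real and dummy packets look alike on the wire.
# Cell size, rate, and framing are illustrative choices only.
import os

CELL_SIZE = 512   # bytes on the wire for every packet, real or dummy

def make_cell(payload: bytes = b"") -> bytes:
    """Frame a payload into a fixed-size cell: 1 type byte + 2 length bytes."""
    if len(payload) > CELL_SIZE - 3:
        raise ValueError("payload too large for one cell")
    kind = b"\x01" if payload else b"\x00"            # 0x00 marks a dummy cell
    header = kind + len(payload).to_bytes(2, "big")
    padding = os.urandom(CELL_SIZE - 3 - len(payload))
    return header + payload + padding

def parse_cell(cell: bytes) -> bytes | None:
    """Return the real payload, or None if the cell was cover traffic."""
    if cell[0] == 0x00:
        return None
    length = int.from_bytes(cell[1:3], "big")
    return cell[3:3 + length]

real = make_cell(b"READ holding_register 40001")
dummy = make_cell()
assert len(real) == len(dummy) == CELL_SIZE           # indistinguishable sizes
print(parse_cell(real), parse_cell(dummy))
```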

RF awareness and fingerprinting for cyber-physical security (RF-FD)

This project explores RF fingerprinting to detect and classify radio frequency (RF) signals linked to criminal and terrorist activities. Unlike traditional cyber detection methods, RF signatures cannot be easily spoofed, making them a valuable tool for cyber-physical security. The research also addresses threats to critical national infrastructure (CNI), detecting unauthorised IoT devices that could disrupt operations. The project involves modelling RF signals, conducting real-world experiments, and applying machine learning to analyse captured data.
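
As an illustration of the machine-learning stage, the sketch below synthesises IQ bursts for two simulated transmitters with slightly different gain and phase impairments, extracts a few coarse statistics, and trains an off-the-shelf classifier to tell the devices apart. The signal model and feature set are assumptions standing in for real captured data.

```python
# Hedged sketch of RF fingerprinting on synthetic IQ bursts: each
# "device" is simulated with a slightly different gain and phase
# imbalance, simple features are extracted, and a random forest
# learns to tell the devices apart. Real work uses captured signals.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def burst(gain, phase_err, n=1024):
    """Synthesise one IQ burst with device-specific impairments plus noise."""
    symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n)
    iq = gain * symbols * np.exp(1j * phase_err)
    return iq + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def features(iq):
    """A few coarse statistics of the burst (placeholder feature set)."""
    amp = np.abs(iq)
    return [amp.mean(), amp.std(), np.angle(iq).std(), (amp ** 2).mean()]

devices = {0: (1.00, 0.02), 1: (1.05, -0.03)}   # (gain, phase error) per device
X = [features(burst(*devices[d])) for d in (0, 1) for _ in range(200)]
y = [d for d in (0, 1) for _ in range(200)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```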

Supporting preparedness and response to cyber-attacks in hospitals 

With cyber-attacks on healthcare systems rising, this project investigates how hospitals can prepare for and respond to digital disruptions. It will identify the impact of cyber incidents on hospital operations, explore network effects when multiple hospitals are affected, and assess mitigation strategies such as manual record-keeping and patient prioritisation. The research will involve developing simulation-optimisation algorithms to test response strategies and support real-world decision-making.
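
A highly simplified version of such a simulation appears below: patients arrive at a constant rate, a cyber incident temporarily reduces treatment capacity (for example, by forcing manual record-keeping), and the peak queue is compared across mitigation settings. All rates and durations are invented for illustration.

```python
# Toy discrete-time sketch of a hospital under a cyber incident:
# during the outage, treatment capacity drops (manual record-keeping),
# and we track how the waiting queue grows. All numbers are invented.
def simulate(outage_start, outage_end, slowdown, hours=48,
             arrivals_per_hour=10, base_capacity=12):
    queue, max_queue = 0, 0
    for hour in range(hours):
        in_outage = outage_start <= hour < outage_end
        capacity = int(base_capacity * (slowdown if in_outage else 1.0))
        queue = max(0, queue + arrivals_per_hour - capacity)
        max_queue = max(max_queue, queue)
    return max_queue

# Compare an unmitigated outage with one where manual procedures
# retain more of the normal throughput.
print("peak queue, no mitigation:  ", simulate(8, 20, slowdown=0.3))
print("peak queue, with mitigation:", simulate(8, 20, slowdown=0.7))
```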

Systemic risk in the Internet of Things

This is a multi-faceted project that seeks first to gain a better understanding of the systemic risks in the IoT ecosystem, and then to identify the best ways of addressing them. This will involve looking at the technical aspects of IoT devices and protocols, their security policies and features, and how these interact with the wider environment and the incentives of various stakeholders.

You will use a number of approaches throughout the project, including modelling and simulation to explore risk, as well as studies (such as interviews, surveys, or workshops) with stakeholders to understand their roles.
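
As a small example of the modelling-and-simulation strand, the sketch below seeds a compromise on a random graph of IoT devices and lets it spread along connections with a fixed probability, so the systemic effect of a single vulnerable device can be explored. The network model and spread probabilities are illustrative assumptions.

```python
# Hedged sketch of systemic risk as contagion on a random device graph:
# one device is compromised and the compromise spreads along edges with
# a fixed probability. Graph size and probabilities are illustrative.
import random
import networkx as nx

def outbreak_size(n_devices=200, avg_degree=4, p_spread=0.3, seed=1):
    random.seed(seed)
    g = nx.erdos_renyi_graph(n_devices, avg_degree / n_devices, seed=seed)
    compromised = {0}                       # initially compromised device
    frontier = [0]
    while frontier:
        node = frontier.pop()
        for neighbour in g.neighbors(node):
            if neighbour not in compromised and random.random() < p_spread:
                compromised.add(neighbour)
                frontier.append(neighbour)
    return len(compromised)

for p in (0.1, 0.3, 0.5):
    print(f"spread probability {p}: "
          f"{outbreak_size(p_spread=p)} of 200 devices compromised")
```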

User agents and simulations of the physical world to predict cyber-physical risks

In this PhD project, we propose to develop LLM-based simulators that can be used to detect the possible attacks a cyber-physical system may face before the system is deployed in the real world. We propose two types of simulator. The first will simulate a typical environment in which the system will be used, together with how a real user would interact with the system in that environment, and detect whether any cyber-physical risks arise when a regular user operates the system under normal conditions.

We will then develop adversarial simulators, whose goal is to detect any possible risks arising from adversarial attacks.
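
A skeletal version of the first type of simulator might look like the sketch below: a simple smart-lock environment, a simulated user whose next action would be chosen by an LLM, and a checker that scans the resulting trace for risky states. The query_llm function is a hypothetical placeholder for whatever model API the project adopts; here it returns canned actions so the loop runs on its own.

```python
# Skeletal sketch of an LLM-driven user simulator for a smart-lock
# system. `query_llm` is a hypothetical placeholder; here it returns
# canned actions so the loop runs without any external model.
def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call that picks the user's next action."""
    canned = ["unlock_door", "share_guest_code", "leave_house", "stop"]
    return canned[min(prompt.count("\n"), len(canned) - 1)]

def simulate_user(max_steps=10):
    state = {"door_locked": True, "guest_code_shared": False, "user_home": True}
    trace, prompt = [], "You are a typical smart-lock user. Choose an action."
    for _ in range(max_steps):
        action = query_llm(prompt)
        if action == "stop":
            break
        if action == "unlock_door":
            state["door_locked"] = False
        elif action == "share_guest_code":
            state["guest_code_shared"] = True
        elif action == "leave_house":
            state["user_home"] = False
        trace.append((action, dict(state)))
        prompt += f"\nPrevious action: {action}"
    return trace

def find_risks(trace):
    """Flag states where the house is unattended yet effectively open."""
    return [(a, s) for a, s in trace
            if not s["user_home"] and (not s["door_locked"] or s["guest_code_shared"])]

print(find_risks(simulate_user()))
```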