
UCL Cybersecurity CDT


PhD student projects

Find out about current research projects at the Centre for Doctoral Training in Cybersecurity.

Examining politically motivated cyber-attackers

My research focuses on politically motivated cyber-attackers, integrating criminology, political science, and computer science. I am investigating the application of neutralisation theory to cybercrime. In particular, I am interested in understanding the similarities and differences between far-left and far-right political hackers.

PhD research student: Sara Rubini

Supervisor/s: Prof. Paul Gill and Dr. Brian Klaas

Measuring Peer Surveillance and Its Impact on Privacy Behaviors in Online LGBTQ+ Communities

Online communities are important to the lives of LGBTQ+ people. In these digital spaces, LGBTQ+ people increasingly disclose personal information as they receive community support and connect with others. However, online communities are not always private, making it difficult for people to manage the audiences that see their personal information. It is equally difficult for people to discern how different audiences will perceive their posts. While it is known that LGBTQ+ people face digital harm from outside their communities, intracommunity conflicts remain understudied. As a result, online LGBTQ+ communities are vulnerable to privacy and safety risks arising from intracommunity peer surveillance. This thesis addresses this gap by measuring intracommunity peer surveillance in online LGBTQ+ communities and investigating its effect on people's digital and offline behaviour.

PhD research student: Kyle Beadle

Supervisor/s: Dr. Marie Vasek, Dr. Mark Warner, and Dr. Leonie Tanczer.

Disinformation Campaigns in Small Countries

Disinformation is an emerging global issue that threatens democracy and societal stability. It often uses narratives that align with pre-existing beliefs shaped by cultural, historical, and political contexts and national identities. My research seeks to gain an in-depth understanding of these contexts in smaller, more vulnerable countries. The primary aim is to understand the impact of disinformation campaigns on public perception, decision-making, and behaviour in these nations and to identify how different cultural and historical contexts are used. Additionally, longitudinal studies will be conducted to observe the evolution of disinformation narratives over time, with a focus on how international and domestic events influence these narratives, using computational methods such as social media analysis. Ultimately, my research will develop and test possible strategies to counter disinformation in smaller countries. The expected contributions of this research include a deeper theoretical understanding of how disinformation operates in diverse cultural and political settings, as well as practical strategies for mitigating its effects. This research aims to provide valuable insights for policymakers, media organisations, and civil society in their efforts to combat disinformation.

PhD research student: Ganbat Ganbaatar

Supervisor/s: Dr. Tristan Caulfield and Dr. Enrico Mariconti

System Design for Reliable Dispute Resolution

In today's cybersecurity landscape, the pursuit of absolute security remains impossible, prompting a shift towards transparency-driven approaches. My research advocates a paradigm shift in software reliability, emphasising transparency to support legal processes. The legal process is essential to how transparent security enforces real-world consequences for malicious behaviour. Transparent security mechanisms aim to deter malicious actors by focusing on detecting rather than preventing misbehaviour. However, challenges persist in converting evidence of harm into tangible outcomes and in addressing human error. Drawing on principles of reversible programming, I propose redirecting efforts towards establishing robust mechanisms for proving the events that lead to outcomes. Through this work, I aim to foster greater accountability and trust in digital systems, advocating a transparency-focused approach to cybersecurity practice.

PhD research student: Jay Dwyer-Joyce

Supervisor/s: Prof. Steven Murdoch

Logical Foundations of Cybersecurity and its Methodologies

My research project is a first step towards a systematisation of the field and its methodologies, by means of a foundational investigation of key notions such as system and execution through the lens of logic and formal methods in general. In particular, I am interested in a novel semantical framework called 'base-extension semantics', as it provides a promising standpoint. For this reason, I also work in close contact with the Programming Principles, Logic and Verification Group in the CS department of UCL.

PhD research student: Gabriele Brancati Abate

Supervisor/s: Prof. David Pym

Metaverse Facilitated Child Sexual Abuse

This thesis examines how the metaverse and associated emerging technologies facilitate child sexual abuse. Integrating psychological and criminological perspectives, the research explores cyber-enabled offences, such as online grooming, and cyber-dependent crimes, including avatar-based abuse and AI-generated child sexual abuse material. It adopts an offence-focused prevention approach and considers how human factors inherent to CSA offences may be strategically disrupted to mitigate metaverse-CSA risks. The findings aim to inform targeted investigative and detection approaches, contributing directly to the development of effective safeguarding strategies. In brief, this thesis strives to provide essential insights for policymakers, technology developers, and child protection agencies to better protect minors from exploitation in digital and immersive settings.

PhD research student: Nichola Copson

Supervisor/s: Prof. Shane Johnson

Where to type: do the least to get the most

Type annotations offer several benefits, including early detection of type errors, improved code readability, and potential optimisation. In the past, developers were required to annotate all identifiers, which imposed a significant burden. More recently, languages like Python have introduced optional typing, giving developers the flexibility to decide whether and where to add annotations. This flexibility raises a new question: where should annotations be placed to maximise their benefit? We observe that annotations in certain locations can enable more types to be inferred. To take advantage of this, we construct a type inference graph to guide the annotation process. Our results show that, under the same annotation budget, using the graph for guidance yields twice as many inferred types as the original developer annotation order. Moreover, this approach can detect the same type errors with 30% fewer annotations.
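The intuition behind placing annotations where they propagate furthest can be illustrated with a minimal, hypothetical Python sketch (the function names are invented for illustration and are not from the project): a single annotation at a frequently-called function lets a type checker infer the types of many downstream values for free.

```python
# Hypothetical illustration: one well-placed annotation propagates.
# Annotating `parse` lets a checker infer types in `total` without
# any further annotations.

def parse(raw: str) -> int:
    # The single explicit annotation in this sketch.
    return int(raw)

def total(lines):
    # `values` is inferred as list[int] and the return type as int,
    # purely from the annotation on `parse`.
    values = [parse(line) for line in lines]
    return sum(values)

print(total(["1", "2", "3"]))  # → 6
```

Annotating `total` instead would constrain only its own interface, whereas annotating `parse` flows type information into every caller, which is the kind of asymmetry a type inference graph can capture.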

PhD research student: Yingbo Fu

Supervisor/s: Prof. Earl Barr and Dr. Ingolf Becker.

Humans in Cyber Incident Response Teams: Improving Effectiveness using Person-focused Research

Cyber incident response teams (CIRTs) are responsible for detecting, analysing, containing, mitigating, and recovering from occurrences that may threaten the confidentiality, integrity, or availability of information or an information system. This role can be demanding, as it requires many technical skills but also many non-technical competencies in order to maximise performance both as an individual and as part of a team. As such, research into how to maximise performance in CIRTs is crucial in order to keep up with evolving threats. However, current research tends to focus on technical issues, and frameworks often neglect factors affecting the human element of the team. I therefore aim to examine various elements of the working process of these CIRTs and whether they are designed with the human responder in mind. These may include individual factors, such as decision-making, situational awareness, and general stressors, and team factors, such as communication and collaboration.

PhD research student: Liberty Kent

Supervisor/s: Dr. Ingolf Becker and Dr Nilufer Tuptuk.

Enhancing Security and Robustness of AI/ML Systems in Robotics

As AI/ML technologies become integral to autonomous systems, they introduce vulnerabilities that adversaries can exploit, potentially compromising safety and functionality. The research aims to identify and address security threats in applications like autonomous path planning, computer vision, and reinforcement learning. The study includes simulating adversarial attacks, such as brute-force attacks on path-planning algorithms, to uncover weaknesses and environmental factors that may affect system integrity. By implementing these attacks on both simulated and real-world robotic platforms, the research seeks to understand the limitations of current AI/ML algorithms under adversarial conditions.

PhD research student: Adrian Szvoren

Supervisor/s: Dr Nilufer Tuptuk and Prof. Dimitrios Kanoulas.

Exploring African agency and dynamics in global cybersecurity governance

The concept of responsible state behaviour in cyberspace has received much critical interest within academic discourse, in particular with a focus on cyber diplomatic fora such as the United Nations' Group of Governmental Experts (GGE) and Open-ended Working Group (OEWG) processes. This focus, in concentrating on the tensions between global powers, often defaults to the East-West dichotomy prominent in international and strategic studies, thereby overlooking the agency of actors from the Global South – particularly those from the African continent. Given their distinct post-colonial socio-economic, political, and developmental contexts, this represents an analytical lacuna. Hence, this project focuses on the dynamics of African actors (state, private, and civil society) in the development of these normative frameworks in cyberspace. Adopting a multi-level (global, continental, regional, and domestic) approach, it considers where, when, and how these actors align or diverge in their stances in global cybersecurity fora, set against the backdrop of great power competition in cyberspace.

PhD research student: Chimdi Igwe

Supervisor/s: Prof. Madeline Carr and Dr Nilufer Tuptuk.

Cybercrime as a Service

Cybercrime as a Service (CaaS) refers to an economic model in which technically skilled actors package tools and frameworks into readily accessible services, providing everything a less experienced offender needs to carry out a successful cyberattack that would otherwise likely be beyond their reach. My thesis investigates the CaaS model and its surrounding ecosystem, how the model may develop in the future, and the specific challenges it creates for law enforcement and others seeking to prevent or disrupt cybercrime.

PhD research student: Ema Mauko

Supervisor/s: Dr Enrico Mariconti and Prof Shane Johnson


Watch Cohort 2 PhD students presenting their projects

www.youtube.com/watch?v=eGSG_gnKU3I

Henry Skeoch: "Cyber-insurance: what is the right price?"

www.youtube.com/watch?v=xgWumFjgQh0

Antonis Papasavva: "Detecting hate and analyzing narratives of online fringe communities"

www.youtube.com/watch?v=S6nBuS60cQI

Arianna Trozze: "Explaining prosecution outcomes for cryptocurrency-based financial crimes"