UCL Cybersecurity CDT


Project Proposals

These are some projects that have been nominated; supervisors would also entertain tweaked versions of these projects and ideas.

Autonomous Systems for Cybersecurity

1st supervisor: Professor Mirco Musolesi (personal website)

Project description: The supervisor is happy to discuss projects in the broad area of autonomous and semi-autonomous systems for cybersecurity, from different points of view, including potentially behavioural and socio-technical aspects, also through the involvement of supervisors from different disciplines. The project will be shaped according to the personal interests of the student.

Measuring cybersecurity behaviour

1st Supervisor: Dr Ingolf Becker (personal website)

Project description: A lot of cybersecurity behaviours are measured through surveys. Their questions and associated constructs (and often associated tools for measuring them) have been borrowed from a range of disciplines, including psychology and organisational behaviour, and 'transmogrified' into security versions and/or 'mashed up' with other versions. While perhaps valid in their original setting, the widespread re-use of these questions in different contexts raises questions of validity. This project would focus on a systematic analysis of existing measurement approaches to cybersecurity. Given the scale of textual data, the project will touch on NLP techniques as well as large data analysis pipelines.
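As a purely illustrative sketch of the kind of NLP technique such an analysis might start from, the snippet below compares survey items by TF-IDF cosine similarity to flag near-duplicate constructs borrowed across instruments. The items and the whitespace tokenisation are invented for the example; a real pipeline would use a proper NLP toolkit.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute simple TF-IDF vectors for a list of tokenised documents."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency per term
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors represented as dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical survey items, as if borrowed from different instruments
items = [
    "i update my passwords regularly".split(),
    "i regularly change my passwords".split(),
    "i share confidential files with colleagues".split(),
]
vecs = tfidf_vectors(items)
print(cosine(vecs[0], vecs[1]))  # near-duplicate items score noticeably higher
print(cosine(vecs[0], vecs[2]))  # unrelated items score near zero
```

At the scale of hundreds of published instruments, a pipeline of this shape (plus modern sentence embeddings) is one plausible way to map where constructs have been re-used or "transmogrified".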

Cybercriminal Risk Assessment and Management

1st supervisor: Professor Paul Gill (email)

Project description: Risk assessment and management are core components of policing, intelligence, judicial and clinical settings. Risk assessment involves determining the level of risk posed by an individual based on a consideration of available information. Risk management involves taking action to gain or maintain control of the risk posed. Although historically associated with judgements related to the risk of violence recidivism, risk assessment and management principles are increasingly applied to cybercrime. For example, the UK government recently introduced a national network of Prevent officers coordinated by the National Crime Agency’s National Cyber Crime Unit. The aim is to “deter individuals becoming involved (or continuing their involvement) in cybercrime.” (Home Office 2017:49). Since its establishment, there has been a “dramatic increase in referrals…for early intervention through the Prevent network” (Home Office 2017:49). The validated risk assessment instruments that do exist for other crime types are ill-fitted for emergent cybercrimes, which are undertaken by individuals with different sets of criminogenic needs and motivations, and whose actions are afforded by technological developments that open new opportunities for offending. The existing science is largely based on fairly rudimentary correlational research designs with either unrepresentative or ecologically invalid samples. This project will develop the evidence base for risk assessment and management in the cybersecurity domain through new (e.g. self-report data) and existing police data, improved methodologies (e.g. network analyses, SEM), and by layering insights from cybersecurity into the practice of risk assessment and management.

Evidence Critical Systems

1st supervisor: Professor Steven Murdoch (email)

Project description: An important requirement of some computer systems is to produce evidence that can be relied upon to resolve disputes. If such a system fails, by producing incorrect or confusing evidence, consequences can be severe with people losing money or even being imprisoned (e.g. in the Post Office Scandal). So far, there are no well-tested techniques for building such “evidence critical systems” so in this project we will investigate how to build computer systems that can produce evidence that would help fairly and efficiently resolve disputes, including through the legal system. Approaches that could be applied include cryptographic protection of important data, analysis of software to establish correctness, and usability evaluation to understand how evidence would be interpreted.

Gender and Technology

1st supervisor: Dr Leonie Maria Tanczer (email)

Project description: This is an open call for PhD projects that study the intersection points of gender and technology. Proposals can cover a range of issues, dependent on the interest/skill set of the relevant applicant. Topics can include, but are not limited to:

  • FemTech e.g., maternity or fertility technologies
  • Gender-sensitive examinations of topics e.g., open-source software/hardware communities
  • Motherhood and digital entrepreneurship 
  • Cyber/Xenofeminism
  • Feminist technology studies /Feminist theories of technology
  • Critical masculinity studies and tech culture
  • Recruitment and retention strategies of underrepresented groups in the tech sector 
  • Online harassment/technology-facilitated abuse

Successful applicants will be part of a vibrant and growing research community on gender and technology at UCL STEaPP, have the chance to work with the “Gender and IoT” lab, and will be actively involved in the research, teaching, and policy activities happening in and being facilitated throughout the department. Interested parties are strongly advised to familiarise themselves with the research background of the prospective supervisor and to discuss proposal ideas with them prior to handing in a submission. A strong interest in topics around gender and technology is a must for this opportunity, as are a very good academic track-record, and excellent verbal and written communication skills.

IoT Cybersecurity and the Limits of Liability Laws

1st supervisor: Dr Irina Brass (IRIS profile)
Project description: As the number of Internet-connected devices (IoT) continues to grow, the implications of IoT security vulnerabilities and their exploitation become more and more serious. IoT devices with poor security specifications raise several concerns, such as risks to physical safety and security, data breaches, identity theft, ransomware, and DDoS attacks. Unfortunately, current liability frameworks such as product liability laws remain unclear as to how responsibility over software-related harms is attributed. In this doctoral research project, the successful candidate will investigate the limits of liability laws in relation to current and emerging cybersecurity, privacy, and safety risks associated with smart, connected devices. You are expected to take an interdisciplinary approach across law, economics, public policy, and information security to investigate this topic. Ability to read legal documents and case law pertaining to software liability is highly desirable for this topic.  

Logic for policy models in security ecosystems

1st supervisor: Professor David Pym (personal website)

Project description: There is a famous quote from Grace Hopper: `Life was simple before World War II. After that, we had systems.' 
Now, systems are pervasive. They interact with and depend on each other and we, in turn, depend on them. It has become important to be able to think about not just a single system, but also its interactions with other systems - it has become necessary to think of ecosystems.
From the perspective of security, it is particularly significant that our ecosystems of concern are socio-technical, encompassing not only technical components, but also economic, human, and policy or regulatory aspects. 
It is of increasing importance to be able to reason rigorously about the design and behaviour of systems and ecosystems, and in particular about their security. A key approach to reasoning about systems is based on the idea of modelling; this project is concerned with modelling and reasoning about ecosystems of (distributed) systems.
Examples of questions to be addressed include: 

  • Logical modelling of the structure of systems ecosystems, with particular reference to compositional structure, substitution, and local reasoning (in the sense of Separation Logic and related systems). 
  • Policy design for decentralized, distributed systems.  
  • Logical modelling of agents’ reasoning about decentralized, distributed systems.  

Examples of logical work to be done include:

  • Developing the semantics, proof theory, and meta-theory of substructural modal logics, including epistemic and deontic systems.  
  • Developing theories of modelling distributed systems — including their people, process, and technology components — and their management policies using logical tools such as local reasoning (in the sense of Separation Logic and related systems)   and abduction. 
  • Integrating methodologies for constructing models with the   logical theories of the constructed models. 

This is a logic project, but it will appeal to an aspiring logician who is interested in the role of logic in the broader systems (and security) environment, both as a reasoning tool and a modelling tool. Nevertheless, there will be ample opportunity to develop substantial logical theory. 
Candidates should have a strong background in formal logic, with strong mathematical skills, and an interest in developing skills in modelling systems, security, and policy. Interests and skills in simulation modelling would be an advantage.
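As a flavour of the local reasoning mentioned above, the frame rule of Separation Logic (stated here in its standard textbook form, not as part of the project brief) lets a specification proved for the footprint of a command C be extended by any resource R that C does not modify:

```latex
\frac{\{P\}\; C\; \{Q\}}
     {\{P * R\}\; C\; \{Q * R\}}
\qquad \text{provided } \mathrm{mod}(C) \cap \mathrm{fv}(R) = \emptyset
```

Generalising compositional rules of this kind from program heaps to socio-technical ecosystems is one way to read the modelling questions listed above.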

Machine Learning/Artificial Intelligence in/for Cybersecurity

1st supervisor: Professor Mirco Musolesi (personal website)

Project description: The supervisor is happy to discuss projects in the broad area of Artificial Intelligence/Machine Learning for cybersecurity. Areas of interest include (but are not limited to): identification and obfuscation techniques; anomaly detection; resilience of networked systems and critical infrastructures; decision-making and planning. The project will be shaped according to the personal interests of the student.

Preventing the introduction of vulnerabilities

1st supervisor: Dr Jens Krinke (email)

Project description: Much work has been done to analyse source code and detect potential vulnerabilities contained in it. Usually, such vulnerabilities are bugs that need to be fixed. However, not much is known about how such vulnerabilities come into existence. Has a shortcut been taken? Has some insecure code been reused? Has a corner case been ignored? Has a vulnerability been caused by a third-party component? Has the code been automatically generated, e.g. by GitHub’s Copilot? The aim of this project is to study how vulnerabilities come into existence, find ways to identify early warning signs, and devise approaches that prevent the creation and introduction of vulnerabilities, whether by human developers or by code generation tools like GitHub’s Copilot.

Securing the Programmable Internet

1st supervisor: Dr Stefano Vissicchio (email)

Project description: Despite much painful experience, the Internet not only allows but also supports service-disruptive security attacks.
Indeed, the complete openness of the unguarded Internet infrastructure provides means for malicious users to carry out remote attacks, and even to amplify their magnitude.
We may now have a real, unique opportunity to finally change this status quo.
By enabling full programmability of networks, recently emerging paradigms (such as Software Defined Networking) and technologies (such as programmable network hardware) have the potential to be a game changer for Internet security.
Programmability and automation indeed promise to make detection and mitigation of Internet-based attacks feasible, cost effective and advantageous within the Internet’s core.
I am interested in supervising projects on the design, implementation and evaluation of techniques, mechanisms and systems that leverage network programmability to build the next generation of Internet defences.

Securing the Socio-technical elements of Digital Twins

Supervisors: Dr Uchenna Ani (IRIS profile) & Professor Jeremy Watson (IRIS profile)
Project description: The emergence of ‘Digital Twins’ as a concept, meaning static or dynamic models and simulations of real-world structures, has brought concerns relating to the cybersecurity of these models, the associated data, and inferences that can be drawn from combinations of partial information. Initial concerns came to light concerning Building Information Modelling, where designers were sharing sensitive details on the open web. Co-development requires information-sharing, however, so data structures used in DT models must allow dynamic ‘permissioning’ of users in distributed design teams. Further complexity arises when live data feeds from sensors, etc. must be combined with static design (CAD) data. Access to these could be highly sensitive, and user validation and permissioning may need to happen over timescales of seconds. This proposal seeks to explore the human/machine interactions that can promote productive yet secure design and operation.

The Language of Trust in Computer-Mediated Transactions

1st supervisor: Professor Licia Capra (email)

Project description: Sharing economy platforms, such as Airbnb and TaskRabbit, use a relatively new economic model that promotes inclusion and a fairer distribution of wealth compared to traditional models of production and consumption. This model is based on the sharing of spare resources, be they one's own home, car, skills, etc. Key to the success of these platforms is trust: users who have never met before, and who have never conducted this type of business in the past, create a profile on a sharing economy platform and start engaging in transactions with complete strangers. How do peers decide whom to trust in this type of computer-mediated economic model, where there is often not much more than a picture and a profile description to inform a trust decision? We know from decades of studies in the social sciences that spoken language plays a big role in the formation of trust between individuals; for example, people who use personal, plainspoken, positive and plausible language are often perceived as more trustworthy than those who do not. How does this translate to the digital world, where facial and tone cues are lost? And what happens when smart (digital) assistants start mediating human conversations? The goal of this research project is to study how trust is formed, and how it evolves, in these computer-mediated settings. We aim to develop computational linguistics models that explain the impact of different language features on trust decisions, and their impact on inclusion and participation in sharing economy platforms.

Trans* Violence Online: Technology-Facilitated Abuse (“Tech Abuse”) Against Trans and Non-Binary People

1st supervisor: Dr Leonie Maria Tanczer (email)

Project description: This is a call for a PhD student interested in studying online forms of violence and harassment against the transgender and gender non-conforming community. The exact remit of the project will be set by the student based on their particular research interest and skill set. Whilst the student will be able to further refine their methodological approach in the first year of their PhD, an aspired research vision must be determined at the application stage and showcased in the applicant’s initial PhD proposal (which the supervisor requires for this submission). A strong interest in topics around gender and technology is a must for this opportunity. In addition, prior experience, for example of having worked with/for trans and non-binary organisations, is strongly welcomed. The project is ideal for a self-motivated, dedicated, and organised student with a very good academic track record. Excellent verbal and written communication skills are expected. The successful applicant will have the chance to be affiliated with the “Gender and IoT” research lab at UCL STEaPP, with the candidate having a chance to gain interdisciplinary teaching and policy experience throughout their studies. Interested parties are strongly advised to familiarise themselves with the research background of the prospective supervisor and to discuss proposal ideas with them prior to handing in a submission.

Uncooperative Sensing using Smart Connected Devices

1st supervisor: Dr Kevin Chetty (email) – UCL Department of Security & Crime Science 

Project description: The Internet of Things (IoT) is emerging as the next step-change in the evolution of the internet and it is estimated that there will be more than 21 billion connected devices by 2025. The rapid and global rollout of these ‘smart’ technologies is creating congested wireless landscapes where communication signals such as WiFi, Bluetooth and 5G pervade our homes, towns and cities. Alongside this technological growth will emerge new possibilities for ubiquitous opportunistic sensing whereby these omnipresent signals are exploited for transport monitoring, ambient assisted living (e-healthcare), operational policing, gesture control etc, as well as more sinister applications such as covert spying by adversaries, which includes through-the-wall monitoring. 
This research project will investigate new techniques for opportunistic sensing that can be applied to our evolving IoT ecosystems, and gauge future capabilities that are both beneficial and unfavourable to society. The project will require students to develop knowledge and skills in both technical (e.g. machine learning, signal processing, communications etc) and non-technical areas (e.g. Crime Science, surveillance legislation etc) relevant to the topic.

Attribution and Zero Knowledge Proofs in International Cybersecurity

1st supervisor: Madeline Carr (IRIS profile)

At the international level, attribution of cyber incidents is a key impediment to making progress on state responses. In some instances, state actors have evidence that they believe is sufficient to confirm who was behind an attack. However, even when they do have that evidence, they are often unable to share it, as doing so would reveal too much about their own intelligence gathering practices. This leaves them with the option of either attributing illicit behaviour without evidence (which can be plausibly denied by the accused actor), not attributing activity that they believe is illegal, or making their complaints known through diplomatic (but not public) channels.

Zero knowledge proofs are a mechanism for sharing information without revealing details about how that information is known. Used for sharing cryptographic keys and other security information, they are a mature field applied in many security contexts, but they have not yet been employed for attribution. This PhD will bring together an understanding of international law, zero knowledge proofs, and diplomacy to address a critical problem in international security.
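To illustrate the mechanism (not a proposal for the protocol this PhD would actually develop), the sketch below is a toy Schnorr-style zero knowledge proof of knowledge of a discrete logarithm: the prover convinces the verifier that it knows x with y = g^x mod p without revealing x. The group parameters are tiny and illustrative, nowhere near secure sizes.

```python
import secrets

# Toy parameters: p = 2q + 1, with g generating the order-q subgroup.
# Real deployments would use standardised groups or elliptic curves.
p, q, g = 2039, 1019, 4

def prove(x):
    """Prover: demonstrate knowledge of x with y = g^x mod p."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)      # fresh secret nonce
    t = pow(g, r, p)              # commitment
    c = secrets.randbelow(q)      # challenge (chosen by the verifier in practice)
    s = (r + c * x) % q           # response; reveals nothing about x on its own
    return y, t, c, s

def verify(y, t, c, s):
    """Verifier: accept iff g^s == t * y^c (mod p), never learning x."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q)          # the prover's secret
y, t, c, s = prove(x)
print(verify(y, t, c, s))         # True: proof accepted, x never revealed
```

The research question here is whether analogous constructions could let a state demonstrate "we have evidence attributing this incident" without exposing the intelligence sources behind that evidence.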

Securing connected autonomous vehicles against adversarial attacks

Potential Supervisors: Nilufer Tuptuk (IRIS profile)

Autonomous connected vehicles (ACVs) process large sets of data from a wide range of external and internal sensors, such as cameras, LiDAR, radar, GPS and infrared sensors, to perceive their environment and make critical driving decisions in real time. Advances in Artificial Intelligence, in particular Machine Learning and Deep Learning, play a critical role in processing this data to train and validate automation and to ensure cars are able to navigate through traffic effectively and safely. In recent years, there has been a significant amount of research proposing adversarial attacks and some defence mechanisms against them, but we are yet to understand the impact of these attacks (i.e. their potential to harm) and the effectiveness of the proposed defence mechanisms. In this project the PhD candidate will: i) investigate how Artificial Intelligence (AI) is being used to support automation and decision-making; ii) develop a threat model for adversarial attacks; iii) analyse the impact of adversarial attacks on the vehicle and other road users when AI-based decision systems are under attack; and iv) develop a security monitoring tool that can prevent, diagnose and mitigate adversarial attacks in real time.
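As a minimal illustration of the kind of attack at issue, the sketch below applies a fast-gradient-sign-style perturbation to a linear classifier. The weights and "sensor" features are invented for the example and stand in for a real perception model, where the same idea operates on images or point clouds.

```python
# Illustrative weights of a trained linear model (hypothetical values).
w = [2.0, -1.5, 0.5, 3.0]
b = -0.5

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(x):
    return 1 if score(x) > 0 else 0   # e.g. 1 = "obstacle ahead"

def fgsm(x, eps):
    """Shift each feature by eps against the gradient of the score.
    For a linear model, the gradient with respect to the input is just w."""
    return [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

x = [0.4, 0.1, 0.2, 0.3]          # a "sensor reading" the model gets right
print(classify(x))                # 1: obstacle detected
x_adv = fgsm(x, eps=0.2)          # small, bounded perturbation per feature
print(classify(x_adv))            # 0: the perturbed reading is misclassified
```

The candidate's threat model would need to cover far richer attacks than this (physical patches, sensor spoofing, attacks on deep networks), but the example shows why small input perturbations can flip safety-critical decisions.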


Citizen participation in national cybersecurity

Potential supervisors: Peter Novizky (IRIS profile), Nilufer Tuptuk (IRIS profile)

The cybersecurity of industrial IoT (IIoT) systems and critical national infrastructure (CNI), along with their respective networks, has received considerable attention in recent years. The emergence of novel AI-based threats poses an additional challenge for the safety and security of complex industrial systems. Protecting these systems with AI countermeasures, alongside scalability demands and other trade-offs, carries inherent vulnerabilities too. Therefore, effective protection of CNI remains a considerable challenge for the future of these systems.

In this project the PhD candidate will explore the social and technical requirements, conditions, and ethical challenges of citizen participation in the protection of CNI and IIoT systems. These may include, but are not limited to:

  • the challenges associated with distributed cybersecurity systems
  • citizen-participation in distributed computing for dynamic national cybersecurity needs
  • the requirements and permissibility of voluntariness and the limits of regulatory policies
  • HW/SW requirements for implementation of such policies
  • proposal for regulation of active and/or passive, opt-in or opt-out regimes of citizen-participation in national cybersecurity protection


Role of time in time-critical cybersecurity decisions

Potential supervisors: Peter Novizky (IRIS profile), Nilufer Tuptuk (IRIS profile)

A key recognition of the National Digital Twin Programme is the role of time and aspects of timeliness in the datasets that describe infrastructures. As more and more critical national infrastructure (CNI) and large, complex industrial systems are managed by digital technologies, they are also challenged and defended by artificial intelligence (AI) in real time. Time-criticality therefore affects not only the datasets themselves but also the nature and morality of decision-making, the reaction-time intervals available, and the justifiability of such decisions.

The importance of time and time-critical automated decisions poses challenging ethical questions and legal liabilities for countries, operators, businesses, as well as users. Therefore, this project will investigate:

  • the relevance of time in ethical decision-making in time-critical systems
  • the threats of social engineering in time-critical cybersecurity decisions
  • relevance of time and timeliness in digital twin solutions, focusing on smart cities and private homes
  • inherent vulnerabilities of AI systems from the perspective of time-critical automated or augmented decision-making, e.g. lack of data; reliance on historical data that influences future decisions


Preventing Legged Robots from Adversarial Attacks

1st Supervisor: Dr Dimitrios Kanoulas (personal website, lab website)

Project description: Legged robots are already part of our world, helping with autonomous inspection and monitoring tasks. Their autonomy relies on their sensory system: the acquired information may be internal to the robot (e.g., joint states, acceleration, etc.) or external (e.g., vision, forces, etc.). Such highly complex robots are very sensitive to and dependent on their sensory system. A wrong reading may result in robot imbalance and failure, which may be hard or impossible to recover from (imagine a robot falling from a hill because of a wrong step, or getting stuck in an unstructured environment because it visually appeared to be structured). Adversarial attacks on the robot's sensory system, including noise-based attacks on any of its sensors, are thus a real and plausible threat to such autonomous robotic systems. This PhD topic will focus on this security question:

  • What types of attack can be mounted against legged robots?
  • What preventative measures could make legged robot navigation safe and robust?

The PhD topic will investigate traditional and machine learning techniques to deal with adversarial attacks on legged robots, and the methods will be developed and tested on real legged robots, such as ANYbotics ANYmal and Unitree A1, Go1, and B1  (https://youtu.be/9QEWIEDkshI).


Future crime threats at the intersection of cybersecurity and synthetic biology

Joint Supervisors: Professor Shane Johnson (IRIS profile), Dr Darren Nesbeth (IRIS profile)

Project description: When new technologies, such as synthetic biology, are developed, it is common for their crime and security implications to be overlooked or given inadequate attention, which can lead to a ‘crime harvest’. Potential methods for the criminal exploitation of synthetic biology need to be understood to assess their impact, evaluate current policies and interventions, and inform the efficient allocation of limited resources. The UCL Crime Science and UCL Biochemical Engineering Departments have joined forces to offer a project investigating the intersection of cybersecurity and synthetic biology using advanced data capture and analytical techniques and state-of-the-art wet laboratory facilities and training.