UCL Cybersecurity CDT


Projects

These are some nominated projects, but supervisors would also entertain tweaked versions of these projects and ideas.

(Reducing) Crime in the Metaverse

1st supervisor: Professor Shane Johnson (email)

Project Description: What exactly the metaverse is, or will be, is not currently clear. However, common depictions include VR social (and gaming) platforms that allow users to interact in an immersive way. For example, in Horizon Worlds, players can create and explore worlds together in a VR environment.


Business applications also exist and are likely to increase. Concerns regarding offending in such environments have existed for some time; in November 2021, for example, a Horizon Worlds beta tester reported being harassed in that environment. Looking forward, haptic suits will make VR experiences more immersive still, potentially making offending in these environments more likely and also more harmful. This project would explore the emerging threats in the metaverse, how users might keep themselves safe, and what could be done, and by whom, to make the metaverse a safe place to be.

Autonomous Systems for Cybersecurity

1st supervisor: Professor Mirco Musolesi (personal website)

Project description: The supervisor is happy to discuss projects in the broad area of autonomous and semi-autonomous systems for cybersecurity, approached from different points of view, potentially including behavioural and socio-technical aspects, and potentially involving supervisors from different disciplines. The project will be shaped according to the personal interests of the student.

Measuring cybersecurity behaviour

1st Supervisor: Dr Ingolf Becker (personal website)

Project description: A lot of cybersecurity behaviours are measured through surveys. Their questions and associated constructs (and often associated tools for measuring them) have been borrowed from a range of disciplines, including psychology and organisational behaviour, and 'transmogrified' into security versions and/or 'mashed up' with other versions.


While perhaps valid in their original setting, the widespread re-use of these questions in different contexts raises questions of validity. This project would focus on a systematic analysis of existing measurement approaches to cybersecurity. Given the scale of textual data, the project will touch on NLP techniques as well as large data analysis pipelines.
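As a toy illustration of one such technique (the survey items and the similarity measure below are hypothetical assumptions, not drawn from the project), questions borrowed across instruments could be flagged by comparing their token overlap:

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two survey items."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical items: one from a general psychology scale,
# one 'transmogrified' into a security version.
original = "I am confident I can handle unexpected problems"
security = "I am confident I can handle unexpected security problems"
unrelated = "How often do you update your passwords"

print(cosine_similarity(original, security))   # high overlap
print(cosine_similarity(original, unrelated))  # no overlap
```

A real pipeline would of course go beyond surface overlap (e.g. semantic embeddings), but even this sketch shows how near-duplicate items can be surfaced at scale.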

Evidence Critical Systems

1st supervisor: Professor Steven Murdoch (email)

Project description: An important requirement of some computer systems is to produce evidence that can be relied upon to resolve disputes. If such a system fails, by producing incorrect or confusing evidence, consequences can be severe with people losing money or even being imprisoned (e.g. in the Post Office Scandal).


So far, there are no well-tested techniques for building such “evidence critical systems” so in this project we will investigate how to build computer systems that can produce evidence that would help fairly and efficiently resolve disputes, including through the legal system. Approaches that could be applied include cryptographic protection of important data, analysis of software to establish correctness, and usability evaluation to understand how evidence would be interpreted.

Gender and Technology

1st supervisor:  Dr Leonie Maria Tanczer (email)

Project description:  This is an open call for PhD projects that study the intersection points of gender, cybersecurity and technology. Proposals can cover a range of issues, dependent on the interest/skill set of the relevant applicant.


Topics can include, but are not limited to:

  • FemTech e.g., maternity or fertility technologies
  • Gender-sensitive examinations of topics e.g., open-source software/hardware communities
  • Motherhood and digital entrepreneurship 
  • Cyber/Xenofeminism
  • Feminist technology studies /Feminist theories of technology
  • Critical masculinity studies and tech culture
  • Recruitment and retention strategies of underrepresented groups in the tech sector 
  • Online harassment/technology-facilitated abuse ('tech abuse')

Successful applicants will be part of a vibrant and growing “Gender and Tech” research community at UCL Computer Science and will be actively involved in the research, teaching, and policy activities that the group pursues. Interested parties are strongly advised to familiarise themselves with the research background of the prospective supervisor and to discuss proposal ideas with them prior to handing in a submission. A strong interest in topics around gender, cybersecurity, and technology is a must for this opportunity, as are a very good academic track record and excellent verbal and written communication skills.

Machine Learning/Artificial Intelligence in/for Cybersecurity

1st supervisor: Professor Mirco Musolesi (personal website)

Project description: The supervisor is happy to discuss projects in the broad area of Artificial Intelligence/Machine Learning for cybersecurity. Areas of interest include (but are not limited to) decision-making & planning using AI techniques, e.g., (Multi-agent) Reinforcement Learning; anomaly detection; and resilience of networked systems and critical infrastructures. The project will be shaped according to the personal interests of the student.

Preventing the introduction of vulnerabilities

1st supervisor: Dr Jens Krinke (email)

Project description: Much work has been done to analyse source code and detect potential vulnerabilities contained in source code. Usually, such vulnerabilities are bugs that need to be fixed. However, not much is known about how such vulnerabilities come into existence.


Has a shortcut been made? Has some insecure code been reused? Has a corner case been ignored? Has a vulnerability been caused by a third-party component? Has the code been automatically generated, e.g. by GitHub’s Copilot? The aim of this project is to study how vulnerabilities come into existence, find ways to identify early warning signs, and devise approaches that prevent the creation and introduction of vulnerabilities, whether by human developers or by code generation tools like GitHub’s Copilot.

Securing the Programmable Internet

1st supervisor: Dr Stefano Vissicchio (email)

Project description: Despite much painful experience, the Internet not only allows but also supports service-disruptive security attacks. Indeed, the complete openness of the unguarded Internet infrastructure provides means for malicious users to carry out remote attacks, and even to amplify their magnitude.


We may now have a real, unique opportunity to finally change this status quo. By enabling full programmability of networks, recently emerging paradigms (such as Software Defined Networking) and technologies (such as programmable network hardware) have the potential to be a game changer for Internet security. Programmability and automation promise to make detection and mitigation of Internet-based attacks feasible, cost-effective and advantageous within the Internet’s core. I am interested in supervising projects on the design, implementation and evaluation of techniques, mechanisms and systems that leverage network programmability to build the next generation of Internet defences.

Securing the Socio-technical elements of Digital Twins

Supervisors: Dr Uchenna Ani (IRIS profile) & Professor Jeremy Watson (IRIS profile)
 
Project description: The emergence of ‘Digital Twins’ as a concept, meaning static or dynamic models and simulations of real-world structures, has brought concerns relating to the cybersecurity of these models, the associated data, and inferences that can be drawn from combinations of partial information.


Initial concerns came to light concerning Building Information Modelling, where designers were sharing sensitive details on the open web. Co-development requires information-sharing, however, so data structures used in DT models must allow dynamic ‘permissioning’ of users in distributed design teams. Further complexity arises when live data feeds from sensors, etc. must be combined with static design (CAD) data. Access to these could be highly sensitive, and user validation and permissioning may need to happen over timescales of seconds. This proposal seeks to explore the human/machine interactions that can promote productive yet secure design and operation.

The Language of Trust in Computer-Mediated Transactions

1st supervisor: Professor Licia Capra (email)

Project description: Sharing economy platforms, such as Airbnb and TaskRabbit, use a relatively new economic model that promotes inclusion and fairer distribution of wealth, compared to traditional models of production and consumption.


This model is based on the sharing of spare resources, be they one’s home, car, skills, etc. Key to the success of these platforms is trust: users who have never met before, and who have never conducted this type of business in the past, create a profile on a sharing economy platform and start engaging in transactions with complete strangers. How do peers decide whom to trust in this type of computer-mediated economic model, where there is often not much more than a picture and a profile description to inform a trust decision? We know from decades of studies in the social sciences that spoken language plays a big role in the formation of trust between individuals; for example, people who use personal, plainspoken, positive and plausible language are often perceived as more trustworthy than those who do not. How does this translate to the digital world, where facial and tonal cues are lost? And what happens when smart (digital) assistants start mediating human conversations? The goal of this research project is to study how trust is formed, and how it evolves, in these computer-mediated settings. We aim to develop computational linguistics models that explain the impact of different language features on trust decisions, and to assess their impact on inclusion and participation in sharing economy platforms.
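By way of a toy sketch only (the cue lexicons and features below are illustrative assumptions, not the project’s method), such a model might start from simple surface features of a profile description:

```python
# Toy feature extraction for trust-related language cues in a profile text.
# The cue word lists below are illustrative stand-ins, not validated resources.
PERSONAL = {"i", "my", "me", "we", "our"}
POSITIVE = {"happy", "welcome", "love", "great", "enjoy"}

def language_features(text: str) -> dict:
    """Return crude rates of 'personal' and 'positive' language in a text."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    n = len(tokens) or 1  # avoid division by zero on empty input
    return {
        "personal_rate": sum(t in PERSONAL for t in tokens) / n,
        "positive_rate": sum(t in POSITIVE for t in tokens) / n,
        "avg_word_len": sum(len(t) for t in tokens) / n,  # crude proxy for plain language
    }

profile = "I love hosting guests and my home is a happy, welcoming place."
print(language_features(profile))
```

A real model would use validated lexicons and learned representations, and would then relate such features to observed trust decisions (e.g. booking rates).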

Uncooperative Sensing using Smart Connected Devices

1st supervisor: Dr Kevin Chetty (email) – UCL Department of Security & Crime Science 

Project description: The Internet of Things (IoT) is emerging as the next step-change in the evolution of the internet and it is estimated that there will be more than 21 billion connected devices by 2025.


The rapid and global rollout of these ‘smart’ technologies is creating congested wireless landscapes where communication signals such as WiFi, Bluetooth and 5G pervade our homes, towns and cities. Alongside this technological growth will emerge new possibilities for ubiquitous opportunistic sensing, whereby these omnipresent signals are exploited for transport monitoring, ambient assisted living (e-healthcare), operational policing, gesture control, etc., as well as more sinister applications such as covert spying by adversaries, including through-the-wall monitoring.
This research project will investigate new techniques for opportunistic sensing that can be applied to our evolving IoT ecosystems, and gauge future capabilities that are both beneficial and unfavourable to society. The project will require students to develop knowledge and skills in both technical (e.g. machine learning, signal processing, communications etc) and non-technical areas (e.g. Crime Science, surveillance legislation etc) relevant to the topic.

Cybersecurity for Connected and Autonomous Vehicles

Potential supervisor: Nilufer Tuptuk (IRIS profile)

Project description: Connected Autonomous Vehicles (CAV) rely on a large set of complex data obtained from a wide range of internal and external sensors, and AI techniques to perceive their environment and make critical decisions to enable autonomous driving. 


Ensuring the integrity and security of the data processes is essential to the proper functioning of CAVs and to ensure the safety of users, other vehicles and the supporting environment. The supervisor is happy to discuss potential projects related to cybersecurity for connected autonomous vehicles including potential vulnerabilities of AI techniques, and testing and validating cybersecurity processes. 

Citizen participation in national cybersecurity

Potential supervisors: Peter Novizky (IRIS profile), Nilufer Tuptuk (IRIS profile)

Project description: Cybersecurity of industrial IoT (IIoT) systems, together with critical national infrastructure (CNI) and their respective networks, has received considerable attention in recent years. The emergence of novel AI-based threats poses an additional challenge for the safety and security of complex industrial systems.


The protection of these systems with AI countermeasures, along with scalability demands and other trade-offs, carries inherent vulnerabilities too. Therefore, effective protection of CNI remains a considerable challenge for the future of these systems.

In this project the PhD candidate will explore the social and technical requirements, conditions, and ethical challenges of citizen participation in the protection of CNI and IIoT systems. These may include, but are not limited to:

  • the challenges associated with distributed cybersecurity systems
  • citizen-participation in distributed computing for dynamic national cybersecurity needs
  • the requirements and permissibility of voluntariness and the limits of regulatory policies
  • HW/SW requirements for implementation of such policies
  • proposal for regulation of active and/or passive, opt-in or opt-out regimes of citizen-participation in national cybersecurity protection

Role of time in time-critical cybersecurity decisions

Potential supervisors: Peter Novizky (IRIS profile), Nilufer Tuptuk (IRIS profile)

Project description: One of the key recognitions of the National Digital Twin Programme is the role of time and aspects of timeliness in datasets about infrastructures. As more and more critical national infrastructure (CNI) and large, complex industrial systems are managed by digital technologies, they are also challenged and defended by artificial intelligence (AI) in real time.


Thus, time-criticality affects not only the datasets themselves but also the nature and morality of decision-making, the possible reaction-time intervals, and the justifiability of such decisions.

The importance of time and of time-critical automated decisions poses challenging ethical questions and legal liabilities for countries, operators, businesses, as well as users. Therefore, this project will investigate:

  • the relevance of time in ethical decision-making in time-critical systems
  • the threats of social engineering in time-critical cybersecurity decisions
  • the relevance of time and timeliness in digital twin solutions, focusing on smart cities and private homes
  • inherent vulnerabilities of AI systems from the perspective of time-critical automated or augmented decision-making, e.g. lack of data; reliance on historical data that influences future decisions

Protecting Legged Robots from Adversarial Attacks

1st Supervisor: Dr Dimitrios Kanoulas (personal website, lab website)

Project description: Legged robots are already part of our world, helping with autonomous inspection and monitoring tasks. Their autonomy relies on their sensory system: the acquired information might either be internal to the robot (e.g., joints, acceleration, etc.) or external (e.g., vision, forces, etc.). Such highly complex robots are very sensitive to, and dependent on, their sensory system.


A wrong reading may result in robot imbalance and failure, which might be hard or impossible to recover from (imagine a robot falling down a hill because of a wrong step, or getting stuck in an unstructured environment because its vision suggested it was a structured one). Adversarial attacks on the robot’s sensory system are thus a real and credible threat to such autonomous robotic systems. These might include noise-based attacks on any of the sensory subsystems. This PhD topic will focus on this security question: what types of attacks can be mounted against legged robots, and what preventive measures could make legged robot navigation safe and robust?

The PhD topic will investigate traditional and machine learning techniques to deal with adversarial attacks on legged robots, and the methods will be developed and tested on real legged robots, such as the ANYbotics ANYmal and the Unitree A1, Go1, and B1 (https://youtu.be/9QEWIEDkshI).
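As a minimal sketch of the traditional end of this spectrum (the window size, threshold, and sensor stream below are illustrative assumptions, not the project's method), a noise-injection attack on a joint sensor could be flagged by comparing each reading against a rolling median:

```python
import statistics

def flag_anomalies(readings, window=5, k=3.0):
    """Flag readings that deviate strongly from a rolling median --
    a toy stand-in for the attack-detection methods the project would study."""
    flags = []
    for i, r in enumerate(readings):
        ref = readings[max(0, i - window):i]
        if len(ref) < 3:
            # Not enough history to judge; assume the reading is fine.
            flags.append(False)
            continue
        med = statistics.median(ref)
        # Median absolute deviation; floor avoids a zero threshold.
        mad = statistics.median([abs(x - med) for x in ref]) or 1e-6
        flags.append(abs(r - med) > k * mad)
    return flags

# A smooth joint-angle stream with one injected (adversarial) spike:
stream = [0.50, 0.51, 0.49, 0.50, 5.00, 0.52, 0.51]
print(flag_anomalies(stream))
```

Learned detectors would replace the hand-set threshold with a model of normal sensor dynamics, but the principle of cross-checking readings against expectation is the same.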

Future crime threats at the intersection of cybersecurity and synthetic biology

Joint Supervisors: Professor Shane Johnson (IRIS profile), Dr Darren Nesbeth (IRIS profile)

Project description: When new technologies, such as synthetic biology, are developed, it is common for their crime and security implications to be overlooked or given inadequate attention, which can lead to a ‘crime harvest’.


Potential methods for the criminal exploitation of synthetic biology need to be understood to assess their impact, evaluate current policies and interventions and inform the allocation of limited resources efficiently. UCL Crime Science and UCL Biochemical Engineering Departments have joined forces to offer a project to investigate the intersection of cybersecurity and synthetic biology using advanced data capture and analytical techniques and state-of-the-art wet laboratory facilities and training.

Philosophical and logical foundations of security and its methodology

Primary Supervisor: David Pym

Description: The philosophy of information is increasingly well-developed, providing philosophical analysis of the notion of information both from a historical and a systematic perspective. With the emergence of the empiricist theory of knowledge in early modern philosophy, the development of various mathematical theories of information in the twentieth century and the rise of information technology, the concept of 'information' has acquired a central place in the sciences, in engineering, and in society.


In logic, there have been substantive developments of systems of logic having semantics with resource and information interpretations. See

https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fplato....

for a substantive discussion.

However, the philosophy of information security remains much less well developed. Security provides a way of thinking about ecosystems of systems that is at once both adapted to the concerns of security and a quite general perspective on the form and behaviour of ecosystems of systems. Moreover, the concepts of information security seem to be applicable to physical security and hybrid information-physical security.

So, alongside a general systems perspective sits the perspective on systems that is provided by the concepts of security, including confidentiality, integrity, and availability, as well as sustainability and resilience. Moreover, alongside this perspective sit the analyses of the value and behaviour of systems that are given by economics and psychology. Thus the conceptual organization of security is complex and delicate, but largely undescribed.

The methodology of the study and practice of security also raises a number of issues around the scientific status of security.

The aim of this PhD project is to explore the philosophical and logical foundations of security and its methodologies, building on the perspectives of the philosophy of information, the philosophy of computing, systems theory, and logic. Here are some Bentham's Gaze posts that give a flavour of some of these ideas:

https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.be...

https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.be...

https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.be...

https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.be...

organisations-can-navigate-how-to-support-effective-security-behaviour-change

The Limits of Liability Regimes for Emerging Digital Technologies

Primary Supervisor: Irina Brass

Description: This project addresses the critical responsibility, accountability, and liability challenges arising from the growing deployment, confluence, and coexistence of digital technologies such as the Internet of Things (IoT) and Artificial Intelligence (AI).


These digital technologies have very dynamic features that blur the boundaries of responsibility and accountability in complex supply and use chains, thus challenging existing liability regimes. First, they are “general purpose” technologies deployed in specific contexts that have not always been anticipated by their developers. Second, they can undergo several modifications through their lifecycle if they are self-learning or adaptive in operation. Third, they can become easily compromised, without the awareness of users or manufacturers, if they have cybersecurity vulnerabilities that are not patched correctly.

The successful candidate for this project is expected to take an interdisciplinary approach across law, public policy, economics, and information security to investigate this topic. They should be self-motivated, rigorous, and proactive in their approach to research, writing, and academic life. Ability to read legal documents is essential for this project. Candidates are expected to submit a well-documented, thought-provoking PhD research proposal within the scope of this topic but have flexibility on the approach, angle, and research questions they want to pursue.

Cybersecurity of the Internet of Medical Things (IoMT)

Primary Supervisor: Irina Brass

Description: Growing evidence of cyberattacks, data leaks, and ransomware in digital healthcare is showing the extent to which this sector is seriously affected by poor cybersecurity practices and vulnerabilities associated with connected, intelligent medical devices, which constitute the Internet of Medical Things (IoMT).


IoMT can include medical devices deployed in a clinical or healthcare setting, wearables, and implantables that have a medical purpose. They can be software-based or standalone Software as a Medical Device (SaMD). Known security vulnerabilities in the IoMT include weak authentication, unpatched software, legacy devices, or unsecured network access (to name but a few). In this project, the successful candidate will: i) investigate the current state of cybersecurity vulnerabilities and exploits in the IoMT, analysing available data from several databases of reported or recalled medical devices; ii) analyse the impact of these cybersecurity vulnerabilities on patient safety and on the resilience of the IoMT and healthcare infrastructure; and iii) consider socio-technical measures and interventions that can address these critical vulnerabilities, including what healthcare professionals closest to the point of care can do to contribute to the security and safety of the IoMT.

The successful candidate for this project is expected to take an interdisciplinary approach across public policy and administration, information and network security, law, and behavioural science to investigate this topic. They are also expected to have very good command of both qualitative and quantitative approaches to investigate the current state of play of IoMT cybersecurity. They should be self-motivated, rigorous, and proactive in their approach to research, writing, and academic life. Candidates are expected to submit a well-documented, thought-provoking PhD research proposal within the scope of this topic but have flexibility on the approach, angle, and research questions they want to pursue.