
UCL Department of Security and Crime Science


PhD scholarships

If you are applying for a PhD with us, you will need to think about how to fund your studies. Most applicants apply for a scholarship, either from UCL or another funding body.

UCL scholarships

We usually have a number of scholarships available through the Dawes Centre for Future Crime at UCL. These scholarships are offered via collaborations with our partner departments or external organisations and may differ in their details, so please be careful to read the exact requirements for each scholarship before applying. 


DAWES-UCL SECReT scholarships

Several scholarships are available for our Route A programme (four-year MRes + PhD) OR Route B programme (three-year PhD, no MRes).

These scholarships are available for pre-set topics or for open topics. Pre-set topics are specific topics that have been suggested by supervisors here at UCL and which they will be happy to supervise. Open topics are topics proposed by the applicant. Further details are below.

Please note: You may apply for either Route A (four years, i.e. MRes + PhD) or Route B (three-year PhD only) for these scholarships. If you apply for Route B but we feel, given your background, that you would benefit substantially from the extra year, we will instead recommend an offer for the Route A course and award a four-year scholarship to the successful applicant.

Eligibility: These awards are open to UK-level fee paying students only.
What the awards cover: Each award covers a full stipend of approx. £17,631 per annum, full UK-level fees, plus conference funds of £1,200 per annum.
Deadline for application: The deadline for 2023 has not yet been set. However, we advise that you apply early, as we will award the scholarships as soon as we identify excellent candidates.
How to apply: All applications are made through the usual UCL SECReT application procedure. Please visit the Applying for a PhD page on our website for information on the application process.

Pre-set topics

Anticipating and governing new threats around autonomous vehicles

Self-driving cars have moved rapidly from being seen as impossible to being seen as inevitable. The suggested benefits for public safety and transport systems are substantial, but important questions remain around the security of new systems, the possible inability of drivers to escape dangerous situations (e.g. where a crowd surrounds a self-driving car), and unclear liability in the event of catastrophe. As transport systems become increasingly interconnected, the possibility for error or terror to have broad ramifications increases. This project, based in UCL Science and Technology Studies and carried out in close collaboration with the Dawes Centre for Future Crime at UCL, will seek to understand the possible opportunities and challenges surrounding self-driving cars or ‘autonomous vehicles’. The research will be mainly qualitative, with the possibility of quantitative analysis of trends and other data. It will suit a candidate with interests and expertise in science and technology policy, law, sociology of science or criminology. The successful candidate will likely have a strong first degree and an MA/MSc in a relevant discipline, with an interest in combining perspectives from the social sciences, humanities, and science and engineering. The student will work closely with the Driverless Futures project, which has been funded by the Economic and Social Research Council (ESRC).

Potential supervisor(s): Dr Jack Stilgoe

Testing the credibility of future threat models in food fraud

Outline
Synthetic biology and biohacking are together broadening the toolbox available to criminals and their opportunities to commit profitable crime. We propose to edit the genome of equine and other cell lines in a selection of ways intended to render the cells undetectable by the PCR-based tests for horse meat currently under development by the LGC Group and the UK Food Standards Agency (FSA). This ‘ethical hacking’ project will assist agencies such as the FSA in anticipating future threats to food authenticity in the context of horse meat and other food fraud scenarios.

Scope
Food fraud is becoming increasingly prevalent within the European food industry, partly due to the pressures faced by producers in the current challenging financial climate and partly due to the international nature of modern food production. The recent Europe-wide issue involving the detection of the undeclared presence of horse meat in beef products destined for human consumption is one high-profile example. These and other cases of meat-based food fraud have driven a robust response by the Food Standards Agency, and others, in supporting the establishment of internationally standardised, accurate analytical approaches to the quantitative detection of meat adulteration. These methods are based mostly upon the polymerase chain reaction (PCR) for amplifying DNA.

Background
The main drivers of the emerging discipline of synthetic biology are the rapidly decreasing cost of both DNA synthesis and DNA sequencing – both of which are dropping to the level of cheap, commoditised services. In turn, the CRISPR/Cas9 genome editing technology has reached such a high profile because it is as easy to use and low in cost as it is powerfully effective. All these factors have also combined to enable the emergence of the ‘biohacker’ movement, whereby members of the public can take into their own hands, kitchens, sheds and community spaces the tools of molecular bioscience previously restricted to universities and companies.

Potential supervisor(s): Professor Shane Johnson and Dr Darren Nesbeth 

Artificial Intelligence for securing cyber-physical systems

Critical national infrastructure (CNI) services such as utilities, manufacturing, transportation systems and healthcare are becoming increasingly reliant on computer systems that control physical processes (so-called cyber-physical systems (CPS)) over a network. The adoption of new computing capabilities such as edge computing, IoT sensors with wireless connectivity, and analytics, collectively known as the Industrial Internet of Things (IIoT), is seen as the next logical step in building smart and efficient CPSs in industrial applications such as agriculture, transportation, utilities and manufacturing. The deployment of IIoT into existing and new systems raises security concerns, as these devices and networks have vulnerabilities that can be exploited by attackers. These environments are prone to attacks from a wide variety of adversaries with different sets of skills and capabilities, such as disgruntled employees, nation-states, organised crime groups, hacktivists and lone actors. As in other domains, there has been a surge in applying AI-based models to improve the security of CPS and make that security more proactive. However, the reliability of these solutions is being questioned, as attackers can use the same AI-based techniques to design intelligent attacks that evade detection. Areas of interest include, but are not limited to: i) investigating the effectiveness of AI-based security solutions in CPS against existing and future attacks; ii) developing AI-based solutions that prevent, detect and respond to attacks in real time; and iii) developing performance metrics to explain and demonstrate the reliability of AI-based security solutions.

Potential supervisor(s): Dr Nilufer Tuptuk

Securing connected autonomous vehicles against adversarial attacks

Autonomous connected vehicles (ACV) process large sets of data from a wide range of external and internal sensors, such as cameras, LiDAR, radar, GPS and infrared sensors, to perceive their environment and make critical driving decisions in real time. Advances in Artificial Intelligence, in particular Machine Learning and Deep Learning, play a critical role in processing this data to train and validate automation and to ensure cars are able to navigate through traffic effectively and safely. In recent years there has been a significant amount of research proposing adversarial attacks, and some defence mechanisms against them, but we are yet to understand the impact of these attacks (i.e. their potential to harm) and the effectiveness of the proposed defence mechanisms. In this project the PhD candidate will: i) investigate how Artificial Intelligence (AI) is being used to support automation and decision-making; ii) develop a threat model for adversarial attacks; iii) analyse the impact of adversarial attacks on the vehicle and other road users when AI-based decision systems are under attack; and iv) develop a security monitoring tool that can prevent, diagnose and mitigate adversarial attacks in real time.

Potential supervisor(s): Dr Nilufer Tuptuk

Citizen participation in national cybersecurity protection

Cybersecurity of industrial IoT (IIoT) systems, critical national infrastructure (CNI) and their respective networks has received considerable attention in recent years. The emergence of novel AI-based threats poses an additional challenge to the safety and security of complex industrial systems. The protection of these systems with AI countermeasures, along with scalability demands and other trade-offs, carries inherent vulnerabilities too. Effective protection of CNI therefore remains a considerable challenge for the future of these systems.

In this project the PhD candidate will explore the social and technical requirements, conditions, and ethical challenges of citizen participation in the protection of CNI and IIoT systems. These may include, but are not limited to:

  • The challenges associated with citizen participation in distributed computing for dynamic national cybersecurity needs;
  • The requirements and permissibility of voluntariness and the limits of regulatory policies;
  • HW/SW requirements for implementation of such policies;
  • Proposal for regulation of active and/or passive, opt-in or opt-out regimes of citizen-participation in national cybersecurity protection.

Potential supervisor(s): Dr Peter Novizky and Dr Nilufer Tuptuk

Role of time in time-critical cybersecurity decisions

One of the key recognitions of the National Digital Twin Programme is the role of time and timeliness in datasets about infrastructures. As more and more critical national infrastructure (CNI) and large, complex industrial systems are managed by digital technologies, they are also challenged and defended by artificial intelligence (AI) in real time. Time-critical considerations therefore affect not only the datasets themselves but also the nature and morality of decision-making, the possible reaction time intervals, and the justifiability of such decisions.

The importance of time in time-critical automated decisions poses challenging ethical questions and legal liabilities for countries, operators, businesses and users alike. This project will therefore investigate:

  • The relevance of time in ethical decision-making in time-critical systems;
  • The threats of social engineering in time-critical cybersecurity decisions;
  • Relevance of time and timeliness in digital twin solutions, focusing on smart cities and private homes;
  • Inherent vulnerabilities of AI systems from the perspective of time-critical automated or augmented decision-making, e.g. lack of data; reliance on historical data that influences future decisions.

Potential supervisor(s): Dr Peter Novizky and Dr Nilufer Tuptuk

Exploring and evaluating policy and regulatory methods, strategies and techniques to anticipate and address future crimes

In a very real sense ‘crimes of the future’ are an emergent property of the advance of civilisation. It is not a question of if new criminal opportunities will be exploited, but when and how. The Dawes Centre for Future Crime at UCL (DCFC) addresses these questions directly, aiming to both forecast the nature and spread of such crimes, and propose methods for tackling them effectively before they become established.

We are currently welcoming applications for a PhD to explore policy or regulatory methods, strategies and techniques used in the UK or elsewhere to anticipate and address future criminal threats emerging from social or technological change.  Proposals are welcome for qualitative or quantitative research exploring and evaluating the effectiveness and practical implications of any of the following:

  • Application of futures methods in crime-related policy and regulation;
  • Special policymaking or law-making techniques (e.g. policy or regulatory crime-proofing);
  • Specialised roles, task forces, think tanks or government agencies;
  • Future crime risk assessment and management by private or public organisations;
  • Special regulatory requirements for industry.

Proposals concerning other topics within the same area are also welcome.

Proposals using any theoretical framework are welcome; however, it is essential that the proposed research is empirical and focuses on policy or regulatory practice that has strong potential for practical impact or application. Comparative research is particularly welcome.

Potential supervisor: Dr Lorenzo Pasculli, UCL Security and Crime Science.

Preventing and responding to future crime in converging digital and physical worlds

The development and commercialisation of technologies that support the increasing digitalisation of our lives and even a convergence of digital and physical environments, such as, for instance, the metaverse, are likely to create new opportunities and motivations for crime. In order to be able to effectively prevent and respond to such criminal threats, new policy, regulatory, and enforcement frameworks might be required to complement or replace more traditional countermeasures. Various jurisdictions such as the EU, the UK, and Australia are introducing new legislation to make online environments safer. Rigorous and innovative research is required to support the design and implementation of these laws and other possible countermeasures.

We are currently welcoming applications for a PhD to explore the risks of future crime generated by digital technologies as well as possible countermeasures.  Proposals are welcome for qualitative or quantitative research exploring and evaluating:

  • Opportunities and motivations for crime generated by specific digital technologies - especially, but not limited to those allowing the integration of physical and digital environments such as for instance the metaverse or the Internet of Things;
  • Policy, regulatory, and enforcement frameworks to identify, assess and mitigate the crime risks of digital technologies and environments (e.g. regulatory requirements for industry, self-regulation, private-public partnerships and exchange of information);
  • Policy, regulatory, and enforcement frameworks to prevent and respond to crime enabled by digital technologies and environments (e.g. cybersecurity strategies, new enforcement or regulatory powers, online safety legislation, international law and soft law).

Proposals concerning other topics within the same area are also welcome.

Proposals using any theoretical framework are welcome; however, it is essential that the proposed research is empirical and focuses on policy or regulatory practice that has strong potential for practical impact or application. Comparative and international research is particularly welcome.

Potential supervisors: Dr Lorenzo Pasculli, UCL Security and Crime Science and Professor Shane Johnson, UCL Security and Crime Science


Open topics

If you have a topic that you would like to explore that is not covered by the pre-set topics, you may apply for an ‘open topic’ scholarship. This means you may develop your own idea for a research topic, approach academics at UCL to find two potential supervisors, and then apply for one of the DAWES-UCL SECReT scholarships. In this case, please detail your proposed research topic in your application; note that your topic must fit with the future crimes vision and agenda of the Dawes Centre for Future Crime. If you would like to check the suitability of your proposed topic before submitting an application, please email Prof Shane Johnson.

Remember that although our focus is future crime, this does not mean that we will only award scholarships to ‘high tech’ research topic proposals. We are equally interested in the way that, for instance, changes in society/demographics/people movement might influence crime – and we are happy to consider proposals that combine social sciences with engineering/mathematical/physical sciences. Possible research areas that we are happy to look at include (but are not restricted to) the following:

Applications
  • Drones 
  • Autonomous vehicles
  • Smart rail signalling systems
  • Non-GPS navigation
  • Blockchain 
  • Brainwave reading/control 
  • Smart lighting
  • Performance-enhancing prosthetics
  • Instructional technology
Generic technologies
  • Hyper-connectivity 
  • AI 
  • Robotics/Nanobots 
  • Quantum computing
  • SCADA 
  • 3D printing
  • Mass customisation
  • Portable, renewable power
  • Wearable ICT
  • Smart materials
  • Stealth technologies
  • Sensors, sensor fusion
  • IOT
  • Pharma
  • Chemical synthesis
  • GM/CRISPR
  • Advanced optics
  • Hacking (both senses)
Background changes
  • Climate change e.g. temperature, sea level/acidification, water, food shortage 
  • Mass migration
  • Antimicrobial resistance
  • Commodity scarcities
  • Commodity substitution e.g. Mo for Pt catalysts
  • Universal wage
  • New finance/banking models
  • New working patterns
  • New transport/movement patterns
  • Any concentration or dispersal of value, anywhere in the value chain

About the funders

The Dawes Centre for Future Crime at UCL was established in 2016 with a £3.7M grant from the Dawes Trust. It has the broad vision of conducting cutting-edge, application-focused research designed to meet the challenges of the changing nature of crime. Research aims both to forecast the nature and spread of future crime opportunities, and to propose methods for tackling them effectively before they become established.

UCL SECReT is a £17m international centre for PhD training in security and crime science at University College London, the first centre of its kind in Europe. We offer the most comprehensive integrated PhD programme for students wishing to pursue multidisciplinary security or crime-related research degrees.


UCL Cybersecurity CDT scholarships

These studentships are offered via the UCL Centre for Doctoral Training in Cybersecurity, an exciting collaboration between three UCL departments - Computer Science (CS), Security and Crime Science (SCS), and Science, Technology, Engineering and Public Policy (STEaPP) - designed to increase the capacity of the UK to respond to future information and cybersecurity challenges.

Several of the 10 studentships on offer via the CDT this year will be awarded to students whose research will span both the Computer Science and Security and Crime Science departments. (We would expect you to identify a potential supervisor in each department when applying.)

For full details about the CDT in Cybersecurity and the studentships, please click here.


Other scholarships

There are other sources of funding available to graduate research students. The UCL website lists some of these, and you can visit external websites such as the Postgraduate Studentships website for funding options external to UCL. Please direct any questions you may have regarding other scholarships to the relevant funding organisation, as we are only able to answer questions about the DAWES-UCL SECReT and CDT Cybersecurity-DAWES scholarships listed above.