UCL EPSRC Centre for Doctoral Training in Intelligent Integrated Imaging in Healthcare


Video analysis for reducing intraoperative injury in robotic-assisted surgery (23036)

Four-Year Funded Studentship - deadline: Friday 19th July 2024

1 July 2024

Supervision Team: Prof. Evangelos Mazomenos & Prof. Dan Stoyanov

A four-year funded MPhil/PhD studentship is available in the UCL Department of Medical Physics and Biomedical Engineering. This position will be hosted at the Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS) and involve interdisciplinary work with clinical teams. Funding will be at least the UCL minimum. Stipend details can be found here. 

The successful candidate will join the UCL CDT in Intelligent, Integrated Imaging in Healthcare (i4health) cohort and benefit from the activities and events organised by the centre.

The PhD studentship is part-funded by Medtronic. 

Project background

Robotic-assisted surgery (RAS) is widely adopted, with over 1.8 million RAS procedures performed every year. RAS is complex and highly variable, and because it is carried out with restricted access to the internal anatomy, operating without injuring critical structures presents technical challenges and demands considerable skill. Approximately 10-15% of UK surgical patients experience adverse events, of which 50% are preventable, while 10,624 adverse events relating to robotic procedures were reported in the US between 2000 and 2013. Robotic-assisted hysterectomy (RAH) is a minimally invasive procedure for treating conditions of the female reproductive system. Urologic and bowel injuries are the most common types of intraoperative injury during RAH, and in many cases these are due to operator (surgeon) errors.

The proposed research will develop surgical video analysis technology based on Artificial Intelligence (AI), focused on reducing the risk of intraoperative injury in RAH. Such technology has the potential to enhance perception and understanding of the surgical environment by identifying and highlighting critical anatomy and surgical tools and by detecting surgical errors in real time. Successful automation of these tasks can power computer-assisted navigation systems that increase safety and efficiency during the delivery of RAH.

Research aims

Objective 1 - Automated segmentation and tracking of surgical tools and critical anatomy: 
To develop machine learning methodologies for surgical video analysis that segment and track, in real time, surgical tools and the critical anatomy typically involved in RAH intraoperative injuries (bladder, ureter, colon). Development and benchmarking will be supported by fully anonymised clinical datasets and by experiments in dry-lab settings with realistic artificial models. Successful completion of this objective will support the development of methods for automated error detection and, ultimately, computer-assisted navigation.

Objective 2 – Automated adverse event detection in RAH: 
To introduce AI technology for automatically detecting critical events (e.g. surgical errors) in RAH and assessing their severity. Guided by established clinical definitions and annotations of surgical errors and their consequences, this task will develop AI technology to detect adverse events during an RAH procedure, with a particular focus on the high-risk phases. By fusing heterogeneous data sources (robot kinematics, video) and semantic information (tool and tissue segmentation from Objective 1), we will extract features expressing tool manipulation and tissue interactions, and use these to design and train novel AI architectures that detect critical events and classify their severity.

Objective 3 – Intelligent computer assisted navigation in RAH:
This final objective will integrate outputs from Objectives 1 and 2 to design intelligent computer-assisted navigation systems. These will focus on highlighting critical anatomy in the intraoperative video and alerting the surgical team to high-risk moments, thereby enhancing safety, minimising the risk of critical errors (long-term injuries) and increasing the overall quality of RAH. Development and initial deployment with open-source robotic platforms will take place in lab settings with phantom models and ex-vivo tissue. Translation of the technology into clinical settings will follow.

Person specification & requirements

Candidates must have a UK first-class or 2:1 honours degree (or international equivalent), preferably in computer science, mathematics, engineering, or a comparable subject.

The ideal applicant will have an MSc in data science, computer vision or automatic control. The student is expected to be willing to work in an interdisciplinary environment and to have a keen interest in biomedical engineering research with a positive impact on the delivery of interventional healthcare.

A good level of mathematical and computing skills, along with solid experience in computer programming (e.g. Python, MATLAB or similar) for data processing and algorithm development, are essential. The student is also expected to demonstrate creative and critical thinking; excellent written and oral communication skills; good working habits; and the ability to take initiative and to work both independently and collaboratively.

Experience with data modelling and analysis, computer vision, machine learning or control engineering, particularly prior exposure to complex medical datasets, would be advantageous but is not essential.


This studentship is available for home and overseas fee-payers. Please see the page on UCL fee status here.

How to apply:

Application Deadline: Friday 19th July 2024

Please complete the following steps to apply.

  • Send an expression of interest, current CV and names of two referees to Prof. Evangelos Mazomenos (e.mazomenos@ucl.ac.uk) and cdtadmin@ucl.ac.uk. Please quote Project Code 23036 in the email subject line.
  • Make a formal application via the UCL application portal. Please select the programme code Medical Imaging RRDMEISING01 and enter Project Code 23036 under ‘Name of Award 1’.

Applications will be assessed on a rolling basis, so please apply as early as possible. If shortlisted, you will be invited for an interview.