
Wellcome / EPSRC Centre for Interventional and Surgical Sciences


PhD Projects

Please note applications for the Academic Year 2018-2019 have now closed. 

Why study with our Centre?

The Centre brings together a wide team of engineering and clinical experts working to develop new surgical technologies. Training is affiliated with the UCL EPSRC Centre for Doctoral Training In Medical Imaging. We offer:

  • An extensive number of research projects to support translational research in surgical and interventional sciences
  • Project collaborations with 9 leading research hospitals and their associated biomedical research centres
  • Connections to government bodies, industry partners and charities
  • The chance to translate novel interventional methods from lab to clinic and the marketplace 


Funding
We offer a number of funded studentships, which cover:

  • Fees: UK/EU fees for three or four years depending on project
  • Stipend: at least £16,851 per annum tax free for the duration of project
  • Project related consumable and travel funds


Who can apply?
We are always on the lookout for the best and brightest students. Successful applicants will:

  • Have achieved or be predicted a first or upper second degree, preferably in a physical science discipline. Applicants with degrees in chemistry and life sciences and demonstrably strong mathematical skills are welcome
  • Have a demonstrable interest in healthcare engineering for surgical and interventional sciences


How do I apply?
To apply, please send a CV and covering letter to Kate Litwinczuk at weiss-vacancies@ucl.ac.uk with the subject heading 'WEISS PhD application'. The application deadline has now passed.

For informal enquiries please contact Dr Matt Clarkson (m.clarkson@ucl.ac.uk).

Please find an overview of currently available PhD projects below:

Image Fusion and Radiation Exposure Monitoring During Complex Aortic Aneurysm Repair

The aim of this project is to develop hybrid image guidance and navigation for endovascular surgery by combining imaging from multi-axis robotically held C-arms with advanced medical image computing, devices and visualisation. The project's specific clinical aim is to aid the interventional management of endoleaks following endovascular aneurysm repair (EVAR) by testing the utility of existing intraoperative Dyna-CT and fusion imaging to allow in situ diagnosis and guide treatment. Intraoperatively, the project will research how to provide automated robotic positioning to imaging planes pre-computed to fit the patient-specific anatomy, precise catheter position tracking, and accurate image fusion utilising stent appearance priors. The operating theatre and its systems' capabilities will be integrated within a theatre-wide view displaying workflow guidance and radiation exposure density using modern 3D graphics rendering engines. These will facilitate dynamic targeting and management of endoleaks within complex 3D arterial territories, reducing overall procedure time, fluoroscopic time and contrast usage, leading to better outcomes for the patient and reduced overall radiation exposure of clinical staff.

Supervisors:

Danail Stoyanov

Tara Mastracci 

Targeting good quality bone with an orthopaedic smart drill

The two most pertinent problems facing the ageing population are keeping physically and mentally active. We aim to keep the ageing population moving through longer-lasting total hip replacement (THR). Demand for THR is forecast to increase substantially in both primary and revision surgery (100% for both by 2030, in the USA alone [Kurtz et al]). The key markets are Europe and the USA, with the US accounting for nearly 50% and Europe around 30% of total procedures worldwide [Singh et al]. A survey covering 18 countries with a total population of 755 million found an average of 175 primary and revision total knee procedures per 100,000 population (range: 8.8 per 100,000 in Romania to 234 per 100,000 in the USA) [Singh et al]. Much of the growth in demand is occurring because younger patients are now likely to outlive their implants and require subsequent revision surgeries, and Kurtz's predicted 100% increase in demand for revision is likely to double when the problem is considered worldwide.

It is possible to map bone quality preoperatively, but not yet to locate good bone during surgery.

Whilst growth in the THR market looks inevitable, the success of the implants used, in terms of longevity, function and ease of surgical implantation, requires pre-competitive research. Research that delivers improved surgical techniques for all surgeons and all types of patients is applicable to all manufacturers of implants, and is therefore deemed "pre-competitive"; it will result in improved patient outcomes. We aim to achieve this using SMART surgical tools.

THR was described as the "operation of the 20th century" and reached the masses in the 1970s, so we are now dealing with the inevitable consequences of worn-out hip replacements 15, 20 and even 30 years down the line. Solving this requires sophisticated solutions to aid the surgeon in performing the (revision) operation a 2nd, 3rd and even 4th time around. We aim to develop smart applications and computer-guided tools for surgeons dealing with complex hip problems.

The aim of the proposed PhD programme is to develop an orthopaedic smart drill chuck to sense good quality bone and guide the surgeon to orientate the drill accordingly. 

The objectives of the proposed PhD are to: 

  1. Design and prototype a force/torque drill chuck sensor to be retrofitted into existing surgical drill tools
  2. Characterise and model bone using animal and human models
  3. Develop an algorithm that combines drill position/orientation/penetration with force/torque information to estimate drill-to-bone friction as a function of bone quality
  4. Integrate the drill tool with off-the-shelf 3D computer guidance instrumentation for position and orientation with respect to bone
  5. Conduct a short proof-of-concept trial to show the efficacy of the new smart drill
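The sensing concept behind objectives 1-3 can be illustrated with a toy calculation. The sketch below (all numbers and thresholds are hypothetical, not from the project) converts force/torque readings into a drilling specific energy, a quantity sometimes used as a proxy for bone density; the real algorithm would be calibrated against the animal and human models of objective 2.

```python
import numpy as np

def specific_energy(thrust_N, torque_Nm, feed_mm_s, rpm, drill_diam_mm):
    """Mechanical specific energy (J/mm^3): energy delivered per unit
    volume of bone removed. Higher values suggest denser bone in this
    illustrative proxy (real calibration is the project's objective 2)."""
    omega = rpm * 2.0 * np.pi / 60.0                    # spindle speed, rad/s
    power_W = thrust_N * (feed_mm_s * 1e-3) + torque_Nm * omega
    area_mm2 = np.pi * (drill_diam_mm / 2.0) ** 2
    removal_rate_mm3_s = area_mm2 * feed_mm_s           # volume removed per second
    return power_W / removal_rate_mm3_s

# Hypothetical readings as the drill passes from dense cortical bone (high
# torque/thrust) into weaker trabecular bone:
torque = np.array([0.08, 0.09, 0.03, 0.02, 0.02])       # N*m
thrust = np.array([40.0, 45.0, 15.0, 10.0, 9.0])        # N
e = specific_energy(thrust, torque, feed_mm_s=1.0, rpm=800, drill_diam_mm=3.2)
good = e > np.median(e)   # flag "good quality" bone along the drilling track
```

In practice the threshold would come from the bone characterisation work, and the estimate would be fused with the drill's tracked position and orientation from objective 4.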

 

Supervisors:

Rui Loureiro

Alister Hart

Johann Henckel

Deformable models for Image Guided Surgery of the Abdomen

Image Guided Surgery (IGS) has been successfully utilised when the organ of interest can be reasonably approximated by a rigid body. Commercial IGS systems exist for brain surgery and orthopaedics for example. In areas such as the abdomen, many challenges still remain, as the organs are naturally deformable and undergo deformation due to pneumoperitoneum, breathing and the physical contact of surgical tools.

The Smart Liver Surgery programme has delivered a prototype laparoscopic guidance system to the Royal Free Hospital, where it has been used on more than 20 patients. This system, like that of a current commercial competitor (CASCination AG), still relies on the rigid-body assumption, despite its well-known limitations. The focus of both systems has been to deliver something to the clinic and establish a working relationship with surgeons.

However, in order to be effective, deformation of the liver must be tackled. 

The aims of this project could include:

  • Developing a PCA-type model of the liver to capture its natural shape variation, and investigating its use in statistically based registration of CT to laparoscopic video
  • Investigating Position-Based Dynamics using the NVIDIA Flex library to model particle-based deformations, and assessing their suitability for liver surgery
  • Investigating different methods to estimate biologically plausible deformations of the liver, from mesh deformation to Finite Elements, by assuming healthy liver tissue to be homogeneous, isotropic and elastic, as reported in [https://www.ncbi.nlm.nih.gov/pubmed/21811013]
  • Collecting liver data to inform the biomechanical model and to build a database of cancerous liver tissue, in order to account for disease and, potentially, disease progression
  • The project could either compare these methods or focus on a particular approach. A key requirement is near real-time performance, so we could leverage local GPU processing or out-source to cloud-based GPU clusters.
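As a sketch of the first bullet, a PCA-type statistical shape model can be built from a set of corresponding surface meshes with a few lines of linear algebra. The data here are synthetic stand-ins for segmented liver surfaces (real meshes would first need point correspondence); in registration, only a handful of mode weights would then be optimised rather than a free-form deformation.

```python
import numpy as np

# Synthetic stand-ins for liver surfaces: n_shapes meshes of n_pts 3-D
# vertices in point correspondence, each flattened to one row of X.
rng = np.random.default_rng(0)
n_shapes, n_pts = 20, 500
mean_true = rng.normal(size=3 * n_pts)
modes_true = rng.normal(size=(2, 3 * n_pts))            # two underlying modes
weights = rng.normal(size=(n_shapes, 2))
X = mean_true + weights @ modes_true + 0.01 * rng.normal(size=(n_shapes, 3 * n_pts))

# PCA via SVD of the centred data matrix.
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
var = s**2 / (n_shapes - 1)                             # variance per mode
n_modes = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.95)) + 1

# Any plausible shape is mu + Vt[:n_modes].T @ b for low-dimensional mode
# weights b; registration optimises b instead of every vertex position.
b = np.zeros(n_modes)
instance = mu + Vt[:n_modes].T @ b                      # b = 0 gives the mean
```

Constraining the registration to this low-dimensional space is what makes a statistically based CT-to-video alignment tractable.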

If successful, the project will:

  • Deliver a state-of-the-art platform for liver surgery in real time
  • Provide a model for image-guided surgery that takes liver deformation into account and is strongly anchored in the physical properties of the liver, making the results easier to interpret

Supervisors:

Matt Clarkson/Vanessa Diaz

Kurinchi Gurusamy / Brian Davidson

Immersive Virtual Reality Haptic Simulator for Improving Surgical Outcomes in Complex Hip Revision

Hip surgery and tumour removal remain the mainstay of treatment for complex hip revisions. However, success depends on appropriate planning, which includes using computed tomography (CT), among other imaging techniques such as magnetic resonance imaging (MRI), to visualise the bone, tissues, the tumour and its characteristics. At present this provides a 2D image of a 3D problem. Virtual Reality (VR) now makes it possible to construct visual and physical 3D models of an individual patient's anatomy, including the tumour. By incorporating techniques that analyse signal characteristics, highly accurate models can be created that allow physical interaction such as haptic (tactile and kinaesthetic) feedback to be added to the model.

This will result in a highly accurate model providing far greater detail for surgical planning and allowing a surgical 'dry run' before the actual procedure, which will improve patient outcomes and encourage simulation training for more junior surgeons. These models could also be used to inform patients during pre-surgery consultations, giving them a better understanding of the cancer and what the surgery entails. Although the idea of both virtual and augmented reality has been in the public consciousness for some time, there is little evidence of it being used beyond the acquisition of basic surgical techniques. Recent technological advances have enabled haptic feedback that creates a more realistic surgical environment by mimicking tissue textures, weight and dynamics.

The models created so far have proven useful in developing further novel surgical skills. These concepts can be made more clinically relevant by directly applying expertise from computer programming and engineering to build personalised models of a patient's anatomy. Pre-operative surgical planning can then be more individualised, allowing surgeons to better understand the oncological and surgical anatomy (how to create the best maps to guide the surgeon's drill and avoid damaging other tissue), which underpins the success of surgery and survival outcomes. Currently no model exists that simulates complex revision hip surgery, and given the steep learning curve, a realistic interactive model incorporating bone and soft tissues will be a valuable training aid.

Aim and objectives

The project aim is to develop an immersive VR system for the training of complex revision hip surgery that will: 

  1. Use CT/MRI data to create the first patient-specific interactive virtual (and augmented) reality model for surgical planning and training;
  2. Provide realistic step-by-step 3D visualisation and tactile feel of the tissues to the user through a high-fidelity haptic feedback robotic system;
  3. Assess the model and VR operation accuracy and utility against the patient's recorded surgery in a live clinical environment.

 

Supervisors:

Rui Loureiro

Alister Hart

Translating Multiscale Simulation Tools to the Clinic: An in-vitro, in-vivo and in-silico approach for Aortic Treatment

Aortic interventions are high on the list of critical vascular procedures. Within the vast range of aortic conditions, aortic dissections (AD) are particularly risky, with high morbidity and mortality rates. Computational fluid dynamics (CFD) can provide insight into the progression of AD and aid clinical decisions; however, oversimplified modelling assumptions and high computational cost compromise the accuracy of the information and impede clinical translation. To overcome these limitations, we have developed a patient-specific multiscale CFD approach coupled to Windkessel boundary conditions and accounting for wall compliance, an entirely new approach in the AD literature.

In this project, this novel computational framework will be used in conjunction with a unique experimental setup to assess blood flow, pressure and other haemodynamic markers of interest in 3D-printed, patient-specific dissected aortae. This setup is unique in the UK and has been the object of a BHF grant.

The idea is that for each patient, a computational model will be created based on in-vivo data and validated via the in-vitro setup. After initial calibration of the simulation model, different interventional strategies will be simulated in-silico, creating unique patient scenarios, including the potential formation of thrombus, in order to provide clinicians with guidance about possible interventional strategies for each patient. After a final interventional strategy has been defined, a final run of the in-vitro setup will be performed for each patient, mimicking the chosen intervention, in order to provide clinicians with a comprehensive and confident analysis for that specific patient.

This PhD project will create a unique coupled simulation environment to simulate type B Aortic Dissections in-vitro and in-silico to provide clinical support for clinicians.

Supervisors:

Vanessa Diaz

Shervanthi Homer-Vanniasinkam

Stavroula Balabani

Organs-on-Chip: an in vitro and in silico microfluidic platform for vascular remodeling, atherosclerotic disease and drug discovery

This unique project will engineer an in-silico and in-vitro platform mimicking the atherosclerotic endothelium, to study permeability, inflammation and plaque formation using a combination of mathematical models and microfluidics. This atherosclerosis 'organ-on-a-chip' will produce results under patient-specific, pulsatile shear conditions. The platform comprises a sophisticated flow delivery system to generate physiological/pulsatile flows and an endothelium culture analogue, used with microscopic imaging techniques to study cell-cell interaction, the effects of shear on the endothelium, and macromolecule transport through the endothelium. Experiments will be combined with mathematical models, used in conjunction with patient data to provide a personalised, multiscale approach to vascular disease.

The aim of the study is to provide an integrated platform for personalised medicine, and in particular to examine vascular physiology, atherosclerosis and pharmacokinetics. This will comprise an organ-on-a-chip approach combined with mathematical modelling. The organ-on-a-chip paradigm has been identified as a potentially disruptive technology, capable of producing fundamental change in healthcare.

An engineering graduate is required with good knowledge of fluid mechanics, strong analytical skills, a passion for experimental work and preferably some experience of design, instrumentation and Matlab. The student will design and develop microfluidic tools to study cell-cell interaction, endothelium permeability, blood flow dynamics and rheology, and will also work on modelling of endothelial cell behaviour and CFD, supported by an already established group of modellers based in Mechanical Engineering. Since this is a truly multi-disciplinary project, the ideal student will be a self-starter and will feel comfortable working with specialists in different disciplines. For this, excellent communication and interpersonal skills are essential.

Supervisors 

Stavroula Balabani

Vanessa Diaz

Janice Tsui

Ines Pineda Torra 

Virtual Reality, Patient-Specific Simulations and Experiments to Tackle Vascular Malformations: A Proof of Concept

In lay terms, vascular malformations (VMs) are abnormal connections in which arteries and veins meet directly. Currently, common interventional treatment options for patients with VMs include embolotherapy (occlusion of the VM vessels) and surgical debulking. Unfortunately, it is impossible to predict the outcome of these procedures, including symptom relief and recurrence rate.

Furthermore, these procedures are often associated with significant and sometimes life changing risks including end organ ischaemia and infarction, bleeding, nerve injury and thromboembolism. Therefore, at present there is no guideline or consensus to help clinicians to decide on who should receive interventional treatment.

This project will develop a novel proof-of-concept to demonstrate how a set of scientific principles and technical tools can be used to help clinicians better plan and treat VMs.

The use of VR, patient-specific simulations and in-vitro emulators (using a unique patient-specific physical testing facility for blood flow analyses developed by the applicants) is a combined approach that has never been used before, for VMs or indeed more broadly.

We propose to use a sophisticated set of tools and models developed in-house to explore the use of this novel technological platform in a clinical setting, in an area of vascular surgery that is clinically underdeveloped. Although vascular malformations in the brain are a well-defined and challenging clinical area, the same cannot be said for vascular malformations elsewhere in the body. First, they do not figure prominently in the clinical vascular training curriculum. Second, complications arise from heterogeneity in presentation and patient population, as well as difficulties in deciding the best patient-specific treatment due to extremely abnormal vasculature and difficulties in assessing blood flow patterns caused by limitations in the clinical imaging protocol.

Supervisor

Dr Vanessa Diaz

Image guided proton radiotherapy for lung cancer using proton imaging

This project aims to improve the precision of proton radiotherapy for lung tumours. Lung tumours are extremely difficult to treat with radiotherapy due to breathing motion and the presence of critical organs at risk (heart, spine, lungs) near the tumour. This explains the slow improvement in 5-year survival in recent decades, from 12.2% in 1975 to 18.7% in 2016.

Proton therapy promises to improve treatment accuracy due to its dose deposition profile, which is highly concentrated at the distal end of the proton range. However, without accurate image guidance, these benefits may turn into severe detriments, as the dose may be delivered to critical healthy organs. Proton radiography prototypes exist that allow projection images to be acquired in-room at a high rate (1-5 millisecond acquisition time).

The aim of this work is to adapt the proton therapy treatment using radiographic images by alternating rapidly between treatment and imaging modes. This is expected to drastically improve treatment accuracy.

The project falls directly within the EPSRC remit, as it sits at the frontier of applied research (the development of proton radiography and proton radiotherapy for lung tumours) and translational research (application in the clinical environment with an early prototype). Experimental evaluation on anthropomorphic phantoms will help prove the validity of the technique and the potential dose benefit.

Supervisor

Professor Gary Royle 

Deep Learning for Automated Registration in Laparoscopic Liver Surgery

Liver cancer is a major global health problem affecting an estimated 1.4 million people every year (2012). Surgery is the main curative option. Despite the well-known advantages of keyhole surgery, only a small percentage of patients (5-30%) can be given this choice, due to the increased difficulty and associated risks relative to open surgery.

Image-guidance systems, including augmented reality, have been proposed as a way to assist the surgeon and reduce the level of risk, thereby allowing a higher percentage of patients to benefit from keyhole surgery.

CMIC has developed a keyhole surgery system that can display information from pre-operative scans such as Computed Tomography (CT) along with the live video image. The proposed CDT project will extend the existing system by developing novel learning algorithms for tissue classification, tool identification, tool tracking or real-time registration. 

Our aim is to produce the first clinically viable, accurate and easy to use system. This would impact patients undergoing liver surgery, but in addition, the technology would also be broadly applicable to clinical procedures on the pancreas, kidneys and gall bladder. 

The project fits within the EPSRC early stage of algorithm development (TRL 1-3), as it would entail first proof of concept of the basic science, developing the core learning algorithms. However, the proposal would additionally benefit from the wider programme of work in WEISS and CMIC, which includes an NIHR i4i Product Development Award, enabling data collection from patient cases.

Supervisor

Dr Matt Clarkson

Photoacoustic Imaging of Ablation for guiding and monitoring liver cancer treatment

Liver cancer is a major global health problem affecting an estimated 1.4 million people every year. Surgery is the main curative therapy but is suitable for only a minority of patients. Local ablative therapies (laser, radiofrequency, microwave, irreversible electroporation) can also be curative but are associated with a high rate of local recurrence. This is related to the difficulty of determining the adequacy of local treatment. Therefore a critical unmet need in liver cancer therapy is the ability to track the ablation zone in real-time during treatment in order to guide the procedure and assess efficacy. The aim of this project is to address this need by developing novel photoacoustic imaging techniques based on laser generated ultrasound waves for visualising ablation zones in tissue. 

The project will involve designing laser based laboratory apparatus and image processing techniques for imaging locally ablated tissues and perfused organs and ultimately measurements in vivo using a percutaneously placed instrument. If successful, this approach would reduce the local recurrence following local ablation therapy for liver cancer leading to improved oncological outcomes.

This project involves advanced optical engineering and instrumentation design, algorithm development, image processing and tissue characterisation studies of liver ablation therapy with a clear path to clinical application.

Supervisor

Professor Paul Beard

Reducing Contrast and Radiation using Artificial Intelligence in Endovascular Surgery

The project will develop algorithms to enhance the visibility of anatomical and device structures in fluoroscopy during endovascular surgery. We will develop image analysis methods driven by machine learning and artificial intelligence to highlight vessels and bifurcations without extensive use of image contrast agents.

We will also propagate information in time in order to reduce the radiation exposure to the entire team. The potential impact is to reduce the adverse effects of procedures within an interventional suite both for the patient and the entire team.

Supervisor

Dr Danail Stoyanov

Machine-learning-based Planning and Evaluation of Image-guided Ablation of Solid-Organ Cancers

The overall aim of this project is to explore the application of machine learning techniques to develop the underpinnings of clinical tools for planning and evaluating minimally-invasive tumour ablation as a treatment for cancer in the liver, kidney, and other abdominal organs. In particular, the student will analyse retrospective imaging data collected before and after ablative procedures, with a view to identifying and learning relationships between the characteristics of the ablated tissue region and the incidence of short-term and long-term treatment outcomes and treatment-related complications. Where clear causal relationships are established, the focus will move to determining which treatment parameters, such as tumour location, the size of the treatment margin, dose distribution, etc., are associated with treatment success or failure, and using this information to develop an algorithm that predicts the optimal treatment parameters for an individual patient given input image data (with additional diagnostic data as appropriate). In terms of methodology, the project will draw upon state-of-the-art methods in machine learning, combined with established and novel medical image analysis and computational modelling techniques (for instance, biophysical modelling of thermal tumour ablation).
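The learning step described above can be sketched in a few lines. The example below is entirely synthetic (features, coefficients and outcome labels are invented for illustration): a logistic model is fitted by gradient descent to relate hypothetical treatment parameters (tumour diameter, minimum ablation margin) to a binary recurrence outcome, recovering the direction of each effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400
# Hypothetical per-case treatment parameters (units: mm).
diam = rng.uniform(10, 50, n)          # tumour diameter
margin = rng.uniform(0, 10, n)         # minimum ablation margin
# Synthetic ground truth: recurrence more likely for larger tumours and
# thinner margins (coefficients invented for the sketch).
logit = 0.08 * diam - 0.6 * margin - 0.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Logistic regression on standardised features, fitted by gradient descent.
X = np.column_stack([np.ones(n),
                     (diam - diam.mean()) / diam.std(),
                     (margin - margin.mean()) / margin.std()])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n       # gradient of the mean log-loss

# Learned signs match the generating model: w[1] > 0 (diameter raises
# recurrence risk) and w[2] < 0 (a larger margin lowers it).
```

Inverting such a fitted model over the controllable parameters (here, the margin) is one simple way to suggest patient-specific treatment settings; the project itself would of course use richer imaging-derived features and state-of-the-art learners.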

Supervisors

Dr Dean Barratt 

Dr Steve Bandula

Dr Yipeng Hu

 

Improving Ultrasound Imaging using Machine Learning Techniques that incorporate Expert Behaviours

Ultrasound is the most widely performed medical imaging technique in clinical practice, but is highly operator dependent and requires significant training, skill and experience to perform competently. As a result, both image quality and the reliability of diagnostic information derived from ultrasound images can vary considerably, even between trained operators.

Recent developments in machine learning have demonstrated the possibility of learning from data provided by "domain experts", which can be applied to help non-expert operators learn and perform complicated tasks, enabling them to achieve performance levels comparable to those of an expert practitioner. Drawing on these techniques and translating them into a clinical setting, one can envisage a computer-assisted system, trained on example ultrasound scans performed by an expert operator, that suggests actions to improve a novice operator's ability to navigate. For instance, instructions provided by the system would enable them to place the ultrasound probe at the optimal location on the skin surface and orient the probe to obtain high-quality, standardised views of anatomical structures from which key clinical measurements can be obtained.
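One plausible route to such a system, sketched below on a toy 2-D "probe pose" (all data synthetic, the linear policy purely illustrative), is behavioural cloning: fit a policy to expert pose-correction demonstrations by least squares, then let a simulated novice follow the learned suggestions step by step.

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.zeros(2)                     # hypothetical optimal probe pose (2-D toy)

# Expert demonstrations: from random poses, the expert nudges the probe 30%
# of the way toward the optimal pose, with a little hand jitter.
poses = rng.uniform(-1.0, 1.0, size=(500, 2))
actions = 0.3 * (target - poses) + 0.01 * rng.normal(size=(500, 2))

# Behavioural cloning: least-squares fit of action = [pose, 1] @ W.
A = np.column_stack([poses, np.ones(len(poses))])
W, *_ = np.linalg.lstsq(A, actions, rcond=None)

# A novice starting far from the target follows the cloned suggestions.
p = np.array([0.9, -0.8])
for _ in range(25):
    p = p + np.append(p, 1.0) @ W        # apply the suggested correction
```

After a few dozen suggested corrections the simulated novice converges close to the expert's target pose; a real system would replace the toy pose vector with image-derived state and a far richer learned policy.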

The overall aims of this project are to design, develop, and test such a system, with a focus on fetal imaging. 

Supervisors

Dr Dean Barratt

Mr Raffaele Napolitano

Dr Yipeng Hu 

Photoacoustic imaging with ultrasound-assisted sound speed correction

Photoacoustic imaging is a novel hybrid imaging modality that relies on the generation of ultrasound waves by the absorption of short laser pulses in biological tissue. Its fundamental advantage derives from the fact that it encodes tissue optical absorption on to ultrasonic waves which are minimally scattered in soft tissues. It thus provides both the high spectrally selective contrast of optical imaging techniques and the high resolution of ultrasound. 

Photoacoustic imaging is particularly well suited to visualising vascular anatomy due to the strong absorption exhibited by haemoglobin. As a consequence, it provides significantly higher label-free vascular contrast than existing imaging modalities such as ultrasound. This offers new opportunities for delineating tumour margins to aid cancer treatment planning, identifying major blood vessels to help guide fetal or laparoscopic surgery and monitoring minimally invasive ablative therapies used in cardiovascular medicine.

Whilst the advances in PA imaging are encouraging, image quality in many clinical applications can be limited by natural variation in sound speed between tissue types. These variations aberrate the photoacoustic waves passing through the tissue, resulting in artefacts and blurring in the image. Incorporating additional information about the sound speed variations into the image reconstruction algorithm is expected to improve image quality.
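To make the sound-speed idea concrete, the toy example below (geometry and speeds invented for illustration) localises a point photoacoustic source from straight-ray arrival times through a two-layer medium. Reconstructing with a single conventional speed biases the estimated source depth, while using the layered speed model recovers the true position:

```python
import numpy as np

c1, c2 = 1400.0, 1600.0                  # two-layer sound speeds (m/s), invented
d = 0.015                                # layer interface depth (m)
sensors = np.linspace(-0.02, 0.02, 16)   # sensor x positions on the y = 0 line
src = np.array([0.004, 0.030])           # true source position (x, y), metres

def tof(x, y, layered=True):
    """Straight-ray time of flight from a source at (x, y) to each sensor."""
    L = np.hypot(x - sensors, y)
    if not layered:
        return L / 1540.0                # conventional uniform tissue speed
    f1 = np.clip(d / y, 0.0, 1.0)        # fraction of each ray inside layer 1
    return L * (f1 / c1 + (1.0 - f1) / c2)

t_meas = tof(*src)                       # "measured" arrival times (layered truth)

# Grid search for the source position that best explains the arrival times.
xs = np.linspace(-0.01, 0.01, 41)
ys = np.linspace(0.02, 0.04, 41)
def locate(layered):
    best = min(((np.sum((tof(x, y, layered) - t_meas) ** 2), x, y)
                for x in xs for y in ys))
    return np.array(best[1:])

err_naive = np.linalg.norm(locate(False) - src)   # uniform-speed reconstruction
err_corr = np.linalg.norm(locate(True) - src)     # sound-speed-corrected
```

Full photoacoustic reconstruction works with wavefields rather than single arrival times, but the same principle applies: feeding a better sound speed map into the reconstruction reduces blurring and positional bias.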

The outcome of the project will be an image reconstruction framework that provides improved image fidelity and resolution by accounting for sound speed variations. This will benefit almost any clinical application of PA imaging by enabling the visualisation of previously indistinguishable anatomical structures and providing a more accurate representation of their morphology; for example, it will provide more precise visualisation of tumour margins, yielding more accurate guidance of their surgical excision in laparoscopic liver surgery. More accurate image reconstruction will also contribute to the development of functional photoacoustic imaging, for which high spatial fidelity is an essential prerequisite and currently a limiting factor.

The project will involve working with acoustic propagation models, the development of novel image reconstruction methods and their computational implementation, as well as generating and/or working with measured data acquired through laboratory and clinical studies. It will suit students with an interest in wave physics and computational methods and a desire to see their work translated to practical application in medicine.

Supervisors

Professor Paul Beard

Dr Andrew Plumb

Dr Ben Cox / Dr Bradley Treeby 

Flexible, wireless pressure sensor for endoleak monitoring

Abdominal aortic aneurysm (AAA) is a condition in which the artery delivering blood from the heart to the rest of the body dilates or 'bulges'. AAA is particularly severe in the elderly; AAA ruptures cause 6,000 deaths per year in the UK alone, and in 2014 the NHS introduced AAA screening. Treatment can be either open surgery or, increasingly, minimally invasive endovascular aneurysm repair (EVAR), in which the aneurysm is excluded by placing a stent-graft within it. Currently, over 50% of cases can be treated by EVAR, at an NHS cost of around £20k per patient.

Despite the success of EVAR in treating AAA, there is a condition called 'endoleak' which occurs in 30% of EVARs, in which blood continues to leak into the aneurysm sac. If an endoleak results in an increase in the sac size and pressure, then this can lead to sac rupture which can be fatal. Hence, there is a real need for doctors to be able to monitor and measure sac pressures after an aneurysm is treated by EVAR. This project is directly motivated to address this major clinical need.

The work will comprise making the pressure sensors, integrating the sensor with a wireless link, and optimising the sensor design using clinical guidance regarding endoleaks.

Supervisors

Dr Manish Tiwari 

Dr Shervanthi Homer-Vanniasinkam 

Smart gloves for obstetric surgery

Despite being among the most common interventions, obstetric surgery (Caesarean section, instrumental delivery and perineal repair) has seen no major technological developments since the 1970s, when Ventouse suction cups were introduced. Factors such as rising maternal age, fertility techniques and more liberal use of Caesarean delivery have made obstetrics an acute surgical specialty. Over a third of UK women deliver by Caesarean section (>50% as emergencies); another one-fifth have an instrumental vaginal delivery with associated perineal repair.

There is little research on improving acute obstetric surgical care for the mother and fetus, particularly in engineering, medical physics and imaging. Interventions rely on digital vaginal examination and assessment of fetal head position, cervical dilatation and advancement of the fetal head through the pelvis during birth. Last year the NHS Litigation Authority (NHSLA) spent £1.1 billion on clinical negligence claims, of which 41% (almost £0.5 billion) was for obstetric claims, mainly paid for brain damage in children resulting from complications during labour and delivery. The two most frequently encountered needs are: 1. safer disimpaction of the fetal head at Caesarean section; 2. manual rotation of the fetal head when there is malposition, in addition to or instead of ventouse or forceps.

Given this clinical need, we seek to design a new class of smart, sensorised surgical gloves that can be used to regulate the manual force applied to the fetus during vaginal delivery. Using engineering expertise in high-resolution 3D printing and materials processing, an array of flexible force sensors will be integrated on surgical gloves to provide real-time force feedback and tactile imaging of the fetus. Such a sensorised 'smart glove' will enable safe rotation of the fetal head into a favourable position, either for the mother to push it out or for simple extraction with ventouse or forceps, as opposed to rotating with an instrument, which can be detrimental to the baby (e.g. scalp haemorrhage, facial palsy) and the mother (pelvic floor injury).

The PhD student will work with a supervisory team from UCL Engineering and UCL surgical and interventional sciences to develop the smart surgical gloves. The clinical supervisors have a lead role at UCLH, a specialist centre for women with complex pregnancies and placental attachment disorders such as placenta praevia.

The aim is to improve neonatal outcomes through new instruments for surgical vaginal delivery, and to reduce maternal morbidity. 

Supervisors

Dr Manish Tiwari and Dr Adrien Desjardins 

Dr Anna David 

Dr Dimitrios Siassakos

Development of ultrasensitive sensor arrays for deep tissue photoacoustic imaging

Photoacoustic imaging is a novel hybrid imaging modality that relies on the generation of ultrasound waves by the absorption of short laser pulses in biological tissue. Its fundamental advantage derives from the fact that it encodes tissue optical absorption on to ultrasonic waves which are minimally scattered in soft tissues. It thus provides both the high spectrally selective contrast of optical imaging techniques and the high resolution of ultrasound. Photoacoustic imaging is particularly well suited to visualising vascular anatomy due to the strong absorption exhibited by haemoglobin. As a consequence, it provides significantly higher label-free vascular contrast than existing imaging modalities such as ultrasound. This offers new opportunities for delineating tumour margins to aid cancer treatment planning, identifying major blood vessels to help guide fetal or laparoscopic surgery and monitoring minimally invasive ablative therapies used in cardiovascular medicine.

Whilst these advances in photoacoustic (PA) imaging are encouraging, several significant instrumentation-related challenges remain. One of these relates to the detection of PA signals. Piezoelectric receivers are most widely used but have several shortcomings. Achieving the high sensitivity required to detect the extremely weak PA signals generated at centimetre-scale depths in tissue requires large element sizes and resonant material compositions, leading to narrow directivity and poor frequency response characteristics. These factors degrade image quality by introducing artefacts, blurring and distortion.

Optical ultrasound sensors offer an alternative that can address these limitations. One method that has shown particular promise is the use of a polymer-film Fabry-Perot (FP) etalon as an ultrasound sensor, an approach pioneered at UCL. This can provide exquisite image quality, a consequence of the very small effective acoustic element size (<50μm) and the uniform broadband frequency response it provides. However, limited sensitivity makes it challenging to achieve penetration depths beyond 5-10mm. This is insufficient for a number of important potential clinical applications, such as visualising tumour margins deep within the liver to guide surgical excision, identifying non-superficial tumours within the breast to aid pre-treatment planning, or assessing cancerous nodes in the neck in some patient groups.

The aim of the project is to address these limitations by developing a novel instrument that exploits a new type of ultrasound sensor based on an optical microresonator. By virtue of an extremely high Q factor, this type of sensor offers the prospect of two orders of magnitude higher sensitivity than the FP etalon sensor, enabling penetration depths of 2-3cm to be achieved. This would represent a step change in PA imaging performance and pave the way for in vivo high-resolution human imaging at depths currently unattainable, thereby extending clinical applicability. The project will involve the fabrication of novel polymer optical microresonator sensors, the development of advanced parallelised optical read-out schemes for real-time image acquisition, and the engineering of a prototype imaging instrument for use in clinical studies.

The project is largely experimental and will suit students interested in working with optical sensors, pulsed and CW lasers, fibre optics, ultrasonic devices, electronic control and time-resolved optical readout systems, who have a desire to see their work translated to practical application in medicine.

Supervisors

Professor Paul Beard

Dr Andrew Plumb

Dr James Guggenheim

Modelling and Simulation Tools for Aortic Dissection Treatment

The aim is to develop tools for pre- and post-operative in-silico planning and treatment of aortic dissections. This project will be an exemplar application, providing a blueprint and proof of concept for the use of in-silico tools in the personalised management of aortic diseases in general (applicable also to other pathologies). The clinical impact and viability of these technologies will be tested via our partner hospitals in the UK and Europe. The models will be validated using a sophisticated experimental setup available within the group, including particle image velocimetry (PIV).

Expected Results:

· A patient-specific simulation framework delivering simulation of Aortic Dissections

· Proof-of-concept of the use of these technologies in the clinic

· Proof of feasibility of the use of these models in surgical planning

· Validated set of results using a patient-specific physical platform.

Supervisor

Vanessa Diaz

Multiscale Modelling, Simulation and Machine Learning Tools to Predict Graft Failure

A 3-year PhD studentship is available at UCL Department of Mechanical Engineering for an enthusiastic student interested in Multiscale Modelling and Simulation and Machine Learning Tools to Understand and Predict Graft Failure.

About 40% of lower extremity vein grafts occlude or develop significant stenosis within the first year after implantation. More complex procedures to the calf vessels usually carry a slightly worse prognosis, with resultant serious morbidity and mortality. In a clinical landscape of ever-increasing and more aggressive bypass procedures, novel engineering simulation tools to understand venous adaptation to the arterial environment, together with classification tools to understand patient-specific variability, would help prevent a significant number of excess complications, deaths and re-intervention costs. This project will create a flexible multi-scale modelling framework to engineer better outcomes for vascular patients undergoing bypass procedures (vein grafts) and will harness the power of machine learning tools to understand individual variability, classify patients' risk and predict individual patients' outcomes.
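As a minimal illustration of the risk-classification component described above (an assumed approach for illustration, not the project's actual pipeline), a simple logistic model could map simulation-derived haemodynamic features to a graft-failure probability; all names and the synthetic features here are hypothetical:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, steps=2000):
    """Minimal logistic-regression risk classifier fitted by gradient descent.
    X: (n_patients, n_features) feature matrix (e.g. simulation-derived
    haemodynamic indices); y: binary outcome labels (1 = graft failure)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted failure probability
        g = p - y                                # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict_risk(X, w, b):
    """Per-patient probability of graft failure under the fitted model."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))
```

In practice the feature set, model class and validation strategy would be driven by the clinical data; this sketch only shows the shape of the patient-risk classification step.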

Supervisor

Vanessa Diaz

Past PhD Projects are listed below:

Clinical translation and development of a novel robotic microscope for epilepsy surgery

Background: Epilepsy has an incidence of 30,000 new cases in the UK per year. One third of these patients will not achieve adequate seizure control through medication alone. Surgery can provide a cure when the source of the seizures can be identified, yet a significant proportion of patients do not have a structural abnormality on MRI scans. Safe surgical resection requires meticulous planning to delineate the region for resection whilst ensuring eloquent regions of the brain are not compromised. Current imaging technologies, such as functional MRI and DTI tractography, provide the surgeon with information about critical cortical and subcortical structures. Presentation of this information to the neurosurgeon at critical times during surgery has been shown to predict and reduce post-operative neurological deficits.

 

Translational technology: The robotically operated video optical telescopic-microscope (ROVOT-M, Synaptive Medical) is a novel, fully digital microscope mounted on a robotic arm that automatically aligns to surgical instruments, providing optimal viewing angles. The device provides high-fidelity visualisation of the entire surgical field with hands-free manipulation, replacing the need for a conventional microscope. We propose the clinical translation of this device into surgical practice for the following potential applications:

1) The fully digital microscope has the ability to capture wavelengths outside of the visual spectrum. Different biological tissues have variations in optical penetration and scatter which could act as a spectral fingerprint based on the tissue composition. Some disease processes are indistinguishable from normal brain at a macroscopic level, such as the margins of a tumour or developmental cortical malformations. It remains a challenge for the operating surgeon to identify potential remnants of pathological tissue that visually appears the same as normal brain when aiming for gross total resection. Intra-operative hyperspectral imaging may allow in vivo pathological detection at a molecular level and has the potential to provide non-destructive histological diagnosis or grading through spectral feature detection without the need for a Raman laser. The fingerprint could be further characterised by the administration of pathology specific fluorescence agents such as 5-ALA used in glioma surgery.

2) The robotic arm utilises an optical tracking system that aligns the digital microscope to the working channel. This provides the potential to develop novel minimally invasive 'keyhole' operative corridors with viewing angles that would otherwise not be possible or ergonomic for the operating surgeon. Through interplay with the integrated neuronavigation system the device has the potential to provide an augmented reality display to the surgeon. Cortical parcellations can be overlaid onto the operative view with the potential for real time anatomical segmentation and brain shift correction. Subcortical structures through DTI tractography can be visualised in the operative field and preserved. Automatic instrument segmentation and tracking may provide the ability to correct for tremor, monitor tissue handling and evaluate surgical performance.

 

Applicability: Here we describe the clinical translation of this technology for epilepsy surgery, in order to maximise safe resection and provide the best chance of seizure-free outcomes. These principles, however, are applicable to any image-guided micro-neurosurgery, such as neuro-oncology and spinal surgery.
 

Supervisors:

Sebastien Ourselin

John Duncan

Vejay Vakharia

Machine learning in the clinical and semiological evaluation of patients with epilepsy

Background: Epilepsy affects 1% of the population and has significant social, psychological, cognitive and psychiatric sequelae for patients. Medically refractory epilepsy can be treated surgically if the focus in the brain can be defined. The evaluation of patients with epilepsy is complex and involves the integration of clinical history, seizure semiology, neurophysiology (EEG) and multimodal imaging. Each of these investigations requires an expert in the respective medical discipline to interpret the results. Combining this information to define the seizure onset zone remains a challenge. EpiNav (CMIC, UCL) is a novel software platform that allows 3D model generation and multimodal imaging to be registered in the same stereotactic space for the purpose of stereoelectroencephalography (SEEG) electrode planning. We aim to use machine learning to integrate the clinical history, seizure semiology and seizure motion tracking with this platform to help predict putative seizure onset zones and suggest targets for electrode placement.

Translational technology: 3D multimodal imaging greatly improves spatial understanding of structural brain abnormalities in relation to functional, ictal and interictal imaging modalities. Current use of the software centres on both manual and automated SEEG planning, with electrode placement based on multi-disciplinary assessment of seizure semiology and neurophysiological findings. We aim to integrate machine learning to extend the use of EpiNav as a platform for clinical assessment of the seizure phenotype, complementing and enhancing its current role, by:
1) Developing a library that maps auras and features of epileptic seizures, together with scalp EEG findings, to the involvement of specific parts of the brain using our library of 160 parcellated regions. By assigning weights to specific semiological features we aim to build up a 3D brain map of putative seizure onset zones that will be further refined by multimodal imaging and scalp EEG. This will greatly enhance the formulation of an intracranial sampling strategy, with EEG electrodes placed in the brain.
2) Characterising seizure movements using Kinect (Microsoft): we intend to use the motion-tracking technology integrated within the Kinect to track seizure movements in patients who have undergone SEEG and scalp implantations.
3) Including facial recognition software, extended to analyse videos, and sound detection algorithms, such that with a large enough cohort of patients, and utilising machine learning, we hope to characterise and localise stereotyped seizure phenotypes to specific brain regions.
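The weighted accumulation over brain regions described in point 1 can be illustrated with a toy sketch. The feature-to-region weights below are invented for illustration only; the real mapping over the 160 parcellated regions would be curated by experts and refined from data:

```python
# Hypothetical semiological feature -> region weights (illustrative values;
# not derived from any clinical mapping).
SEMIOLOGY_MAP = {
    "epigastric aura": {"mesial temporal": 0.8, "insula": 0.5},
    "visual aura": {"occipital": 0.9},
    "tonic posturing": {"supplementary motor area": 0.7, "frontal": 0.4},
}

def seizure_onset_scores(observed_features):
    """Accumulate per-region evidence from the observed semiological
    features and return regions ranked by total weight."""
    scores = {}
    for feat in observed_features:
        for region, w in SEMIOLOGY_MAP.get(feat, {}).items():
            scores[region] = scores.get(region, 0.0) + w
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The ranked regions would then act as spatial priors to be refined by multimodal imaging and scalp EEG, as described above.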

Applicability: The Epilepsy service at NHNN has a large throughput of patients for both non-invasive and invasive evaluation making it an ideal environment for machine learning. EpiNav is currently used as a sophisticated platform to integrate multimodal imaging information and plan electrode placement and model surgical resection zones. Utilisation of EpiNav with Kinect motion detection and a semiological library will provide a novel platform with the ability to potentially interpret the clinical features of a seizure with the neurophysiological recordings and suggest potential seizure onset zones. These zones will act as spatial priors to help suggest potential targets for computer-assisted planning of SEEG electrodes.

Supervisors:

Sebastien Ourselin

John Duncan

Parashkev Nachev

Design, Modelling and Control of a Multi-Arm Snake Robot for Optic Nerve Interventions

Potentially blinding diseases of the optic nerve affect millions of people worldwide. The inaccessibility of the optic nerve, however, located deep behind the eye globe, within the eye socket, makes interventions at that location almost impossible. Therefore, despite the variety of micro-tools and regenerative medicine approaches that are being developed, their delivery to the location of interest is still elusive and new tools are required.
This PhD project is part of a collaborative research endeavour between Moorfields Eye Hospital and University College London to create a multi-arm exteroceptive teleoperated snake robot for ultra-minimally invasive optic nerve regeneration via stem cell delivery. The robot will shift established surgical boundaries by navigating peri-ocularly, i.e., flexing around the eye globe and via the orbital muscles, to reach the optic nerve and deploy regenerative stem cells.
The proposed robot will be one of the first multi-arm redundant miniature flexible robots. It will have 6 degrees-of-freedom (DoF) at the tip via its shape-controllable shaft, and multiple arms. Three tools, each with 2DoF, will be housed within the flexible functionalised robot sheath to hold a camera and two instruments.
A student with a mechanical engineering background will conduct PhD research on continuum/flexible robots that address the strict robot/tissue interaction constraints of micro-surgical applications. The important unexplored topic on which this PhD research will focus is coupled multi-arm flexible robot modelling. The innovative multi-arm characteristic of the robot poses scientific challenges, as the final shape is informed not only by the mechanics of the backbone but also by those of each individual arm. Thus, the coupled mechanics of all flexible robot components must be modelled, and forward/inverse kinematics and dynamics solutions need to be found. The student will be involved in both the research and the implementation of the project, improving their theoretical knowledge in robotics as well as their research-translation capabilities.
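As a concrete starting point, the kinematics of continuum robots are often approximated with the piecewise constant-curvature model. The sketch below (an illustrative textbook simplification, not the coupled multi-arm model the project will develop; function names are hypothetical) composes per-segment homogeneous transforms to obtain the tip pose:

```python
import numpy as np

def cc_segment(kappa, phi, ell):
    """4x4 transform of one constant-curvature segment.
    kappa: curvature [1/m], phi: bending-plane angle [rad], ell: arc length [m]."""
    T = np.eye(4)
    if abs(kappa) < 1e-9:              # straight-segment limit
        T[2, 3] = ell
        return T
    theta = kappa * ell                # total bending angle along the arc
    c, s = np.cos(phi), np.sin(phi)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    ct, st = np.cos(theta), np.sin(theta)
    Ry = np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])
    T[:3, :3] = Rz @ Ry @ Rz.T         # rotate into and out of the bending plane
    T[:3, 3] = Rz @ np.array([(1 - ct) / kappa, 0.0, st / kappa])
    return T

def forward_kinematics(segments):
    """Tip pose from a base-to-tip list of (kappa, phi, ell) segments."""
    T = np.eye(4)
    for kappa, phi, ell in segments:
        T = T @ cc_segment(kappa, phi, ell)
    return T
```

A quarter-circle segment of unit curvature, for instance, places the tip one radius sideways and one radius forward; the coupled multi-arm mechanics the project targets would replace this independent-segment assumption.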

Supervisors:

Dr. Christos Bergeles

Prof. Lyndon da Cruz

Light-field Imaging for Surface and Sub-Surface Visualisation

Classic cameras project the multi-dimensional information of the light flowing through a scene into a single 2D snapshot. Plenoptic, or light-field, cameras, on the other hand, capture a 4D slice of the plenoptic function, termed the 'light-field'. These cameras provide both spatial and angular information on the light flowing through a scene; multiple views of the scene are captured in a single photographic exposure.
The key characteristic of light-field cameras is the presence of a micro-lens array in front of the image sensor. This micro-lens array resamples the incoming light rather than simply allowing it to focus on the 2D pixel array. In this way, rays at different angles hit distinct parts of the image sensor, and both the spatial and angular information of the incoming light is preserved.
The angular information complements the spatial information, enabling recovery of accurate 3D information about the observed scene and providing the ability to synthesise novel views and digitally refocus an image after it has been taken. We are interested in developing medical imaging systems that take advantage of these unique characteristics and push further the capabilities of endoscopic imaging.
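Digital refocusing can be illustrated with the classic shift-and-sum algorithm: each sub-aperture view is shifted in proportion to its angular offset and the views are averaged, so that scene points at the chosen depth add up coherently. The toy implementation below (integer-pixel shifts, hypothetical function name; not the project's pipeline) assumes the light field is stored as an array of sub-aperture views:

```python
import numpy as np

def refocus(lightfield, shift):
    """Synthetic refocus of a 4D light field by shift-and-sum.
    lightfield: array (U, V, H, W) of sub-aperture views (angular u, v;
    spatial y, x). shift: pixels of displacement per unit angular offset,
    which selects the synthetic focal plane."""
    U, V, H, W = lightfield.shape
    u0, v0 = (U - 1) / 2, (V - 1) / 2      # central (reference) view
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(shift * (u - u0)))
            dx = int(round(shift * (v - v0)))
            # Align this view with the reference, then accumulate.
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

With shift=0 the views are simply averaged; sweeping shift moves the synthetic focal plane through the scene, which is the basis of post-capture refocusing.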
More specifically, we are interested in two key questions, which can be investigated during a PhD project both computationally and through integrated system development.
First, what is the optimal way to design endoscopes and ophthalmoscopes given constraints on the dimensions, numerical apertures, desired depth of field and field of view? To assist in this task, the student will further develop a simulation engine that can render scenes given a designed optical system. Computational contributions to this simulation engine, including optimisation of the micro-lens array given the specification, will improve the student's programming skills and lead to a tool that can be used for the design of light-field endoscopes and ophthalmoscopes. Implementation and evaluation of the designed systems on the benchtop can then be performed with support from existing funding sources.
Second, we are interested in examining how tissue information can be incorporated and retrieved through light-field imaging when near-infrared wavelengths are used. The goal would be to update the simulation engine described above to incorporate attenuation information and light propagation within tissue. We believe that, given the angular information of the captured light, subsurface information can be retrieved as a coupled optimisation problem over 3D structure and tissue properties. Benchtop experiments with well-characterised tissue phantoms and the systems designed in the previous step can quantify the capabilities of light-field imaging for sub-surface imaging.
The student will be supported by an optomechatronics expert in a mentoring role, a senior PhD student in computational light-field imaging, and the supervisor. That way, a multitude of complementary skills will be acquired, from system design to system implementation and evaluation.

 

Supervisors:

Dr. Christos Bergeles

Dr. Pearse Keane

Prof. Chris Dainty

Dynamic texture models for organ detection and skills analysis from ultrasound

The use of ultrasound (US) imaging is widespread as a method for medical diagnosis and intervention guidance as it enables real-time visualisation of internal body structures, such as internal organs, vessels, tendons or joints, as well as fetal imaging during pregnancy. However, one of the challenges of extracting relevant clinical information from US scans is the dependency of the image acquisition and clinical evaluation on a skilled operator.
The aim of this project is to use statistical analysis of the dynamics of ultrasound image texture to develop a generative model for categorising video sequences in 2D+t ultrasound images. In clinical practice, this could enable automated organ identification via efficient computational schemes, with the aim of making the process as independent as possible of pose, viewpoint and individual differences in organ anatomy. Ultimately, the overall goal is to implement a rapid and robust spatio-temporal organ identification method. Additionally, as US imaging is highly dependent on the skills of the operator, a secondary objective is to evaluate operator skill from the dynamics of the images.
To achieve real-time recognition of various organs in ultrasound (US) 2D+t videos, the project will rely on scale- and rotation-invariant approaches and will exploit the spatio-temporal structure of the video signal. Ultrasound video databases will serve as reference training datasets. The tools developed in the course of this project will rely on advanced signal processing methods such as wavelet and scattering transforms, statistical methods such as classification and codeword generation, machine learning methods such as deep scattering networks, and methods from dynamical systems such as state-space analysis. The idea is to learn and extract an invariant, lower-dimensional representation of the video using the dynamics of the local texture (the temporal structure of the spatial variations in the US signal). By comparing these local texture patterns with previously acquired US video samples of different organs, organ localisation will be achieved. Such a comparison could be direct, as in content-based video retrieval approaches, or performed indirectly through learned classifiers.
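As a minimal illustration of the texture-dynamics idea (a crude stand-in for the scattering-transform features the project will actually use; all names are hypothetical), the sketch below summarises a 2D+t clip by simple spatio-temporal gradient statistics and classifies it with a nearest-centroid rule:

```python
import numpy as np

def texture_descriptor(video):
    """Crude spatio-temporal descriptor for a 2D+t clip of shape (T, H, W):
    dispersion statistics of frame-to-frame differences (temporal dynamics)
    and of spatial gradients (texture)."""
    dt = np.diff(video, axis=0)   # temporal dynamics
    dy = np.diff(video, axis=1)   # vertical spatial texture
    dx = np.diff(video, axis=2)   # horizontal spatial texture
    return np.array([a.std() for a in (dt, dy, dx)] +
                    [np.abs(a).mean() for a in (dt, dy, dx)])

def nearest_centroid(train, labels, query):
    """Classify a clip by distance to the per-class mean descriptor."""
    feats = {l: np.mean([texture_descriptor(v)
                         for v, lab in zip(train, labels) if lab == l], axis=0)
             for l in set(labels)}
    q = texture_descriptor(query)
    return min(feats, key=lambda l: np.linalg.norm(feats[l] - q))
```

The real system would replace these hand-picked statistics with invariant scattering representations and learned classifiers, but the structure, a compact descriptor of local texture dynamics compared against labelled reference videos, is the same.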
From a scientific point of view, this research topic is highly timely and relevant, as it introduces recent, well-posed methods from the machine learning and statistical signal processing communities to the medical imaging community. To the best of our knowledge, while deep learning methods are now commonly used in medical imaging, this is not yet the case for the more recent and more interpretable scattering transform, including deep scattering networks.
From an application point of view, given the fact that 2D+t ultrasound is a very prevalent imaging modality, being accessible to the public using portable and affordable equipment, improvement in the autonomous analysis of ultrasound images will have a significant influence on public health monitoring. 

Supervisors:

Tom Vercauteren

Anna David