
UCL Centre for Medical Image Computing


Projects 2023

MedICSS features interactive sessions throughout the week, including group mini-projects.

FetReg: Placental Vessel Segmentation and Registration in Fetoscopy

Leaders: Sophia Bano, Francisco Vasconcelos

Fetoscopy Laser Photocoagulation (FLP) is a widely used procedure for the treatment of Twin-to-Twin Transfusion Syndrome (TTTS). In TTTS, the flow of blood between the two fetuses becomes uneven: the donor experiences slow growth, while the recipient is at risk of heart failure due to the excess blood it receives. During FLP, the abnormal vascular anastomoses are identified and laser-ablated to regulate the flow of blood. The procedure is particularly challenging due to the limited field-of-view, poor manoeuvrability of the fetoscope, poor visibility caused by fluid turbidity and variability in the light source, and the unusual position of the placenta. These challenges can lead to increased procedural time and incomplete ablation, resulting in persistent TTTS. Computer-assisted intervention can help overcome them by expanding the fetoscopic field-of-view and providing better visualization of the vessel map, which in turn can guide surgeons in localizing abnormal anastomoses.

This project aims to use supervised image segmentation models to segment the placental vessels and to perform direct image registration on the segmented vessel maps, generating a consistent mosaic of the intra-operative environment [1]. The project will utilise the publicly available Placental Vessel Dataset [2] and will also provide interested students with the basics for participating in the MICCAI 2021 EndoVis FetReg Challenge [3]. The FetReg challenge was featured as the challenge of the month in the June 2021 issue of Computer Vision News magazine [4].
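As a rough illustration of the mosaicking idea (an assumed sketch, not the challenge code), the snippet below registers consecutive vessel maps with an affine transform using OpenCV's ECC algorithm and chains the pairwise warps into a common reference frame; the vessel maps are assumed to come from any trained segmentation network.

    # Hedged sketch: align consecutive vessel maps and chain the warps for mosaicking.
    import cv2
    import numpy as np

    def register_pair(mask_prev, mask_next):
        """Estimate a 2x3 affine warp aligning mask_next onto mask_prev (ECC)."""
        warp = np.eye(2, 3, dtype=np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
        _, warp = cv2.findTransformECC(mask_prev.astype(np.float32),
                                       mask_next.astype(np.float32),
                                       warp, cv2.MOTION_AFFINE, criteria, None, 5)
        return warp

    def mosaic_transforms(vessel_maps):
        """Chain pairwise warps so that every frame maps into the first frame."""
        chained = [np.eye(3, dtype=np.float32)]
        for prev, nxt in zip(vessel_maps[:-1], vessel_maps[1:]):
            warp = np.vstack([register_pair(prev, nxt), [0.0, 0.0, 1.0]])
            chained.append(chained[-1] @ warp)
        return chained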

Technical Requirements:
– Basic understanding of image segmentation and registration techniques
– Hands-on experience with a deep learning framework (PyTorch/TensorFlow)

Useful links:
[1] Bano, S., Vasconcelos, F., Shepherd, L.M., Vander Poorten, E., Vercauteren, T., Ourselin, S., David, A.L., Deprest, J. and Stoyanov, D., 2020, October. Deep placental vessel segmentation for fetoscopic mosaicking. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 763-773). Springer, Cham. arxiv.org/pdf/2007.04349.pdf
[2] Placental Vessel Dataset: www.ucl.ac.uk/interventional-surgical-sciences/fetoscopy-placenta-data
[3] endovis.grand-challenge.org/
[4] www.rsipvision.com/ComputerVisionNews-2021June/22/

Tractography: mapping connections in the human brain

Leaders: Ellie Thompson, Anna Schroder

Description: Tractography [1] is currently the only tool available to probe the structural connectivity of the brain non-invasively in vivo. However, tractography is subject to extensive modelling errors, producing a large number of false-positive and false-negative connections in the resulting connectivity matrix [2]. Despite these drawbacks, tractography has revolutionised our understanding of the brain’s connectivity architecture over the past few decades and has widespread potential applications, from surgical planning [3] to modelling the spread of neurodegenerative diseases through the brain [4].

This project will introduce participants to the basic principles of tractography. Participants will have the opportunity to implement tractography algorithms from scratch, and compare these results to state-of-the-art tractography software tools in MRtrix3 [5]. 
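To give a flavour of what "from scratch" might look like, here is a minimal, hedged sketch of deterministic streamline tracking that simply follows a precomputed field of principal diffusion directions; real pipelines such as MRtrix3 use far more sophisticated direction estimation, interpolation and stopping criteria.

    # Assumed inputs: `directions` is an (X, Y, Z, 3) array of unit vectors, zero where masked out.
    import numpy as np

    def track(seed, directions, step=0.5, max_steps=2000, angle_thresh=60.0):
        """Follow the principal direction field from `seed` (voxel coordinates)."""
        cos_thresh = np.cos(np.radians(angle_thresh))
        streamline = [np.asarray(seed, dtype=float)]
        prev_dir = None
        for _ in range(max_steps):
            pos = streamline[-1]
            idx = tuple(np.round(pos).astype(int))          # nearest-neighbour lookup
            if not all(0 <= i < s for i, s in zip(idx, directions.shape[:3])):
                break                                       # left the image volume
            d = directions[idx]
            if np.linalg.norm(d) == 0:
                break                                       # masked-out voxel
            d = d / np.linalg.norm(d)
            if prev_dir is not None:
                if np.dot(d, prev_dir) < 0:
                    d = -d                                  # fibre directions are sign-ambiguous
                if np.dot(d, prev_dir) < cos_thresh:
                    break                                   # curvature stopping criterion
            streamline.append(pos + step * d)
            prev_dir = d
        return np.array(streamline)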

References:
[1] Jeurissen, B., et al. (2019). Diffusion MRI fiber tractography of the brain. NMR in Biomedicine, 32(4), p.e3785.
[2] Maier-Hein, K.H., et al. (2017). The challenge of mapping the human connectome based on diffusion tractography. Nature communications, 8(1), pp.1-13.
[3] Walid E.I., et al. (2017). White matter tractography for neurosurgical planning: A topography-based review of the current state of the art. NeuroImage: Clinical, 15, pp. 659-672.
[4] Vogel, J., et al. (2020). Spread of pathological tau proteins through communicating neurons in human Alzheimer’s disease. Nature Communications 11, 2612 
[5] Tournier, J.D., et al. (2019). MRtrix3: A fast, flexible and open software framework for medical image processing and visualisation. NeuroImage, 202, p.116137.

Prerequisites:
– Knowledge: MATLAB or Python
– Equipment: Own laptop, no GPU needed

Deep Learning for Medical Image Segmentation and Registration

Leader: Yipeng Hu (Associate Professor)

One of the most successful modern deep-learning applications in medical imaging is image segmentation. From neurological pathology in MR volumes to fetal anatomy in ultrasound videos, from cellular structures in microscopic images to multiple organs in whole-body CT scans, the list is ever expanding.

This tutorial project will guide students to build and train a state-of-the-art convolutional neural network from scratch, then validate it on real patient data.

The objectives of this project are to obtain
1) a basic understanding of machine learning approaches applied to medical image segmentation,
2) practical knowledge of essential components in building and testing deep learning algorithms, and
3) hands-on experience in coding a deep segmentation network for real-world clinical applications.
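As a taste of objective 3), the fragment below is a deliberately tiny, assumed example of a fully convolutional network and a single PyTorch training step for binary segmentation; the tutorial's actual architecture, data loading and validation will be considerably more complete.

    # Illustrative sketch only: a tiny per-pixel classifier trained on random stand-in data.
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),            # per-pixel logit
            )

        def forward(self, x):
            return self.net(x)

    model = TinySegNet()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    # One training step on a random batch standing in for (image, label) pairs.
    images = torch.randn(4, 1, 128, 128)
    labels = (torch.rand(4, 1, 128, 128) > 0.5).float()
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()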

Prerequisites: Python, GPU (via Colaboratory)

IQT: Image Quality Transfer

Leaders: Matteo Figini, Ahmed Abdelkarim

Image Quality Transfer (IQT) is a machine-learning-based framework for propagating information from state-of-the-art imaging systems into clinical environments where the same image quality cannot be achieved [1]. It has been successfully applied to increase the spatial resolution of diffusion MRI data [1, 2] and to enhance both contrast and resolution in images from low-field scanners [3]. In this project, we will explore the deep learning implementation of IQT and investigate the effect of different parameters and options, using data from publicly available MRI databases. We will also test the algorithms on clinical data to assess the enhancement of images from epilepsy patients.
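As a hedged illustration of the kind of model involved (assumed, not the IQT codebase), the sketch below defines a small patch-wise residual CNN in Keras that regresses a high-quality patch from a low-quality one; IQT itself typically works on 3D patches with more elaborate architectures and uncertainty modelling [2].

    # Assumed toy model: 2D patches, single channel, residual quality enhancement.
    import tensorflow as tf
    from tensorflow.keras import layers

    def build_enhancer(patch_size=32, channels=1):
        inp = layers.Input(shape=(patch_size, patch_size, channels))
        x = layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
        x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
        residual = layers.Conv2D(channels, 3, padding="same")(x)
        out = layers.Add()([inp, residual])      # predict the quality-enhancing residual
        return tf.keras.Model(inp, out)

    model = build_enhancer()
    model.compile(optimizer="adam", loss="mse")  # voxel-wise regression to the high-quality target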

Prerequisites: GPU, TensorFlow

References:
[1] D. Alexander et al., “Image quality transfer and applications in diffusion MRI”, Neuroimage 2017. 152:283-298
[2] R. Tanno, et al. “Uncertainty modelling in deep learning for safer neuroimage enhancement: Demonstration in diffusion MRI.” NeuroImage 2021. 225
[3] M. Figini et al., “Image Quality Transfer Enhances Contrast and Resolution of Low-Field Brain MRI in African Paediatric Epilepsy Patients”, ICLR 2020 Workshop on Artificial Intelligence for Affordable Healthcare.

 

Robust quantitative mapping of brain features from MRI using deep neural networks

Leader: Christopher Parker

Description: Quantitative mapping of brain features from MRI has the potential to improve healthcare for neurological disorders by facilitating a shift from subjective to information-driven assessments. However, deriving these maps with current techniques can be time-consuming, prone to error, and can give overly confident estimates, making the process laborious and potentially untrustworthy and preventing its adoption in the clinic. This project aims to address these issues by incorporating statistical principles into deep learning frameworks to provide improved speed, accuracy, and confidence estimates. As part of the project, the student will get hands-on experience with developing neural network architectures and an understanding of brain mapping from MRI for clinical application.

Prerequisites: Python

Useful links:
[1] Practice with this implementation: https://github.com/sebbarb/deep_ivim
[2] Publication describing the neural network architecture with example application:
Barbieri, S., Gurney‐Champion, O.J., Klaassen, R. and Thoeny, H.C., 2020. Deep learning how to fit an intravoxel incoherent motion model to diffusion‐weighted MRI. Magnetic resonance in medicine, 83(1), pp.312-321.
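In the spirit of the approach described in [2] (a hedged sketch under assumed b-values and parameter ranges, not the reference implementation in [1]), a small network can map a multi-b-value diffusion signal to IVIM parameters and be trained self-supervised by re-synthesising the signal from those parameters:

    # Assumed b-values and parameter scalings; no ground-truth parameter maps are needed.
    import torch
    import torch.nn as nn

    b_values = torch.tensor([0., 10., 20., 50., 100., 200., 400., 800.])

    class IVIMNet(nn.Module):
        def __init__(self, n_b):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(n_b, 64), nn.ReLU(),
                                     nn.Linear(64, 64), nn.ReLU(),
                                     nn.Linear(64, 3))

        def forward(self, signal):
            p = torch.sigmoid(self.mlp(signal))          # keep parameters in plausible ranges
            f = p[:, 0:1] * 0.5                          # perfusion fraction
            D = p[:, 1:2] * 3e-3                         # diffusion coefficient (mm^2/s)
            Dstar = p[:, 2:3] * 1e-1                     # pseudo-diffusion coefficient
            model_signal = f * torch.exp(-b_values * Dstar) + (1 - f) * torch.exp(-b_values * D)
            return model_signal, (f, D, Dstar)

    net = IVIMNet(len(b_values))
    optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)
    signals = torch.rand(32, len(b_values))              # stand-in for normalised voxel signals
    optimiser.zero_grad()
    pred, _ = net(signals)
    loss = nn.functional.mse_loss(pred, signals)         # self-supervised reconstruction loss
    loss.backward()
    optimiser.step()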

 

Teleoperated Control and UI for an Ophthalmic Surgical Robot for Posterior Segment Surgery

Leaders: Ning Wang, Aleksandra Goch, Agostini Stilli

Description: The challenges associated with fundus retinal surgery are numerous, including but not limited to the restricted workspace of the tools, the inherent natural jitter of the clinician's arm, the lack of depth information between the surgical instrument and the fundus tissue under the microscope view, and the lack of haptic feedback associated with the small contact force (10 mN) between the surgical instrument and the fundus tissue. Despite current research efforts, the accuracy of fundus surgery and its success rate, particularly in Internal Limiting Membrane (ILM) and Epiretinal Membrane (ERM) peeling surgery, have yet to reach the desired level.
This project aims to investigate solutions to these challenges by developing and optimising both the controller and the user-interface software of a teleoperated robot for posterior segment surgery, developed by the research team at UCL WEISS and first presented in [1]. The focus will be on teleoperation, visual feedback and force feedback. The robot currently relies on a commercially available user interface (Sigma 7, Force Dimension [2]), but custom user interfaces could also be explored. This project will guide participants to understand teleoperated control and to develop their own control interface for eye-surgery robots. Participants will have the opportunity to operate the robot on an eye phantom to experience the actual operation and apply their interface design.
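As a purely illustrative sketch (not the project's controller), two ingredients commonly used in microsurgical teleoperation, motion scaling and low-pass filtering of the operator's hand motion to suppress tremor, can be combined as follows:

    # Toy example; scale and smoothing factors are assumed values.
    import numpy as np

    class ScaledFilteredTeleop:
        def __init__(self, scale=0.1, alpha=0.2):
            self.scale = scale          # e.g. 10:1 master-to-tool motion scaling
            self.alpha = alpha          # exponential smoothing factor (0..1)
            self._filtered = None

        def update(self, master_delta):
            """Map one master-device displacement (3-vector) to a tool command."""
            master_delta = np.asarray(master_delta, dtype=float)
            if self._filtered is None:
                self._filtered = master_delta
            else:
                self._filtered = self.alpha * master_delta + (1 - self.alpha) * self._filtered
            return self.scale * self._filtered   # scaled-down, smoothed tool displacement

    teleop = ScaledFilteredTeleop()
    tool_cmd = teleop.update([0.002, -0.001, 0.0005])   # a small sampled hand motion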

Prerequisites: Python/C++

References:
[1] Wang, N., Zhang, X., Li, M., et al. (2022). A 5-DOFs Robot for Posterior Segment Eye Microsurgery. IEEE Robotics and Automation Letters, 7(4), 10128-10135.
[2] https://www.forcedimension.com/software/sdk

 

Machine Learning and Deep Learning for COVID-19 Detection on Chest X-ray (CXR) Images

Leaders: Shahab Aslani, Gabrielle Baxter, Mehran Azimbagirad

Description: Medical image analysis using artificial intelligence (AI) techniques is transforming our understanding of health and disease. Recently, the application of machine learning (ML) and deep learning (DL) techniques in medical imaging to identify diseases automatically has increased rapidly. This project explores the applications of ML and DL models in COVID-19 detection using CXR images. In particular, participants will learn how to preprocess CXR images (including normalization and lung and diaphragm segmentation) and how to use ML/DL models to identify COVID-19 in the images.
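As one hedged example of the DL side (an assumption, not the project's pipeline), a classifier can be obtained by fine-tuning an ImageNet-pretrained network on preprocessed CXR tensors:

    # Sketch: fine-tune a ResNet-18 for binary COVID-19 / non-COVID classification
    # (torchvision >= 0.13 weights API; random tensors stand in for real CXR batches).
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 2)     # two classes: COVID-19 vs. other

    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    images = torch.randn(8, 3, 224, 224)              # normalised, 3-channel-replicated CXRs
    labels = torch.randint(0, 2, (8,))
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()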

Prerequisites: Python, PyTorch, GPU

 

Surgical Gesture Recognition for Suturing in Robot-Assisted Surgery

Leaders: Dimitrios Anastasiou, Chloe He

Description: Automatically recognising surgical gestures from surgical data is an important building block for automated activity recognition and analytics, technical skill assessment, intra-operative assistance, and, eventually, robotic automation. This tutorial project focuses on time-series analysis of visual features captured during robot-assisted procedures. The dataset you will be using was captured with the da Vinci Surgical System (dVSS) from eight surgeons with different skill levels performing suturing on a bench-top model. The project will give participants hands-on experience in developing, deploying, and evaluating state-of-the-art temporal machine learning architectures for video and time-series analysis.
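As a simple, assumed illustration of such a temporal model (the feature dimension, number of gesture classes and data below are placeholders), a recurrent network can assign a gesture label to every frame of a feature sequence:

    # Toy per-frame gesture classifier over sequences of visual features.
    import torch
    import torch.nn as nn

    class GestureLSTM(nn.Module):
        def __init__(self, feat_dim=128, hidden=64, n_gestures=10):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_gestures)

        def forward(self, x):                  # x: (batch, time, feat_dim)
            h, _ = self.lstm(x)
            return self.head(h)                # per-frame gesture logits

    model = GestureLSTM()
    features = torch.randn(2, 300, 128)        # two clips, 300 frames each
    labels = torch.randint(0, 10, (2, 300))    # one gesture label per frame
    logits = model(features)
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10), labels.reshape(-1))
    loss.backward()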

Further Details: 
Hardware: Own laptop
Software: Intermediate Python and PyTorch skills

Robot-assisted spectral imaging of perfused organs

Leaders: Morenike Magbagbeola, Alexander Saikia

Description: Traditional minimally invasive surgery (MIS) and organ viability assessment are performed using red-green-blue (RGB) cameras and subjective validation from surgeons, respectively. Hyperspectral imaging (HSI) is a relatively novel imaging modality that extracts both spatial (morphological) and spectral (biochemical) information and therefore offers a new approach to both tasks. To test and develop systems for MIS using HSI, we have developed a custom-built perfusion machine from low-cost components and successfully validated its use for perfusing porcine livers and other organs.

This tutorial project will give students hands-on experience in using cameras mounted on a robotic arm to gather HSI data for analysis and in developing novel algorithms for HSI data processing. The objectives of this project are to obtain: 1) a basic understanding of using and programming robotic arms; 2) experience in scientific data acquisition; and 3) a foundation in HSI analysis.
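As a flavour of the HSI processing involved (a hedged sketch of standard preprocessing, not the project's code), a raw hyperspectral cube is typically calibrated against white and dark reference acquisitions before spectra are extracted:

    # Assumed shapes: cubes are (H, W, bands); toy data stands in for camera output.
    import numpy as np

    def calibrate(raw, white, dark, eps=1e-6):
        """Convert a raw HSI cube to approximate reflectance."""
        return (raw - dark) / np.maximum(white - dark, eps)

    def roi_spectrum(reflectance, mask):
        """Mean spectrum over a boolean region-of-interest mask of shape (H, W)."""
        return reflectance[mask].mean(axis=0)

    H, W, B = 64, 64, 100
    raw = np.random.rand(H, W, B)
    white = np.full((H, W, B), 0.9)
    dark = np.full((H, W, B), 0.05)
    refl = calibrate(raw, white, dark)
    mask = np.zeros((H, W), dtype=bool)
    mask[20:40, 20:40] = True
    print(roi_spectrum(refl, mask).shape)      # (100,): one value per spectral band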

Prerequisites: Python; ROS or PyTorch recommended but not required