UCL EPSRC Centre for Doctoral Training in Intelligent Integrated Imaging in Healthcare


Enabling technology portfolio

Our enabling technology portfolio provides underpinning technologies in AI and machine learning, data science, robotics and sensing, and human-computer interaction.

AI and Machine Learning 

AI and machine learning have the potential to enhance every step of the imaging and image analysis pipeline: designing new acquisition sequences and protocols, learning the transformations and priors that underpin image reconstruction, automating image labelling tasks, and inferring patient outcome and treatment choice. Moreover, medical imaging presents new problems that demand new machine learning algorithms; the world-leading expertise in basic machine learning, computer vision, and medical imaging at UCL provides a unique opportunity to innovate at this interface. For example, our work on image quality transfer estimates high-quality images from sparse or low-quality data, delivering major enhancements in downstream processing and enabling future low-power portable imaging systems. Our work on multiview learning identifies new relationships among different data types, such as imaging and cognitive profiling. Medical imaging places new demands on supervised learning, e.g. to quantify uncertainty on outputs and to operate on large images within tight memory budgets; on feature learning, to cope with sparse labels; on unsupervised learning, to account for temporal changes in subgroups; and on reinforcement learning, to handle the constraints and complexities of, for example, pulse sequence design and inherent biases in medical data sets. The solutions we are starting to pioneer in these areas offer major and long-lasting advances and impact, and we are ideally placed to engineer general solutions to such challenges and to demonstrate and exploit them across a wide range of application areas.
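At its core, image quality transfer is a learned mapping from low-quality image patches to their high-quality counterparts. The following is a minimal toy sketch of that idea, not the actual published method: it uses synthetic 1D "images", a simple blur as the degradation, and a linear least-squares patch regressor (all hypothetical choices for illustration).

```python
# Toy sketch of image quality transfer via patch regression.
# Hypothetical setup: 1D signals stand in for images; a linear
# regressor maps degraded patches to clean centre samples.
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(signal, width):
    """Sliding patches of `width` samples from a 1D signal."""
    return np.array([signal[i:i + width] for i in range(len(signal) - width + 1)])

# Synthetic training pair: a "high quality" signal and its blurred copy.
hi = np.cumsum(rng.standard_normal(200))           # smooth-ish clean signal
lo = np.convolve(hi, np.ones(5) / 5, mode="same")  # degraded acquisition

width = 7
X = extract_patches(lo, width)              # low-quality patches (inputs)
y = hi[width // 2 : len(hi) - width // 2]   # matching high-quality centre samples

# Learn a linear patch-to-sample mapping by least squares (with a bias term).
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# Apply to a new degraded signal drawn from the same process: the
# prediction should track the clean signal better than the input does.
hi2 = np.cumsum(rng.standard_normal(200))
lo2 = np.convolve(hi2, np.ones(5) / 5, mode="same")
X2 = extract_patches(lo2, width)
pred = np.c_[X2, np.ones(len(X2))] @ w
target = hi2[width // 2 : len(hi2) - width // 2]

err_raw = np.mean((lo2[width // 2 : len(lo2) - width // 2] - target) ** 2)
err_iqt = np.mean((pred - target) ** 2)
print(err_iqt < err_raw)
```

Real image quality transfer replaces the linear regressor with far richer models (e.g. random forests or deep networks) and operates on 3D image patches, but the train-on-paired-patches, predict-per-patch structure is the same.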

AI and machine learning lead: Prof Janaina Mourao-Miranda. Prof Mourao-Miranda is a leading expert in machine learning and its application to neuroimaging, holder of a Wellcome senior research fellowship with >50 publications (>5000 citations and H-index 31).

Data Science and Health Informatics

Delivering on the aspirations of precision medicine, the learning healthcare system, and interventional studies embedded in routine care is anchored in the capability to access and process routinely collected data within electronic health record (EHR) datasets at a regional, national, and international level. To provide a step change in this delivery and create truly novel and actionable analytics, multimodal computational approaches that combine medical imaging with EHR and other emerging sources of data (e.g. -omics) are essential. Our work on computational modelling and machine learning, including the emerging field of disease progression modelling, provides natural mechanisms to integrate different kinds of information, for example from MRI, to obtain fine-grained longitudinal patterns of disease and to characterise the heterogeneity of trajectories over the population. Such multimodal analytical approaches will enable finer precision in patient staging and stratification, prediction of progression rates, and earlier and better identification of at-risk individuals. Linked medical imaging data and large-scale EHR collections have immense potential, with applications ranging from designing and implementing disease risk prediction models in clinic to stratifying patients and guiding recruitment for trials. Effective implementation demands propagation of uncertainty and ambiguity throughout the processing pipeline. There is a substantial and unmet need to train individuals to work with complex, diverse clinical data and to develop novel computational approaches for linking them with imaging datasets.
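One core ingredient of disease progression modelling is staging: placing each patient at an (unknown) point along a shared biomarker trajectory using only their sparse longitudinal measurements. The following toy sketch illustrates the idea under strong simplifying assumptions (a single biomarker, a fixed sigmoid population trajectory, and a grid search for each patient's disease-time offset); real models jointly estimate the trajectory, multiple biomarkers, and their uncertainty.

```python
# Toy sketch of disease-progression staging (hypothetical setup):
# fit each patient's disease-time offset against a fixed sigmoid
# population trajectory by least squares over a grid.
import numpy as np

def trajectory(t):
    """Assumed population-level biomarker curve as a function of disease time."""
    return 1.0 / (1.0 + np.exp(-t))

def stage(visit_times, biomarker_values, grid=np.linspace(-10, 10, 2001)):
    """Estimate a patient's disease-time offset: the shift that makes the
    shared trajectory best explain their longitudinal measurements."""
    errs = [np.sum((trajectory(visit_times + t0) - biomarker_values) ** 2)
            for t0 in grid]
    return grid[int(np.argmin(errs))]

# Two synthetic patients observed at the same clinic visits but sitting
# at different (unknown) points along the shared trajectory.
visits = np.array([0.0, 1.0, 2.0])
early = trajectory(visits - 4.0)   # truly 4 "years" before the midpoint
late = trajectory(visits + 3.0)    # truly 3 "years" past the midpoint

print(round(stage(visits, early), 1))  # recovers -4.0
print(round(stage(visits, late), 1))   # recovers 3.0
```

Even this crude version shows why longitudinal integration matters: a single cross-sectional biomarker value cannot separate a fast progressor seen early from a slow progressor seen late, whereas a sequence of visits pins down the position on the curve.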

Data science and health informatics lead: Dr Spiros Denaxas. Dr Denaxas is a leading expert in novel methods for EHR phenotyping (>2100 citations and H-index 30).

Robotics and Sensing 

Robotic actuation can solve many of the challenges in achieving high-quality imaging that is practically usable and clinically translatable, and can also target, manipulate, and augment tissue with precise actions while preserving healthy structures. Images guide surgical interventions, even more so in the era of minimally invasive surgery (MIS), where the common driver of modern interventions is to reduce trauma by accessing the internal anatomy either through natural orifices or through very small incisions. While MIS modalities such as laparoscopic US are already available, they are underutilised predominantly because of poor workflow, cumbersome or non-existent multimodal fusion, and a lack of 3D capabilities for interpreting the images with respect to the endoscopic view. All of these can be addressed by developing the computational theory and technology to interface appropriately with the articulation provided by robotic instrumentation, linking proprioception to the capabilities of the specific imaging sensor. Example opportunities include making robotic instruments that adapt to real-time images and react to dynamic changes within the anatomy or to target structures identified in imaging, and developing both fundamental and computational imaging modalities that treat fast robotic actuation as an available capability for enhancing the raw sensor signal, whether through motion compensation or through smart acquisition of multiple view angles to enhance resolution.
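The signal-enhancement idea at the end of the paragraph can be illustrated with a deliberately simplified sketch: because the robot's displacement at each acquisition is precisely encoded, frames taken from multiple known offsets can be shifted back into alignment and fused, suppressing sensor noise relative to any single frame. The setup below (a 1D scene, circular shifts, additive Gaussian noise) is a hypothetical stand-in for real motion-compensated acquisition.

```python
# Toy sketch of motion-compensated multi-view fusion (assumed setup):
# a robotically actuated sensor samples a 1D scene from known offsets;
# undoing the known displacements and averaging suppresses noise.
import numpy as np

rng = np.random.default_rng(1)
scene = np.sin(np.linspace(0, 4 * np.pi, 256))  # ground-truth signal

def acquire(offset, noise=0.3):
    """One noisy frame captured with the sensor displaced by `offset` samples."""
    return np.roll(scene, offset) + noise * rng.standard_normal(scene.size)

offsets = [0, 3, -2, 5, -4]           # commanded (hence known) robot positions
frames = [acquire(o) for o in offsets]

# Motion compensation: undo each known displacement, then average.
aligned = [np.roll(f, -o) for f, o in zip(frames, offsets)]
fused = np.mean(aligned, axis=0)

single_err = np.mean((frames[0] - scene) ** 2)
fused_err = np.mean((fused - scene) ** 2)
print(fused_err < single_err)   # fusing N frames cuts noise variance ~N-fold
```

In practice the displacements come from robot kinematics rather than being exact sample shifts, and registration must handle tissue deformation, but the principle that precisely known actuation converts redundant views into signal-to-noise or resolution gains is the same.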

Robotics and sensing lead: Prof Danail Stoyanov. Prof Stoyanov is a leading expert in surgical vision and robotic surgery, holder of an EPSRC fellowship, with >50 publications (>2200 citations and H-index 25).

Human-Computer Interaction

With increasing computational power, novel algorithms that deliver new insights over increasingly large datasets, and greater democratisation of health management (with patients and families expected to make more informed decisions about their care), there are many exciting opportunities for transformative uses of medical imaging technologies. These include serving as components of integrated, patient-centred diagnostic and therapeutic platforms as well as supporting research (e.g. drug development). However, to realise these benefits, imaging technologies need to be available to people with varying levels of specialist training in image interpretation, and their use has to fit seamlessly into workflow. Making such imaging and integrated systems usable and useful requires close collaboration between technology specialists, human factors and HCI researchers, and prospective users of those novel technologies, to ensure a pipeline from advances in foundational technology to usable and useful practical applications in healthcare. These challenges can be addressed through HCI research at different levels of abstraction: investigating the perception and cognition behind image interpretation; identifying user requirements for novel imaging interfaces to support work, and prototyping and testing those interfaces; and studying the broader work system and how it needs to adapt to exploit the possibilities offered by new imaging technologies. User requirements revealed through interaction design may in turn influence the imaging technology itself.

Human-computer interaction lead: Prof Ann Blandford. Prof Blandford is a world-renowned expert on HCI for digital health, with a broad portfolio of joint projects (co-led with clinicians) and >450 publications (>7400 citations and H-index 46).
