
Prof. Razvan Marinescu & Prof. Leilani H. Gilpin - CMIC/WEISS + AI Centre Joint Seminar

07 September 2022, 1:00 pm–3:00 pm


Event Information

Open to: All
Availability: Yes
Organiser: UCL Centre for Medical Image Computing and Wellcome/EPSRC Centre for Interventional and Surgical Sciences

Speaker: Prof. Razvan Marinescu - Assistant Professor, Department of Computer Science and Engineering, UC Santa Cruz


Title:  Building Bayesian priors over the manifold of medical images


Abstract

Powerful image priors that capture the image manifold are important for many applications, such as denoising and image editing, as well as for medical image reconstruction and disease progression modelling. We present BRGM, a method for learning Bayesian priors through deep generative models, and show that its performance on super-resolution and inpainting is on par with state-of-the-art methods. By leveraging pre-trained generative models such as StyleGAN, our model does not require any additional task-specific training. In addition, we can estimate a posterior distribution over the space of potential solutions, thus accounting for the ill-posed nature of image reconstruction problems. To make this technology applicable to the medical domain, we further show an extension of StyleGAN to 3D images using native 3D convolutions, achieved through significant GPU memory optimizations. We will conclude with a discussion of the future of generative modelling in the medical domain.
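As a rough illustration of the general technique described in the abstract (this is not the authors' BRGM code; the toy generator, sizes, and hyperparameters below are invented for illustration), the following Python sketch reconstructs a low-resolution observation by optimizing a pretrained generator's latents under a standard-normal prior:

import torch
import torch.nn.functional as F

class ToyGenerator(torch.nn.Module):
    """Stand-in for a pretrained generator such as StyleGAN (placeholder)."""
    def __init__(self, latent_dim=64, image_size=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, image_size * image_size),
        )
        self.image_size = image_size

    def forward(self, w):
        x = self.net(w)
        return x.view(-1, 1, self.image_size, self.image_size)

def degrade(x, factor=4):
    """Known forward operator: here, downsampling for super-resolution."""
    return F.avg_pool2d(x, factor)

def map_reconstruct(G, y, latent_dim=64, steps=500, lr=0.05, sigma=0.1):
    """MAP estimate: argmin_w ||degrade(G(w)) - y||^2 / (2 sigma^2) + ||w||^2 / 2."""
    w = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = G(w)
        data_term = ((degrade(x_hat) - y) ** 2).sum() / (2 * sigma ** 2)
        prior_term = 0.5 * (w ** 2).sum()  # standard-normal prior on latents
        (data_term + prior_term).backward()
        opt.step()
    return G(w).detach()

G = ToyGenerator()
x_true = G(torch.randn(1, 64))   # pretend ground-truth image
y = degrade(x_true)              # low-resolution observation
x_rec = map_reconstruct(G, y)
print(x_rec.shape)               # torch.Size([1, 1, 32, 32])

Note that this sketch computes only a single MAP point estimate; the abstract describes additionally estimating a posterior distribution over potential solutions.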

Bio:

Razvan Marinescu is an Assistant Professor in the Department of Computer Science and Engineering at UC Santa Cruz. His research is in Machine Learning for Healthcare, with a particular focus on neuroimaging analysis. He is also a co-founder of GiwoTech Inc, a research-focused start-up working on molecular dynamics simulations for viruses. He received his PhD from UCL, advised by Daniel Alexander, Sebastian Crutch and Neil Oxtoby, and was previously a postdoc at MIT working with Polina Golland.
 


Speaker: Prof. Leilani H. Gilpin - Assistant Professor, Department of Computer Science and Engineering, UC Santa Cruz


Title:  Accountability layers: Stress-testing explainable AI for safety-critical systems


Abstract

Autonomous systems are prone to errors and failures, often without knowing why. In critical domains like driving, these autonomous counterparts must be able to recount their actions for safety, liability, and trust. An explanation, a model-dependent reason or justification for the decision of the autonomous agent being assessed, is a key component for post-mortem failure analysis, but also for pre-deployment verification. I will show a monitoring framework that uses a model and commonsense knowledge to detect and explain unreasonable vehicle scenarios, even if it has not seen that error before.
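As a rough sketch of the kind of commonsense monitoring described above (the rules, names, and thresholds here are hypothetical, invented purely for illustration), a monitor might check vehicle states against commonsense constraints and emit human-readable explanations for any violations:

from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_mps: float             # forward speed, metres per second
    brake_applied: bool
    obstacle_distance_m: float   # distance to nearest detected obstacle

def check_reasonableness(state: VehicleState) -> list[str]:
    """Return human-readable explanations for any commonsense violations."""
    violations = []
    # Commonsense rule: a vehicle close to an obstacle should be braking.
    if state.obstacle_distance_m < 5.0 and not state.brake_applied:
        violations.append(
            f"Obstacle at {state.obstacle_distance_m:.1f} m but brakes not applied."
        )
    # Commonsense rule: speed should never be negative.
    if state.speed_mps < 0:
        violations.append(f"Implausible negative speed: {state.speed_mps} m/s.")
    return violations

report = check_reasonableness(
    VehicleState(speed_mps=12.0, brake_applied=False, obstacle_distance_m=3.2)
)
for line in report:
    print(line)   # explanations usable for post-mortem failure analysis

Because the rules encode general knowledge about reasonable driving rather than specific failure signatures, a monitor of this kind can flag error scenarios it has never seen before.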

In the second part of the talk, I will motivate explanations as a testing framework for autonomous systems. While it is important to develop realistic tests in simulation, simulation is not always representative of the corner cases in the real world. I will show how to use explanations in a feedback loop: the explanation either confirms that the machine has done the right thing, or it exposes a stressor to be modified and tested moving forward. I will conclude by discussing new challenges at the intersection of XAI and autonomy, towards autonomous systems that are explainable by design.
 



Chair: Danny Alexander