
UCL Centre for Medical Image Computing


Joao Ramalhinho & Simone Foti - CMIC/WEISS Joint Seminar Series

09 February 2022, 1:00 pm–2:00 pm

Joao Ramalhinho & Simone Foti - talks as part of the CMIC/WEISS Joint Seminar Series

Event Information

Open to: All
Availability: Yes
Organiser: UCL Centre for Medical Image Computing and Wellcome/EPSRC Centre for Interventional and Surgical Sciences

Speaker: Joao Ramalhinho

Title: Deep Hashing for Global Registration of 2D Untracked Laparoscopic Ultrasound to CT

Abstract

The registration of Laparoscopic Ultrasound (LUS) to CT can enhance the safety of laparoscopic liver surgery by providing the surgeon with awareness of the relative positioning between critical vessels and a tumour. In an effort to provide a translatable solution for this poorly constrained problem, we have been developing a vessel-based Content-Based Image Retrieval (CBIR) framework for obtaining a global coarse registration without using tracking information. Instead of optimising an alignment, we pre-simulate a set of possible registration solutions in CT, encode each into a feature representation, and then compare these with the feature representation of an input LUS image. The closest matches are taken as possible registration solutions, and a Bayesian model is used to estimate the most likely sequence of simulated CT poses to represent the LUS acquisition. In this talk, I will present our latest adaptation of this pipeline, in which a Deep Hashing (DH) model is used to extract feature representations from segmented blood vessel images, both in LUS and CT. Our initial results show that, compared to hand-crafted feature vectors, the representations learnt by DH lead to more robust registration results. This talk will provide insights into how DH models can be applied to a registration problem.
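The retrieval step described above can be sketched in a few lines. This is a minimal illustration, not the speaker's actual model: it assumes each pre-simulated CT pose has already been encoded into a binary hash code, and shows how a query LUS code would be matched against the database by Hamming distance. The code sizes, the random database, and the `hamming_retrieve` helper are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database of pre-simulated CT registration candidates:
# each candidate pose has been encoded into an n_bits binary hash code.
n_candidates, n_bits = 1000, 64
ct_codes = rng.integers(0, 2, size=(n_candidates, n_bits), dtype=np.uint8)

def hamming_retrieve(query_code, database_codes, top_k=5):
    """Return indices of the top_k database codes closest in Hamming distance."""
    dists = np.count_nonzero(database_codes != query_code, axis=1)
    return np.argsort(dists)[:top_k]

# A query code standing in for one LUS image: here, a copy of
# candidate 42 with its first 3 bits flipped.
query = ct_codes[42].copy()
query[:3] ^= 1

matches = hamming_retrieve(query, ct_codes)
print(matches[0])  # candidate 42 is the nearest match
```

The appeal of binary codes in a CBIR setting is that comparing one query against thousands of pre-simulated poses reduces to cheap bitwise operations, which is what makes a global, tracking-free search tractable.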



Speaker: Simone Foti

Title: 3D Shape Variational Autoencoder Latent Disentanglement via Mini-Batch Feature Swapping

Abstract

Learning a disentangled, interpretable, and structured latent representation in 3D generative models is still an open problem. Experimental results on 3D meshes show that state-of-the-art methods for latent disentanglement are not able to disentangle identity features of faces and bodies. In this talk, I will present our recent self-supervised approach to training a 3D shape variational autoencoder (VAE) that encourages a disentangled latent representation of identity features. Curating the mini-batch generation by swapping arbitrary shape features across different shapes allows us to define a loss function that leverages known differences and similarities in the latent representations. Our proposed method properly decouples the generation of such features while maintaining good representation and reconstruction capabilities.
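The mini-batch curation idea can be sketched as follows. This is a toy illustration under stated assumptions, not the authors' implementation: shapes are stand-in vertex arrays, the "encoder" is a hypothetical linear map, and the region/latent-chunk assignment is invented for the example. The point is only the construction: after swapping one feature region from a reference shape into every shape in the batch, we *know* those shapes share that feature, so a loss can demand that the corresponding latent chunk agree across the batch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mini-batch of 3D shapes: 4 shapes, 10 vertices, 3 coordinates each.
# Vertices 0-3 stand in for one feature region (e.g. the nose of a face).
batch = rng.normal(size=(4, 10, 3))
region = slice(0, 4)

# Mini-batch feature swapping: every shape receives the feature region of
# the first shape, so all swapped shapes are known to share that feature
# while differing everywhere else.
swapped = batch.copy()
swapped[:, region] = batch[0, region]

# Hypothetical linear "encoder" mapping each flattened shape to a latent vector.
def encode(shapes, w):
    return shapes.reshape(len(shapes), -1) @ w

w = rng.normal(size=(30, 8))
z = encode(swapped, w)
z_region = z[:, :4]  # latent chunk nominally assigned to the swapped region

# Known-similarity loss term: latent chunks for a shared feature should
# agree across the batch (their variance around the batch mean shrinks to 0
# as the encoder learns the intended disentanglement).
consistency_loss = np.mean((z_region - z_region.mean(axis=0)) ** 2)
```

A symmetric term can penalise latent chunks for *unswapped* regions being too similar across shapes that are known to differ there, which is how known differences enter the loss alongside known similarities.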



Chair: Matt Clarkson