Yukun Zhou & Harry Lin - CMIC/WEISS Joint Seminar Series
13 October 2021, 1:00 pm–2:00 pm
Event Information
Open to
- All
Availability
- Yes
Organiser
- UCL Centre for Medical Image Computing and Wellcome/EPSRC Centre for Interventional and Surgical Sciences
Speaker: Yukun Zhou
Title: Learning to Address Intra-segment Misclassification in Retinal Imaging
Abstract: Accurate multi-class segmentation is a long-standing challenge in medical imaging, especially in scenarios where classes share strong similarity. Segmenting retinal blood vessels in retinal photographs is one such scenario, in which arteries and veins need to be identified and differentiated from each other and from the background. Intra-segment misclassification, i.e. veins classified as arteries or vice versa, frequently occurs where arteries and veins intersect, whereas error rates in binary retinal vessel segmentation are much lower. We thus propose a new approach that decomposes multi-class segmentation into multiple binary segmentation tasks, followed by a binary-to-multi-class fusion network. The network merges representations of the artery, vein, and multi-class feature maps, each of which is supervised by expert vessel annotations in adversarial training. This strategy helps alleviate intra-segment misclassification. We are now working on deploying the algorithm in real-world clinical practice.
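To illustrate the decomposition idea, here is a minimal PyTorch sketch of a binary-to-multi-class fusion: three binary branches (artery, vein, whole vessel tree) feed a fusion head that produces the final multi-class map. This is an illustrative assumption, not the speaker's actual architecture; module names, channel sizes, and the four-class output are made up, and the adversarial supervision mentioned in the abstract is omitted.

```python
# Illustrative sketch only: binary branches fused into a multi-class head.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU; stands in for a full segmentation branch.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class BinaryToMultiFusion(nn.Module):
    def __init__(self, feat=16):
        super().__init__()
        # One binary branch per sub-task: artery, vein, whole vessel tree.
        self.artery = conv_block(3, feat)
        self.vein = conv_block(3, feat)
        self.vessel = conv_block(3, feat)
        # Fusion head merges the three representations into a 4-class map
        # (e.g. background, artery, vein, uncertain/crossing).
        self.fuse = nn.Conv2d(3 * feat, 4, 1)

    def forward(self, x):
        a, v, s = self.artery(x), self.vein(x), self.vessel(x)
        return self.fuse(torch.cat([a, v, s], dim=1))

# Usage with a dummy fundus-image batch.
logits = BinaryToMultiFusion()(torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 4, 64, 64])
```

The key design point is that each binary branch can specialise on an easier sub-problem before the fusion head resolves the artery/vein distinction at crossings.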
Speaker: Harry Lin
Title: Generalised Super Resolution for Quantitative MRI Using Self-Supervised Mixture of Experts
Abstract: Multi-modal and multi-contrast imaging datasets have diverse voxel-wise intensities. For example, quantitative MRI acquisition protocols are designed specifically to yield multiple images with widely varying contrast that inform models relating MR signals to tissue characteristics. The large variance across images in such data prevents the use of standard normalisation techniques, making super resolution highly challenging. We propose a novel self-supervised mixture-of-experts (SS-MoE) paradigm for deep neural networks, and hence present a method enabling improved super resolution of data where image intensities are diverse and have large variance. Unlike the conventional MoE, which automatically aggregates expert results for each input, we explicitly assign an input to the corresponding expert based on predicted pseudo error labels in a self-supervised fashion. A new gater module is trained to discriminate the error levels of inputs, estimated by Multiscale Quantile Segmentation. We show that our new paradigm reduces error and improves robustness when super resolving combined diffusion-relaxometry MRI data from the Super MUDI dataset. Our approach is suitable for a wide range of quantitative MRI techniques, and multi-contrast or multi-modal imaging techniques in general. It could be applied to super resolve images with inadequate resolution, or to reduce the scanning time needed to acquire images of the required resolution.
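The contrast with a conventional MoE can be made concrete with a short PyTorch sketch of hard expert assignment: a gater classifies each input into an error level and the input is routed to exactly one expert, rather than softly averaging all expert outputs. This is a toy illustration under assumed module names and sizes, not the presented SS-MoE; in particular, the self-supervised training of the gater on pseudo error labels from Multiscale Quantile Segmentation is not reproduced here.

```python
# Illustrative sketch only: hard routing of each input to one expert.
import torch
import torch.nn as nn

class HardGatedMoE(nn.Module):
    def __init__(self, dim=32, n_experts=3):
        super().__init__()
        # Gater: predicts which error level (expert index) an input belongs to.
        # In the described paradigm it would be trained on pseudo error labels.
        self.gater = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, n_experts)
        )
        # One expert per error level (here a toy MLP standing in for a
        # super-resolution network).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):
        # Hard assignment: argmax over gater logits picks exactly one expert,
        # unlike a conventional MoE's weighted sum over all experts.
        idx = self.gater(x).argmax(dim=1)
        out = torch.empty_like(x)
        for k, expert in enumerate(self.experts):
            mask = idx == k
            if mask.any():
                out[mask] = expert(x[mask])
        return out, idx

# Usage with a dummy batch of feature vectors.
y, assigned = HardGatedMoE()(torch.randn(8, 32))
print(y.shape, assigned.tolist())
```

Hard assignment lets each expert specialise on inputs within one error regime, which is the property the abstract credits for the reduced error and improved robustness.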
Chair: Danny Alexander