UCL Centre for Medical Image Computing


Introduction to NiftyNet

07 June 2017, 1:00 pm–2:00 pm

Event Information

Location: UCL Bloomsbury, Roberts 106, Roberts Building.

Presenter: Wenqi Li

Title: On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task

Abstract: Deep convolutional neural networks are powerful tools for learning visual representations from images. However, designing efficient deep architectures to analyse volumetric medical images remains challenging. This work investigates efficient and flexible elements of modern convolutional networks such as dilated convolution and residual connection. With these essential building blocks, we propose a high-resolution, compact convolutional network for volumetric image segmentation. To illustrate its efficiency of learning 3D representation from large-scale image data, the proposed network is validated with the challenging task of parcellating 155 neuroanatomical structures from brain MR images. Our experiments show that the proposed network architecture compares favourably with state-of-the-art volumetric segmentation networks while being an order of magnitude more compact. We consider the brain parcellation task as a pretext task for volumetric image segmentation; our trained network potentially provides a good starting point for transfer learning. Additionally, we show the feasibility of voxel-level uncertainty estimation using a sampling approximation through dropout.
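The key building block named above, dilated convolution, enlarges a filter's receptive field without adding parameters by inserting gaps between kernel taps. The following is a minimal 1D illustrative sketch of that idea only, not NiftyNet's 3D implementation; the function name and signature are hypothetical:

```python
def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1D convolution with a dilation factor.

    A dilation of d samples the input every d positions, so a
    k-tap kernel covers a span of (k - 1) * d + 1 input values
    while still using only k learned parameters.
    Illustrative sketch only -- not NiftyNet's implementation.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    return [
        sum(x[i + j * dilation] * kernel[j] for j in range(k))
        for i in range(len(x) - span + 1)
    ]

x = [float(v) for v in range(8)]           # [0.0, 1.0, ..., 7.0]
k = [1.0, 1.0, 1.0]                        # simple 3-tap kernel
print(dilated_conv1d(x, k, dilation=1))    # receptive field 3
print(dilated_conv1d(x, k, dilation=2))    # receptive field 5, same 3 weights
```

Stacking such layers with exponentially growing dilation rates is what lets a compact network see high-resolution context, which is the efficiency argument the abstract makes for 3D segmentation.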

Presenter: Lucas Fidon

Title: Scalable multimodal convolutional networks for brain tumour segmentation

Abstract: Brain tumour segmentation plays a key role in computer-assisted surgery. Deep neural networks have increased the accuracy of automatic segmentation significantly; however, these models tend to generalise poorly to imaging modalities other than those for which they have been designed, thereby limiting their applications. For example, a network architecture initially designed for brain parcellation of monomodal T1 MRI cannot be easily translated into an efficient multimodal network that jointly utilises T1, T1c, Flair and T2 MRI for brain tumour segmentation. To tackle this problem, we propose a novel scalable multimodal deep learning architecture that uses new nested structures that explicitly leverage deep features within or across modalities. This aims at making the early layers of the architecture structured and sparse so that the final architecture becomes scalable to the number of modalities. We evaluate the performance of the scalable architecture for brain tumour segmentation and give evidence of its regularisation effect compared to the conventional concatenation approach.
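The scalability property the abstract contrasts with concatenation can be illustrated in miniature: if per-modality branches are fused by a symmetric operation such as voxel-wise averaging, the fused feature map has a fixed size regardless of how many modalities are supplied, whereas channel concatenation grows with each added input. This is a toy sketch of that property only; the function names are hypothetical and it does not reproduce the paper's nested structures:

```python
def modality_branch(image, weight):
    # Hypothetical per-modality feature extractor: a single
    # scalar weight stands in for a modality-specific subnetwork.
    return [v * weight for v in image]

def fuse_across_modalities(branches):
    # Voxel-wise averaging: the fused map keeps the same length
    # no matter how many modality branches are passed in, unlike
    # concatenation, whose channel width grows per modality.
    n = len(branches)
    return [sum(vals) / n for vals in zip(*branches)]

t1    = [1.0, 2.0, 3.0]
t1c   = [2.0, 4.0, 6.0]
flair = [3.0, 6.0, 9.0]

two   = fuse_across_modalities([modality_branch(m, 1.0) for m in (t1, t1c)])
three = fuse_across_modalities([modality_branch(m, 1.0) for m in (t1, t1c, flair)])
print(two)    # [1.5, 3.0, 4.5]
print(three)  # [2.0, 4.0, 6.0]
assert len(two) == len(three)  # output size independent of modality count
```

A fixed-size fusion like this is what allows the later layers of such an architecture to stay unchanged as modalities (T1, T1c, Flair, T2) are added or removed.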

Presenter: NiftyNet team

Title: NiftyNet - An open-source library for convolutional networks in medical image analysis

Abstract: NiftyNet is about to be launched, and the project is looking for people to join its core development team as contributors.