Deep Multi-Modal Encoder-Decoder Networks for Shape Constrained Segmentation and Joint Representation Learning

Nassim Bouteldja, Dorit Merhof, Jan Ehrhardt, Mattias P. Heinrich

Abstract

Deep learning approaches have been very successful in segmenting cardiac structures from CT and MR volumes. Despite continuous progress, automated segmentation of these structures remains challenging due to highly complex regional characteristics (e.g. homogeneous gray-level transitions) and large anatomical shape variability. To cope with these challenges, the incorporation of shape priors into neural networks for robust segmentation is an important area of current research. We propose a novel approach that leverages shared information across imaging modalities and shape segmentations within a unified multi-modal encoder-decoder network. This end-to-end trainable architecture improves robustness through strong shape constraints and enables further applications, such as shape interpolation, thanks to smooth transitions in the learned shape space. Although no skip connections are used and all shape information is encoded in a low-dimensional representation, our approach achieves high-accuracy segmentation and consistent shape interpolation results on the multi-modal whole heart segmentation dataset.
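
To make the described architecture concrete, below is a minimal, hypothetical PyTorch sketch of such a multi-modal encoder-decoder: modality-specific encoders compress each input volume into a shared low-dimensional latent code, and a single shared decoder reconstructs the segmentation from that code alone, with no skip connections. All layer sizes, the latent dimension, the number of classes, and the variable names are illustrative assumptions, not the authors' configuration.

```python
# A minimal sketch (not the authors' code) of a multi-modal encoder-decoder
# with a shared low-dimensional latent space and no skip connections.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Modality-specific encoder: maps a volume to a low-dim latent code."""
    def __init__(self, in_channels=1, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # global pooling -> bottleneck
        )
        self.fc = nn.Linear(64, latent_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class Decoder(nn.Module):
    """Shared decoder: reconstructs a segmentation purely from the code, so
    all shape information must pass through the latent representation."""
    def __init__(self, latent_dim=64, n_classes=8, base=4):
        super().__init__()
        self.base = base
        self.fc = nn.Linear(latent_dim, 64 * base ** 3)
        self.deconv = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, self.base, self.base, self.base)
        return self.deconv(h)                   # per-class segmentation logits

# One encoder per modality (CT, MR) plus one for one-hot label maps;
# all of them feed the single shared decoder.
enc_ct, enc_mr, enc_seg = Encoder(), Encoder(), Encoder(in_channels=8)
dec = Decoder()

ct = torch.randn(2, 1, 32, 32, 32)              # toy CT batch
logits = dec(enc_ct(ct))                        # segmentation via latent code

# Smooth shape interpolation: linearly blend two latent codes and decode.
z0, z1 = enc_ct(ct[:1]), enc_ct(ct[1:])
for alpha in (0.0, 0.5, 1.0):
    shape = dec((1 - alpha) * z0 + alpha * z1)
```

Because the decoder sees only the latent code, blending two codes (last lines above) yields the kind of smooth shape transitions in the learned shape space that the abstract describes.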

Original language: English
Title of host publication: Bildverarbeitung für die Medizin 2019
Number of pages: 6
Place of publication: Lübeck
Publisher: Springer Vieweg, Wiesbaden
Publication date: 2019
Pages: 23-28
ISBN (Print): 978-3-658-25325-7
ISBN (Electronic): 978-3-658-25326-4
Publication status: Published - 2019
Event: Workshop on Bildverarbeitung für die Medizin 2019 - Lübeck, Germany
Duration: 17.03.2019 - 19.03.2019
Conference number: 224899
