Unsupervised learning of multimodal image registration using domain adaptation with projected Earth Mover's discrepancies

Mattias P. Heinrich, Lasse Hansen

Abstract

Multimodal image registration is a very challenging problem for deep learning approaches. Most current work focuses either on supervised learning, which requires labelled training scans and may yield models biased towards the annotated structures, or on unsupervised approaches, which rely on hand-crafted similarity metrics and may therefore not outperform their classical, non-trained counterparts. We believe that unsupervised domain adaptation can help overcome the current limitations for multimodal registration, where good metrics are hard to define. Domain adaptation has so far been mainly limited to classification problems. We propose the first use of unsupervised domain adaptation for discrete multimodal registration. Based on a source domain for which quantised displacement labels are available as supervision, we transfer the output distribution of the network to better resemble the target domain (other modality) using classifier discrepancies. To improve upon the sliced Wasserstein metric for 2D histograms, we present a novel approximation that projects predictions into 1D and computes the L1 distance of their cumulative sums. Our proof-of-concept demonstrates the applicability of domain transfer from mono- to multimodal (multi-contrast) 2D registration of canine MRI scans and improves the registration accuracy from 33% (using sliced Wasserstein) to 44%.
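The abstract describes the core idea of the projected Earth Mover's discrepancy: reduce the 2D displacement-probability histograms to 1D projections and compare their cumulative sums with an L1 norm. Below is a minimal PyTorch sketch of that idea. It is not the authors' released code; projecting the quantised 2D displacement distribution onto its two axes (i.e. its marginals) and the function name `projected_emd` are illustrative assumptions.

```python
# Minimal sketch (assumed formulation, not the authors' implementation) of a
# projected 1D Earth Mover's discrepancy between two softmax predictions over
# a quantised 2D displacement grid.
import torch


def projected_emd(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """L1 distance between cumulative sums of 1D projections of 2D histograms.

    p, q: displacement probability maps of shape (B, H, W),
          each summing to 1 per batch item.
    """
    loss = p.new_zeros(())
    for dim in (1, 2):  # project onto the vertical and horizontal axes
        p_1d = p.sum(dim=dim)  # (B, W) or (B, H) marginal histogram
        q_1d = q.sum(dim=dim)
        cdf_p = torch.cumsum(p_1d, dim=1)
        cdf_q = torch.cumsum(q_1d, dim=1)
        # the 1D EMD between two histograms equals the L1 distance of their CDFs
        loss = loss + (cdf_p - cdf_q).abs().sum(dim=1).mean()
    return loss / 2


if __name__ == "__main__":
    # toy usage: two random 11x11 displacement probability maps
    pa = torch.softmax(torch.randn(2, 11 * 11), dim=1).reshape(2, 11, 11)
    pb = torch.softmax(torch.randn(2, 11 * 11), dim=1).reshape(2, 11, 11)
    print(projected_emd(pa, pb).item())
```

Compared with a sliced Wasserstein estimate over many random projection directions, such an axis-aligned projection needs no sampling and keeps the loss fully differentiable for use as a classifier-discrepancy term.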
Original language: English
Title of host publication: International Conference on Medical Imaging with Deep Learning
Publication date: 01.07.2020
Publication status: Published - 01.07.2020

Research Areas and Centers

  • Centers: Center for Artificial Intelligence Luebeck (ZKIL)

