Semantically guided large deformation estimation with deep networks

In Young Ha*, Matthias Wilms, Mattias Heinrich

*Corresponding author for this work

Abstract

Deformable image registration is still a challenge when the considered images have strong variations in appearance and large initial misalignment. A huge performance gap currently remains for fast-moving regions in videos or strong deformations of natural objects. We present a new semantically guided and two-step deep deformation network that is particularly well suited for the estimation of large deformations. We combine a U-Net architecture that is weakly supervised with segmentation information to extract semantically meaningful features with multiple stages of nonrigid spatial transformer networks parameterized with low-dimensional B-spline deformations. Combining alignment loss and semantic loss functions together with a regularization penalty to obtain smooth and plausible deformations, we achieve superior results in terms of alignment quality compared to previous approaches that have only considered a label-driven alignment loss. Our network model advances the state of the art for inter-subject face part alignment and motion tracking in medical cardiac magnetic resonance imaging (MRI) sequences in comparison to the FlowNet and Label-Reg, two recent deep-learning registration frameworks. The models are compact, very fast in inference, and demonstrate clear potential for a variety of challenging tracking and/or alignment tasks in computer vision and medical image analysis.
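The abstract combines an image alignment loss, a semantic (segmentation) loss, and a smoothness regularization penalty. As a rough illustration of how such a composite objective can be assembled, the sketch below uses MSE for alignment, a soft Dice term for the semantic loss, and squared spatial gradients of the displacement field as the regularizer. The function name, weights, and choice of individual terms are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def composite_registration_loss(warped_img, fixed_img, warped_seg, fixed_seg,
                                disp, w_sem=1.0, w_reg=0.01):
    """Illustrative composite loss: alignment + semantic + regularization.

    warped_img/fixed_img: 2-D intensity images after/for warping.
    warped_seg/fixed_seg: soft segmentation maps in [0, 1].
    disp: displacement field of shape (H, W, 2).
    Weights w_sem and w_reg are assumed values for illustration.
    """
    # Alignment loss: mean squared intensity difference.
    align = np.mean((warped_img - fixed_img) ** 2)

    # Semantic loss: 1 - soft Dice overlap of the segmentations.
    inter = np.sum(warped_seg * fixed_seg)
    dice = 2.0 * inter / (np.sum(warped_seg) + np.sum(fixed_seg) + 1e-8)
    sem = 1.0 - dice

    # Regularization: penalize spatial gradients of the displacement
    # field to encourage smooth, plausible deformations.
    gx = np.diff(disp, axis=0)
    gy = np.diff(disp, axis=1)
    reg = np.mean(gx ** 2) + np.mean(gy ** 2)

    return align + w_sem * sem + w_reg * reg
```

For a perfectly aligned pair with zero displacement the loss is (numerically) zero, and it grows with intensity mismatch, label disagreement, or a rough deformation field.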

Original language: English
Article number: 1392
Journal: Sensors (Switzerland)
Volume: 20
Issue number: 5
ISSN: 1424-8220
DOIs
Publication status: Published - 01.03.2020

