Semantically guided large deformation estimation with deep networks

Abstract
Deformable image registration remains a challenge when the considered images exhibit strong variations in appearance and large initial misalignment. A substantial performance gap currently remains for fast-moving regions in videos and for strong deformations of natural objects. We present a new semantically guided, two-step deep deformation network that is particularly well suited for estimating large deformations. We combine a U-Net architecture, weakly supervised with segmentation information to extract semantically meaningful features, with multiple stages of nonrigid spatial transformer networks parameterized by low-dimensional B-spline deformations. By combining alignment and semantic loss functions with a regularization penalty that yields smooth and plausible deformations, we achieve superior alignment quality compared to previous approaches that considered only a label-driven alignment loss. Our network model advances the state of the art for inter-subject face part alignment and motion tracking in cardiac magnetic resonance imaging (MRI) sequences in comparison to FlowNet and Label-Reg, two recent deep-learning registration frameworks. The models are compact, very fast at inference, and show clear potential for a variety of challenging tracking and/or alignment tasks in computer vision and medical image analysis.
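The loss design described in the abstract — an alignment term plus a semantic (segmentation-overlap) term plus a regularization penalty, evaluated on a warp driven by a low-dimensional control grid — can be sketched roughly as follows. This is a minimal NumPy/SciPy illustration, not the paper's implementation: cubic upsampling of a coarse control grid stands in for the B-spline-parameterized spatial transformer, MSE for the alignment loss, and a soft Dice term for the semantic loss. All function names and the weights `mu` and `lam` are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def dense_field(ctrl, shape):
    """Upsample a coarse (2, gh, gw) grid of control-point displacements to a
    dense per-pixel field; cubic interpolation stands in for B-splines here."""
    factors = (shape[0] / ctrl.shape[1], shape[1] / ctrl.shape[2])
    return np.stack([zoom(c, factors, order=3) for c in ctrl])

def warp(img, field):
    """Warp an image by a dense displacement field (linear interpolation)."""
    gy, gx = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]),
                         indexing="ij")
    return map_coordinates(img, [gy + field[0], gx + field[1]],
                           order=1, mode="nearest")

def combined_loss(moving, fixed, moving_seg, fixed_seg, ctrl, mu=1.0, lam=0.01):
    field = dense_field(ctrl, fixed.shape)
    warped, warped_seg = warp(moving, field), warp(moving_seg, field)
    # Alignment loss: intensity mismatch after warping (MSE stand-in).
    align = np.mean((warped - fixed) ** 2)
    # Semantic loss: soft Dice overlap of the warped and fixed segmentations.
    inter = np.sum(warped_seg * fixed_seg)
    dice = 1.0 - 2.0 * inter / (np.sum(warped_seg) + np.sum(fixed_seg) + 1e-8)
    # Regularization penalty: squared spatial gradients of the field.
    smooth = sum(np.mean(g ** 2) for comp in field for g in np.gradient(comp))
    return align + mu * dice + lam * smooth
```

In the paper, the control-grid displacements are predicted by the cascaded spatial transformer stages and all terms are optimized jointly during training; here the weights are fixed placeholders only.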
| Original language | English |
|---|---|
| Article number | 1392 |
| Journal | Sensors (Switzerland) |
| Volume | 20 |
| Issue number | 5 |
| ISSN | 1424-8220 |
| DOIs | |
| Publication status | Published - 01.03.2020 |
Funding

This research was funded by the German Research Foundation (DFG), grant number HE7364/1-2.
UN SDGs

This output contributes to the following Sustainable Development Goal(s):

- SDG 3 – Good Health and Well-being
- SDG 9 – Industry, Innovation and Infrastructure