Training CNNs for Image Registration from Few Samples with Model-based Data Augmentation

Abstract

Convolutional neural networks (CNNs) have been successfully used for fast and accurate estimation of dense correspondences between images in computer vision applications. However, much of their success rests on the availability of large training datasets with dense ground-truth correspondences, which are rarely available in medical applications. In this paper, we therefore address the problem of learning CNNs from few training samples for medical image registration. Our contributions are threefold: (1) we present a novel approach for learning highly expressive appearance models from few training samples, (2) we show that this approach can be used to synthesize large amounts of realistic ground-truth training data for CNN-based medical image registration, and (3) we adapt the FlowNet architecture for CNN-based optical flow estimation to the medical image registration problem. This pipeline is applied to two medical data sets with fewer than 40 training images. We show that CNNs learned from the proposed generative model outperform those trained on random deformations or on displacement fields estimated via classical image registration.
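The abstract does not spell out the generative model, but the core idea of model-based augmentation — fitting a statistical appearance model to a handful of images and sampling new, realistic variants from it — can be illustrated with a minimal PCA sketch. This is an assumption-laden simplification for illustration, not the authors' actual model; all function names and shapes below are hypothetical.

```python
import numpy as np

def build_pca_model(images):
    """Fit a simple PCA appearance model to a small image set.
    images: (n_samples, n_pixels) array of flattened training images."""
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data; cheap when only a few samples exist.
    _, singular_values, components = np.linalg.svd(centered, full_matrices=False)
    n = images.shape[0]
    std = singular_values / np.sqrt(max(n - 1, 1))  # per-mode std. dev.
    return mean, components, std

def synthesize(mean, components, std, n_new, rng, scale=1.0):
    """Draw new appearance samples by sampling PCA coefficients from a
    zero-mean Gaussian whose variances come from the fitted model."""
    coeffs = rng.standard_normal((n_new, std.shape[0])) * std * scale
    return mean + coeffs @ components

rng = np.random.default_rng(0)
few_images = rng.random((10, 64))   # stand-in for <40 flattened training images
mean, comps, std = build_pca_model(few_images)
synthetic = synthesize(mean, comps, std, n_new=1000, rng=rng)
```

In the paper's pipeline the synthetic samples (paired with the deformations that generated them) would then serve as ground-truth training data for the registration CNN.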
Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention − MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, September 11-13, 2017, Proceedings, Part I
Editors: Maxime Descoteaux, Lena Maier-Hein, Alfred Franz, Pierre Jannin, D. Louis Collins, Simon Duchesne
Number of pages: 9
Publisher: Springer International Publishing
Publication date: 04.09.2017
Pages: 223-231
ISBN (Print): 978-3-319-66182-7
ISBN (Electronic): 978-3-319-66181-0
Publication status: Published - 04.09.2017
Event: 20th International Conference on Medical Image Computing and Computer-Assisted Intervention - Quebec, Canada
Duration: 11.09.2017 - 13.09.2017
Conference number: 197559

