Memory-efficient GAN-based domain translation of high resolution 3D medical images

Hristina Uzunova*, Jan Ehrhardt, Heinz Handels

*Corresponding author for this work

Abstract

Generative adversarial networks (GANs) are currently rarely applied to large 3D medical images because of their immense computational demand. The present work proposes a multi-scale patch-based GAN approach that establishes unpaired domain translation by generating high-resolution 3D medical image volumes in a memory-efficient way. The key idea enabling memory-efficient image generation is to first generate a low-resolution version of the image, followed by the generation of patches of constant size but successively growing resolution. To avoid patch artifacts and to incorporate global information, the patch generation is conditioned on patches from previous resolution scales. These multi-scale GANs are trained to generate realistic-looking images from image sketches in order to perform an unpaired domain translation. This makes it possible to preserve the topology of the test data while generating the appearance of the training-domain data. The domain translation scenarios are evaluated on brain MRIs of size 155 × 240 × 240 and thorax CTs of sizes up to 512³. Compared to common patch-based approaches, the multi-resolution scheme yields better image quality and prevents patch artifacts. It also ensures a constant GPU memory demand independent of the image size, allowing the generation of arbitrarily large images.
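To illustrate the memory-saving generation scheme described in the abstract (a low-resolution version of the whole volume is generated first, then constant-size patches are refined scale by scale, each conditioned on the corresponding patch from the previous scale), the following minimal PyTorch sketch may help. The networks lr_generator and refiner, the patch size, the number of scales, and the latent input are illustrative assumptions, not the authors' implementation; the sketch-based conditioning used for the actual domain translation is omitted for brevity.

```python
# Minimal sketch of memory-efficient multi-scale patch generation for 3D volumes.
# "lr_generator" and "refiner" are hypothetical placeholder networks, not the
# authors' architecture; shapes, scale counts, and the latent input are illustrative.
import torch
import torch.nn.functional as F


def generate_multiscale(lr_generator, refiner, num_scales=3, patch_size=32,
                        latent_dim=64, device="cpu"):
    """Generate a high-resolution 3D volume with constant per-step GPU memory."""
    # Scale 0: generate a coarse version of the entire volume in one pass.
    z = torch.randn(1, latent_dim, device=device)
    volume = lr_generator(z)  # expected shape: (1, 1, D, H, W)

    for scale in range(1, num_scales):
        # Upsample the previous result; it provides global context for each patch.
        coarse = F.interpolate(volume, scale_factor=2, mode="trilinear",
                               align_corners=False)
        refined = torch.zeros_like(coarse)
        _, _, depth, height, width = coarse.shape
        p = patch_size
        # Refine constant-size patches one at a time, so memory use per step does
        # not grow with the overall image size. Boundary patches may be smaller
        # than p; a real implementation would pad or overlap them.
        for d in range(0, depth, p):
            for h in range(0, height, p):
                for w in range(0, width, p):
                    cond = coarse[:, :, d:d + p, h:h + p, w:w + p]
                    refined[:, :, d:d + p, h:h + p, w:w + p] = refiner(cond, scale)
        volume = refined

    return volume
```

At inference time, each refinement step could run under torch.no_grad(), and the assembled volume could be kept in CPU memory while only the current patch is moved to the GPU, which is what keeps the GPU footprint independent of the final image size.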

Original language: English
Article number: 101801
Journal: Computerized Medical Imaging and Graphics
Volume: 86
ISSN: 0895-6111
DOIs
Publication status: Published - 12/2020
