Autoencoders and variational autoencoders in medical image analysis

Jan Ehrhardt, Matthias Wilms


This chapter introduces two popular methods for unsupervised representation learning using neural networks, namely autoencoders and variational autoencoders. Both methods rely on a bottleneck encoder–decoder network architecture where the encoder maps an input to a low-dimensional latent space representation from which the decoder aims to reconstruct the input as accurately as possible. This latent space representation can also be used to systematically analyze or manipulate certain properties of the input data, which makes these methods a key tool for biomedical image synthesis tasks such as image reconstruction, data augmentation, or modality transfer. While autoencoders and variational autoencoders share the same general idea, they differ significantly in their theoretical foundations and capabilities. Classical autoencoders are purely deterministic, and their training usually focuses solely on minimizing the data reconstruction error. Variational autoencoders are deeply rooted in Bayesian statistics and aim to learn a rich probabilistic model that explains the data being analyzed. This chapter outlines the theoretical foundations of both methods, discusses their advantages and practical challenges, outlines some of their various extensions, presents selected example applications, and concludes with a concise discussion of directions for future research in this area.
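The bottleneck principle described above can be illustrated with a minimal sketch: an encoder compresses the data into a low-dimensional latent code, a decoder maps it back, and training minimizes the reconstruction error. The toy linear autoencoder below (a simplified stand-in for the deep networks discussed in the chapter; the data, dimensions, and learning rate are all hypothetical choices for illustration) is trained by plain gradient descent on the mean squared reconstruction error.

```python
import numpy as np

# Hypothetical toy data: 100 samples with 8 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
d = 2  # bottleneck (latent) dimension, chosen for illustration

# Linear encoder and decoder weights (a deep autoencoder would stack
# nonlinear layers instead of a single matrix on each side).
W_enc = rng.normal(scale=0.1, size=(8, d))
W_dec = rng.normal(scale=0.1, size=(d, 8))
lr = 0.01  # assumed learning rate

def mse(X, W_enc, W_dec):
    Z = X @ W_enc        # encode: project input into the latent space
    X_hat = Z @ W_dec    # decode: reconstruct the input from the code
    return np.mean((X - X_hat) ** 2)

losses = [mse(X, W_enc, W_dec)]
for _ in range(200):
    Z = X @ W_enc
    err = Z @ W_dec - X                      # reconstruction residual
    # Gradients of the reconstruction error w.r.t. both weight matrices
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
    losses.append(mse(X, W_enc, W_dec))

# Training drives the reconstruction error down; the learned latent
# codes Z are the low-dimensional representation of the data.
```

A variational autoencoder would replace the deterministic code `Z` with the parameters of a latent distribution and add a regularization term to this loss; the reconstruction objective itself is shared by both methods.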

Original language: English
Title of host publication: Biomedical Image Synthesis and Simulation: Methods and Applications
Number of pages: 34
Publisher: Elsevier B.V.
Publication date: 2022
ISBN (Print): 9780128243503
ISBN (Electronic): 9780128243497
Publication status: Published - 2022

Research Areas and Centers

  • Centers: Center for Artificial Intelligence Luebeck (ZKIL)


