Simultaneous Segmentation and Motion Estimation in 4D-CT Data Using a Variational Approach

Jan Ehrhardt, Alexander Schmidt-Richberg, Heinz Handels


Spatiotemporal image data sets, such as 4D CT or dynamic MRI, open up the possibility of estimating respiration-induced tumor and organ motion and of generating four-dimensional models that describe the temporal change in position and shape of structures of interest. However, two main problems arise: the structures of interest have to be segmented in the 4D data set, and the organ motion has to be estimated in the temporal image sequence. This paper presents a variational approach for simultaneous segmentation and registration applied to temporal image sequences. The proposed method assumes a known segmentation in one frame and then recovers the nonlinear registration and the segmentation in the other frames by minimizing a cost function that combines intensity-based registration, level-set segmentation, and prior shape and intensity knowledge. The purpose of the presented method is to estimate respiration-induced organ motion in spatiotemporal CT image sequences and to simultaneously segment a structure of interest. A validation of the combined registration and segmentation approach is presented using low-dose 4D CT data sets of the liver. The results demonstrate that the simultaneous solution of both problems improves segmentation performance over a sequential application of the registration and segmentation steps.
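The core idea of the abstract, minimizing one joint energy with a registration term and a segmentation term rather than solving the two problems sequentially, can be illustrated with a toy example. The sketch below is not the authors' method (which uses nonlinear registration and level sets on 4D CT data); it is a hypothetical 1D analogue in which a translation and an intensity threshold are recovered jointly by minimizing a combined cost.

```python
import numpy as np

# Hypothetical 1D illustration of a joint registration + segmentation energy.
# A "reference frame" contains an object at a known position (its segmentation
# is assumed known); the "target frame" shows the same object shifted. We
# jointly search for a translation t (registration) and an intensity
# threshold c (a crude stand-in for level-set segmentation).

x = np.linspace(0.0, 10.0, 200)
ref_center = 4.0
tgt = np.exp(-((x - 5.5) ** 2))          # target frame: object shifted by 1.5

def energy(t, c):
    # Registration term: sum-of-squared intensity differences after
    # translating the reference object by t.
    warped = np.exp(-((x - ref_center - t) ** 2))
    e_reg = np.sum((warped - tgt) ** 2)
    # Segmentation term (Chan-Vese-like region homogeneity): variance of the
    # intensities inside and outside the thresholded region.
    inside = tgt > c
    e_seg = (np.sum((tgt[inside] - tgt[inside].mean()) ** 2)
             + np.sum((tgt[~inside] - tgt[~inside].mean()) ** 2))
    return e_reg + e_seg

# A coarse grid search stands in for the variational (gradient-based)
# minimization used in the paper.
ts = np.linspace(0.0, 3.0, 61)
cs = np.linspace(0.1, 0.9, 17)
best = min((energy(t, c), t, c) for t in ts for c in cs)
best_t = best[1]   # recovered shift, close to the true value 1.5
```

The two terms share one minimizer here, so the joint search recovers the motion and a segmentation threshold in a single optimization; the paper's point is that coupling the terms helps precisely when neither problem is trivial on its own.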

Original language: English
Title of host publication: Medical Imaging 2008: Image Processing
Number of pages: 10
Publication date: 19.05.2008
Pages: 691437-1 - 691437-10
ISBN (Print): 978-081947098-0
Publication status: Published - 19.05.2008
Event: Medical Imaging 2008 - Visualization, Image-Guided Procedures, and Modeling - San Diego, United States
Duration: 16.02.2008 - 21.02.2008
Conference number: 72207

