Model-based sparse-to-dense image registration for realtime respiratory motion estimation in image-guided interventions

In Young Ha*, Matthias Wilms, Heinz Handels, Mattias P. Heinrich

*Corresponding author for this work

Abstract

Objective: Intra-interventional respiratory motion estimation is becoming a vital component of modern radiation therapy and high-intensity focused ultrasound systems. Treatment quality could benefit tremendously from more accurate dose delivery guided by real-time motion tracking based on magnetic resonance (MR) or ultrasound (US) imaging. However, current practice often relies on indirect measurements of external breathing indicators, which have inherently limited accuracy. In this work, we present a new approach that is applicable to challenging real-time-capable imaging modalities such as MR-Linac scanners and 3D-US by employing contrast-invariant feature descriptors.

Methods: We combine GPU-accelerated, image-based real-time tracking of sparsely distributed feature points with a dense patient-specific motion model for regularisation and sparse-to-dense interpolation within a unified optimization framework.

Results: We achieve highly accurate motion predictions with landmark errors of 1 mm for MRI and 2 mm for US, and substantial improvements over classical template-tracking strategies.

Conclusion: Our technique models physiological respiratory motion more realistically and deals particularly well with the sliding of the lungs against the rib cage.

Significance: Our model-based sparse-to-dense image registration approach allows for accurate, real-time respiratory motion tracking in image-guided interventions.
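The abstract only sketches the method at a high level. As a minimal illustration of the sparse-to-dense idea, the following NumPy sketch fits the coefficients of a low-dimensional, patient-specific motion model (here assumed to be PCA modes learned from pre-treatment displacement fields) to displacements measured at sparsely tracked feature points, then reconstructs a dense field from the fitted model. This is a toy under stated assumptions, not the paper's unified optimization framework: the function name sparse_to_dense, the Tikhonov weight reg, and the PCA parameterisation are illustrative choices.

```python
import numpy as np

def sparse_to_dense(mean_field, modes, obs_idx, obs_disp, reg=0.1):
    """Fit motion-model coefficients to sparse displacement observations
    and reconstruct a dense, model-regularised displacement field.

    mean_field : (D,)   flattened mean displacement field
    modes      : (D, K) flattened PCA eigenmodes (one mode per column)
    obs_idx    : (M,)   indices of observed entries (tracked feature points)
    obs_disp   : (M,)   measured displacement components at those entries
    reg        : Tikhonov weight pulling coefficients toward the model mean
    """
    A = modes[obs_idx]                   # model restricted to observed points
    b = obs_disp - mean_field[obs_idx]   # residual w.r.t. the mean field
    n_modes = A.shape[1]
    # Regularised least squares: (A^T A + reg*I) w = A^T b
    w = np.linalg.solve(A.T @ A + reg * np.eye(n_modes), A.T @ b)
    return mean_field + modes @ w        # dense field from fitted coefficients

# Toy usage: 3 modes over a field with 3000 scalar entries,
# observed at 50 randomly chosen feature-point components.
rng = np.random.default_rng(0)
mean_field = rng.normal(size=3000)
modes = np.linalg.qr(rng.normal(size=(3000, 3)))[0]   # orthonormal toy modes
obs_idx = rng.choice(3000, size=50, replace=False)
truth = mean_field + modes @ np.array([2.0, -1.0, 0.5])
dense = sparse_to_dense(mean_field, modes, obs_idx, truth[obs_idx])
```

In a sketch like this, the model term plays the role the abstract assigns to the patient-specific motion model: it confines the interpolated dense field to plausible breathing states, which is also how discontinuities such as lung sliding can be preserved if the training fields contain them.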
Original language: English
Article number: 8360027
Journal: IEEE Transactions on Biomedical Engineering
Volume: 66
Issue number: 2
Pages (from-to): 302-310
Number of pages: 9
ISSN: 0018-9294
Publication status: Published - 01.02.2019
