Towards realtime multimodal fusion for image-guided interventions using self-similarities.

Mattias Paul Heinrich*, Mark Jenkinson, Bartlomiej W. Papiez, Sir Michael Brady, Julia A. Schnabel

*Corresponding author for this work


Image-guided interventions often rely on deformable multimodal registration to align pre-treatment and intra-operative scans. Automated image registration for this task must meet several requirements: a similarity metric that is robust across modalities with different noise distributions and contrast, an efficient optimisation of the cost function to enable fast registration in this time-sensitive application, and insensitivity to the choice of registration parameters to avoid delays in practical clinical use. In this work, we build on the concept of structural image representations for multimodal similarity. Discriminative descriptors based on the "self-similarity context" are extracted densely from the multimodal scans. An efficient quantised representation is derived that enables very fast computation of point-wise distances between descriptors. A symmetric multi-scale discrete optimisation with diffusion regularisation is used to find smooth transformations. The method is evaluated on the registration of 3D ultrasound and MRI brain scans for neurosurgery, and achieves a significantly reduced registration error (2.1 mm on average) compared to commonly used similarity metrics, with computation times of less than 30 seconds per 3D registration.
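The quantised descriptor representation mentioned in the abstract can be illustrated with a minimal sketch: binarise each descriptor channel and pack the bits into a single integer code per voxel, so that a point-wise descriptor distance reduces to a Hamming distance (popcount of an XOR). The thresholding scheme, function names, and 64-bit packing below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def quantise(descriptors):
    """Binarise each descriptor channel against its per-channel median
    and pack the bits into one 64-bit integer per point.
    descriptors: float array of shape (N, C) with C <= 64.
    NOTE: illustrative quantisation; the paper's scheme may differ."""
    bits = descriptors > np.median(descriptors, axis=0, keepdims=True)
    weights = 1 << np.arange(descriptors.shape[1], dtype=np.uint64)
    return (bits.astype(np.uint64) * weights).sum(axis=1)

def hamming_distance(codes_a, codes_b):
    """Point-wise descriptor distance as a popcount of the XOR of two
    uint64 code arrays, replacing a costly sum over descriptor channels."""
    x = np.bitwise_xor(codes_a, codes_b)
    # Popcount via unpacking each 8-byte code into 64 individual bits.
    return np.unpackbits(x.view(np.uint8).reshape(len(x), -1), axis=1).sum(axis=1)
```

Packing descriptors into machine words this way is what makes dense, per-voxel similarity evaluation cheap enough for the near-realtime registration times reported in the abstract.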

Original language: English
Journal: Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention
Issue number: Pt 1
Pages (from-to): 187-194
Number of pages: 8
Publication status: Published - 01.12.2013


