Towards realtime multimodal fusion for image-guided interventions using self-similarities
Heinrich MP, Jenkinson M, Papiez BW, Brady SM, Schnabel JA
Image-guided interventions often rely on deformable multi-modal registration to align pre-treatment and intra-operative scans. Automated registration for this task must meet several requirements: a similarity metric that is robust to the different noise distributions and contrasts of the two modalities, an efficient optimisation of the cost function so that registration is fast enough for this time-sensitive application, and insensitivity to the choice of registration parameters to avoid delays in practical clinical use. In this work, we build on the concept of structural image representations for multi-modal similarity. Discriminative descriptors based on the "self-similarity context" are densely extracted from both scans. An efficient quantised representation is derived that enables very fast computation of point-wise distances between descriptors. A symmetric multi-scale discrete optimisation with diffusion regularisation is used to find smooth transformations. The method is evaluated on the registration of 3D ultrasound and MRI brain scans for neurosurgery, and it achieves a significantly reduced registration error (2.1 mm on average) compared to commonly used similarity metrics, with computation times of under 30 seconds per 3D registration. © 2013 Springer-Verlag.
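The abstract's key efficiency claim is the quantised descriptor representation, which reduces point-wise descriptor distances to bit operations. The sketch below is a minimal illustration under assumptions, not the authors' implementation: it computes a simplified MIND-style self-similarity descriptor (the paper's "self-similarity context" compares pairs of neighbouring patches rather than each neighbour with the centre patch, yielding twelve channels instead of six), packs each channel into a 5-bit thermometer code, and evaluates point-wise distances as the popcount of an XOR. All function names and the 5-bit choice are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def self_similarity_descriptor(vol, sigma=0.5):
    """Simplified MIND-style self-similarity descriptor (illustrative).

    For each voxel, Gaussian-weighted patch SSDs to the six axial
    neighbours are mapped through exp(-d / v), giving six channels
    with values in (0, 1].
    """
    shifts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    dists = []
    for s in shifts:
        diff = (vol - np.roll(vol, s, axis=(0, 1, 2))) ** 2
        dists.append(gaussian_filter(diff, sigma))  # patch SSD via smoothing
    d = np.stack(dists, axis=-1)                    # (X, Y, Z, 6)
    v = d.mean(axis=-1, keepdims=True) + 1e-6       # local noise estimate
    return np.exp(-d / v).reshape(-1, len(shifts))  # (n_voxels, 6)

def quantise(desc, bits_per_channel=5):
    """Pack each channel into a unary (thermometer) code so that the
    Hamming distance between packed codes approximates the L1 distance
    between the real-valued descriptors."""
    levels = np.clip((desc * bits_per_channel).astype(np.uint64),
                     0, bits_per_channel)
    codes = np.zeros(desc.shape[0], dtype=np.uint64)
    for c in range(desc.shape[1]):
        unary = (np.uint64(1) << levels[:, c]) - np.uint64(1)
        codes |= unary << np.uint64(c * bits_per_channel)
    return codes  # 6 channels x 5 bits = 30 bits per voxel

_POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)

def descriptor_distance(codes_a, codes_b):
    """Point-wise distance: popcount of XOR, one table lookup per byte."""
    x = np.bitwise_xor(codes_a, codes_b)
    return _POPCOUNT[x.view(np.uint8)].reshape(x.shape[0], 8).sum(axis=1)
```

Under this packing, even the paper's twelve SSC channels fit comfortably in a single 64-bit word, so each point-wise distance during the discrete optimisation costs one XOR plus a popcount, which is consistent with the abstract's claim of very fast descriptor comparisons.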