Zero-Learning Fast Medical Image Fusion

Clinical applications, such as image-guided surgery and noninvasive diagnosis, rely heavily on multi-modal images. Medical image fusion plays a central role by integrating information from multiple sources into a single, more understandable output. We propose a real-time image fusion method that uses pretrained neural networks to generate a single image containing features from multi-modal sources. The images are merged using a novel strategy based on deep feature maps extracted from a convolutional neural network. These feature maps are compared to generate fusion weights that drive the multi-modal image fusion process. Our method is not limited to the fusion of two images; it can be applied to any number of input sources. We validate the effectiveness of the proposed method on multiple medical fusion categories. The experimental results demonstrate that our technique achieves state-of-the-art performance in visual quality, objective assessment, and runtime efficiency.
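To make the weighting idea concrete, the sketch below illustrates one plausible reading of the pipeline: deep feature maps (here, random placeholders standing in for CNN activations) are compared via a per-pixel activity measure, converted into softmax weights, and used to blend any number of input images. The choice of an L1 channel-activity measure and a softmax normalization are assumptions for illustration, not the paper's exact comparison rule, and the pretrained feature extractor itself is omitted.

```python
import numpy as np

def fusion_weights(feature_maps, softmax_temp=1.0):
    """Derive per-pixel fusion weights by comparing deep feature maps.

    feature_maps: list of (C, H, W) arrays, one per input image
    (placeholders for activations from a pretrained CNN).
    Activity measure (an assumption): per-pixel L1 norm over channels.
    """
    activity = np.stack([np.abs(f).sum(axis=0) for f in feature_maps])  # (N, H, W)
    # Softmax across the N sources, shifted for numerical stability.
    shifted = (activity - activity.max(axis=0, keepdims=True)) / softmax_temp
    e = np.exp(shifted)
    return e / e.sum(axis=0, keepdims=True)

def fuse(images, feature_maps):
    """Weighted blend of N input images; works for any number of sources."""
    w = fusion_weights(feature_maps)          # (N, H, W), sums to 1 per pixel
    return (w * np.stack(images)).sum(axis=0)  # (H, W) fused output
```

Because the weights are computed per pixel across all N sources at once, adding a third or fourth modality requires no change to the fusion rule, consistent with the claim that the method is not limited to two inputs.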

Presented at:
22nd International Conference on Information Fusion (FUSION 2019), Ottawa, Canada, July 2-5, 2019


 Record created 2019-09-02, last modified 2020-10-25

