Procedia Engineering 7 (2010) 280-285
www.elsevier.com/locate/procedia
2010 Symposium on Security Detection and Information Processing
Multimodal medical image fusion based on IHS and PCA
Changtao He a,*, Quanxi Liu b,*, Hongliang Li a, Haixu Wang b
a University of Electronic Science and Technology of China, Chengdu 611731, China
b Southwest Institute of Technical Physics, Chengdu 610054, China
Abstract
The fusion of multimodal brain images is very important for clinical applications. Generally, the PET image indicates brain function but has a low spatial resolution, while the MRI image shows brain tissue anatomy but contains no functional information. Hence, an ideal fused image should contain both rich functional information and rich spatial characteristics, with no spatial or color distortion. A number of approaches have been proposed for fusing multitask or multimodal image information, but every approach has a limited domain of application. Studies indicate that the intensity-hue-saturation (IHS) transform and principal component analysis (PCA) can preserve more spatial features and more of the required functional information without color distortion. The presented algorithm integrates the advantages of both the IHS and PCA fusion methods to improve the quality of the fused image. Visual and quantitative analyses show that the proposed algorithm significantly improves fusion quality compared with other fusion methods, including PCA, Brovey, and the discrete wavelet transform (DWT).
© 2010 Published by Elsevier Ltd.
Keywords: PET image; MRI image; image fusion; IHS; PCA
1. Introduction
In recent years, medical imaging has been widely applied in clinical therapy and treatment, and an increasing number of imaging modalities have become available. Generally speaking, medical imaging is divided into structural and functional systems. Structural images such as MRI (magnetic resonance imaging) and CT (computed tomography) provide high-resolution images with anatomical information; functional images such as PET (positron emission tomography) and SPECT (single-photon emission computed tomography) provide functional information with low spatial resolution [1]. A single image cannot fully satisfy clinical needs, so combining anatomical and functional images to provide more useful information is very necessary.
* Corresponding author. Tel: +86-028-85164092 E-mail address: hect2009@qq.com
1877-7058 © 2010 Published by Elsevier Ltd. doi:10.1016/j.proeng.2010.11.045
Medical image fusion is the process of integrating information from two or more modal images of the same position into a single image that is more informative and better suited to visual perception or the required clinical application. For most clinical applications, the purpose of image fusion is to reduce ambiguity and minimize redundancy in the final fused image while maximizing the information relevant to the specific application [2].
In this paper, the PET images are shown in pseudo-color, and the MRI images are gray-level. To satisfy practical requirements, the fused images must be distinct and free of false details and other redundancy. For this purpose, an integrated IHS and PCA fusion approach is proposed in this study. In the single IHS fusion technique, the low-resolution intensity component of the PET image in IHS space is replaced by the gray-level MRI image with higher spatial resolution [3]. However, the fusion result based on the IHS transform alone is often undesirable and loses much of the detailed information of the original PET image. In this study, we integrate the IHS transform and the PCA method to obtain a satisfactory fusion result that contains both plentiful structural information and higher spatial resolution while also minimizing redundancy.
2. The IHS and PCA fusion models
2.1 The RGB-IHS conversion model
The IHS transform converts a multispectral image with red, green and blue channels (RGB) into independent intensity, hue and saturation components. The intensity represents the brightness of the spectrum, the hue is the property of the spectral wavelength, and the saturation is the purity of the spectrum [1]. This technique may be used for the fusion of multi-sensor images [4].
To better understand the whole fusion process, we first review the RGB-IHS conversion models. There are two commonly used RGB-IHS conversion models. The first conversion system is a linear transformation:
\[
\begin{bmatrix} I \\ V_1 \\ V_2 \end{bmatrix} =
\begin{bmatrix} 1/3 & 1/3 & 1/3 \\ -\sqrt{2}/6 & -\sqrt{2}/6 & 2\sqrt{2}/6 \\ 1/\sqrt{2} & -1/\sqrt{2} & 0 \end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1)
\]
\[
H = \tan^{-1}\!\left(\frac{V_2}{V_1}\right) \qquad (2), \qquad
S = \sqrt{V_1^2 + V_2^2} \qquad (3)
\]
where V1 and V2 are the transitional values in the above equations. The inverse transform is:
\[
\begin{bmatrix} R \\ G \\ B \end{bmatrix} =
\begin{bmatrix} 1 & -1/\sqrt{2} & 1/\sqrt{2} \\ 1 & -1/\sqrt{2} & -1/\sqrt{2} \\ 1 & \sqrt{2} & 0 \end{bmatrix}
\begin{bmatrix} I \\ V_1 \\ V_2 \end{bmatrix} \qquad (4)
\]
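As an illustration, the following NumPy sketch implements the linear conversion of Eqs. (1)-(4); the function names and the assumption that the RGB bands are supplied as a floating-point array are ours, not part of the original method.

```python
import numpy as np

# Forward matrix of Eq. (1): (R, G, B) -> (I, V1, V2)
FWD = np.array([[1/3,            1/3,            1/3],
                [-np.sqrt(2)/6,  -np.sqrt(2)/6,  2*np.sqrt(2)/6],
                [1/np.sqrt(2),   -1/np.sqrt(2),  0.0]])

# Inverse matrix of Eq. (4): (I, V1, V2) -> (R, G, B)
INV = np.array([[1.0, -1/np.sqrt(2),  1/np.sqrt(2)],
                [1.0, -1/np.sqrt(2), -1/np.sqrt(2)],
                [1.0,  np.sqrt(2),    0.0]])

def rgb_to_ihs_linear(rgb):
    """rgb: float array of shape (..., 3) -> (I, H, S) arrays."""
    i, v1, v2 = np.moveaxis(rgb @ FWD.T, -1, 0)
    h = np.arctan2(v2, v1)               # Eq. (2): H = arctan(V2 / V1)
    s = np.sqrt(v1**2 + v2**2)           # Eq. (3): S = sqrt(V1^2 + V2^2)
    return i, h, s

def ihs_to_rgb_linear(i, h, s):
    """Recover V1, V2 from H and S, then apply the inverse matrix of Eq. (4)."""
    v1, v2 = s * np.cos(h), s * np.sin(h)
    return np.stack([i, v1, v2], axis=-1) @ INV.T
```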
Moreover, the triangular spectral model is another commonly used type of IHS transform. It can be expressed as:

I = (R + G + B)/3  (5a)
H = (G - B)/(3I - 3B), S = (I - B)/I, if B < R, G  (5b)
H = (B - R)/(3I - 3R) + 1, S = (I - R)/I, if R < B, G  (5c)
H = (R - G)/(3I - 3G) + 2, S = (I - G)/I, if G < R, B  (5d)
The corresponding inverse IHS transform is as follows:

R = I(1 + 2S - 3SH), G = I(1 - S + 3SH), B = I(1 - S), if B < R, G  (6a)
R = I(1 - S), G = I(1 + 5S - 3SH), B = I(1 - 4S + 3SH), if R < B, G  (6b)
R = I(1 - 7S + 3SH), G = I(1 - S), B = I(1 + 8S - 3SH), if G < R, B  (6c)
In the proposed approach, we adopt the triangular IHS model. The whole IHS fusion process yields a fused and spectrally enhanced image.
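A minimal NumPy sketch of the triangular model of Eqs. (5a)-(6c) is given below; the function names, the assumption that R, G and B are float arrays in [0, 1], and the small epsilon guard against division by zero are ours, not part of the original formulation.

```python
import numpy as np

def rgb_to_ihs_triangular(r, g, b, eps=1e-12):
    """Forward triangular model, Eqs. (5a)-(5d); r, g, b are float arrays in [0, 1]."""
    i = (r + g + b) / 3.0                               # Eq. (5a)
    h = np.zeros_like(i)
    s = np.zeros_like(i)
    b_min = (b <= r) & (b <= g)                         # blue is the smallest band
    r_min = ~b_min & (r <= g)                           # red is the smallest band
    g_min = ~b_min & ~r_min                             # green is the smallest band
    h[b_min] = (g[b_min] - b[b_min]) / (3.0 * (i[b_min] - b[b_min]) + eps)        # Eq. (5b)
    s[b_min] = (i[b_min] - b[b_min]) / (i[b_min] + eps)
    h[r_min] = (b[r_min] - r[r_min]) / (3.0 * (i[r_min] - r[r_min]) + eps) + 1.0  # Eq. (5c)
    s[r_min] = (i[r_min] - r[r_min]) / (i[r_min] + eps)
    h[g_min] = (r[g_min] - g[g_min]) / (3.0 * (i[g_min] - g[g_min]) + eps) + 2.0  # Eq. (5d)
    s[g_min] = (i[g_min] - g[g_min]) / (i[g_min] + eps)
    return i, h, s

def ihs_to_rgb_triangular(i, h, s):
    """Inverse triangular model, Eqs. (6a)-(6c); the hue interval encodes the smallest band."""
    r, g, b = np.empty_like(i), np.empty_like(i), np.empty_like(i)
    c1 = h < 1.0                                        # blue was the smallest band
    c2 = (h >= 1.0) & (h < 2.0)                         # red was the smallest band
    c3 = ~c1 & ~c2                                      # green was the smallest band
    r[c1] = i[c1] * (1 + 2*s[c1] - 3*s[c1]*h[c1])       # Eq. (6a)
    g[c1] = i[c1] * (1 - s[c1] + 3*s[c1]*h[c1])
    b[c1] = i[c1] * (1 - s[c1])
    r[c2] = i[c2] * (1 - s[c2])                         # Eq. (6b)
    g[c2] = i[c2] * (1 + 5*s[c2] - 3*s[c2]*h[c2])
    b[c2] = i[c2] * (1 - 4*s[c2] + 3*s[c2]*h[c2])
    r[c3] = i[c3] * (1 - 7*s[c3] + 3*s[c3]*h[c3])       # Eq. (6c)
    g[c3] = i[c3] * (1 - s[c3])
    b[c3] = i[c3] * (1 + 8*s[c3] - 3*s[c3]*h[c3])
    return r, g, b
```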
2.2 PCA transform fusion approach
The concept and approach are described in detail in references [5,6]; the fundamentals of PCA fusion are as follows:
First, the multispectral image is transformed with the PCA transform: the eigenvalues and corresponding eigenvectors of the correlation matrix between the individual bands of the multispectral image are computed to obtain the principal components [7].
Second, the panchromatic image is matched to the first principal component using histogram matching. Finally, the first principal component of the multispectral image is replaced with the matched panchromatic image, and together with the other principal components it is transformed with the inverse PCA transform to form the fused image.
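A minimal sketch of this generic PCA component-substitution scheme is given below; the function names, the covariance-based eigen-decomposition, and the simple rank-based histogram matching are illustrative assumptions rather than the exact implementation of [5-7].

```python
import numpy as np

def histogram_match(src, ref):
    """Rank-based matching of src's grey-level distribution to that of ref (equal sizes assumed)."""
    order = np.argsort(src.ravel())
    matched = np.empty(src.size, dtype=np.float64)
    matched[order] = np.sort(ref.ravel())
    return matched.reshape(src.shape)

def pca_fusion(ms, pan):
    """Generic PCA component-substitution fusion of a multispectral image 'ms'
    of shape (H, W, bands) with a co-registered panchromatic image 'pan' of shape (H, W)."""
    h, w, bands = ms.shape
    x = ms.reshape(-1, bands).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean
    cov = np.cov(xc, rowvar=False)              # inter-band covariance matrix
    _, eigvec = np.linalg.eigh(cov)
    eigvec = eigvec[:, ::-1]                    # columns sorted by decreasing eigenvalue
    pcs = xc @ eigvec                           # principal components, PC1 first
    pc1 = pcs[:, 0].reshape(h, w)
    # Replace PC1 with the histogram-matched panchromatic image
    pcs[:, 0] = histogram_match(pan.astype(np.float64), pc1).ravel()
    fused = pcs @ eigvec.T + mean               # inverse PCA transform
    return fused.reshape(h, w, bands)
```

Note that in the proposed scheme described in Section 3, the PCA step is applied to the pair formed by the PET intensity component and the matched MRI image rather than to a multispectral stack.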
3. The proposed model
The proposed approach combines the advantages of the IHS and PCA transforms and makes full use of their ability to preserve spatial structure information. However, because the IHS transform introduces spectral distortion, there is a low correlation between the PET intensity image and the fused image. The PCA transform extracts the principal components of the PET intensity image and the MRI image, and their weighted coefficients are selected by calculating the spatial frequency of each image to obtain a new intensity component. Finally, a satisfactory fused image is obtained by the inverse IHS transform, using the new intensity component and the original H and S components of the PET image. This fusion process generates a new high-resolution color image that contains both the spatial detail of the MRI source image and the color detail of the PET source image.
According to Fig.1, the PET image is first transformed into the components of the triangular IHS model. Second, histogram matching is applied to match the histogram of the MRI image to the PET intensity component. Third, the principal components of the new MRI image (called New Pan) and of the original PET intensity component are extracted using the PCA method. Finally, the process is completed by the inverse IHS transform of the new intensity and the original hue and saturation components back into RGB space.
Fig.1 Diagram of the proposed IHS and PCA fusion integrated approach
How to select the two principal components' weighted coefficients after the PCA transform is a critical issue. In this paper, we propose a novel adaptive selection method based on the spatial frequency (SF) of the original PET intensity image and the original MRI image. SF was introduced by Eskicioglu and Fisher [9], and Li et al. used it to measure the clarity of image blocks [8]. For a K x L image f(x, y), it is defined as:
\[
SF = \sqrt{(RF)^2 + (CF)^2} \qquad (7)
\]
where RF and CF are the row frequency
\[
RF = \sqrt{\frac{1}{K \times L} \sum_{x=1}^{K} \sum_{y=2}^{L} \left[ f(x,y) - f(x,y-1) \right]^2 } \qquad (8)
\]
and column frequency
\[
CF = \sqrt{\frac{1}{K \times L} \sum_{y=1}^{L} \sum_{x=2}^{K} \left[ f(x,y) - f(x-1,y) \right]^2 } \qquad (9)
\]
respectively. The selection of the two principal components' weighted coefficients based on SF can be depicted as:
\[
I_{new} = \alpha I_1 + \beta I_2 \qquad (10)
\]
\[
\alpha + \beta = 1 \qquad (11)
\]
where I1 and I2 represent the principal components after PCA, respectively, and α and β are the normalized SF values.
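The sketch below illustrates Eqs. (7)-(11). It assumes that I1 and I2 are the principal components associated with the PET intensity image and the matched MRI image respectively, and that α and β are obtained by normalizing the SF values of those two source images; this is one possible reading of the description above, and the names are illustrative.

```python
import numpy as np

def spatial_frequency(f):
    """Spatial frequency of a K x L image f, Eqs. (7)-(9)."""
    f = f.astype(np.float64)
    rf = np.sqrt(np.sum(np.diff(f, axis=1) ** 2) / f.size)   # row frequency, Eq. (8)
    cf = np.sqrt(np.sum(np.diff(f, axis=0) ** 2) / f.size)   # column frequency, Eq. (9)
    return np.sqrt(rf ** 2 + cf ** 2)                        # Eq. (7)

def new_intensity(i1, i2, pet_intensity, mri):
    """SF-weighted combination of the two principal components, Eqs. (10)-(11).
    i1/i2 are assumed to correspond to the PET intensity and the matched MRI image."""
    sf_pet = spatial_frequency(pet_intensity)
    sf_mri = spatial_frequency(mri)
    alpha = sf_pet / (sf_pet + sf_mri)     # normalized SF values, alpha + beta = 1 (Eq. 11)
    beta = 1.0 - alpha
    return alpha * i1 + beta * i2          # Eq. (10)
```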
4. Experimental results and discussions
The test data consist of color PET images and high-resolution MRI images. The spatial resolutions of the MRI images and PET images are 256 x 256 and 128 x 128 pixels, respectively. The color PET images were registered to the corresponding MRI images. All images were downloaded from the Harvard University site (http://www.med.harvard.edu/AANLIB/home.html). The original images and fusion results are displayed in Figs. 2-4.
Fig.2. Alzheimer's disease PET and MRI images (a and b), Brovey (c), PCA (d), DWT (e), the proposed method (f).
Fig.3. Alzheimer's disease PET and MRI images (a and b), Brovey (c), PCA (d), DWT (e), the proposed method (f).
Fig.4. Alzheimer's disease PET and MRI images (a and b), Brovey (c), PCA (d), DWT (e), the proposed method (f).
It can be observed that the proposed method achieves fairly good spectral quality and better spatial information integration (Figs. 2-4). The integrated method combines the advantages of both the IHS transform and the PCA transform: extracting the principal components minimizes redundancy, and the self-adaptive selection of the weighted coefficients compensates for the spectral distortion introduced by the IHS transform. Visually, the results from the proposed method appear to be the best among all the results. In this paper, we use mutual information (MI) to evaluate the effectiveness of the proposed method; MI has been applied in many areas, including information fusion and image registration. A detailed description can be found in the literature [10]. The comparison results are shown in Table 1.
Table 1. The fusion methods performance measure based on mutual information (MI).
Fusion methods      MI (Fig.2)    MI (Fig.3)    MI (Fig.4)
Brovey              0.6432        0.6465        0.6248
PCA                 2.3654        2.4693        2.4012
DWT                 2.6534        2.7624        2.8098
Proposed method     2.9546        2.9344        2.9586
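For completeness, the sketch below shows one common way to compute such an MI-based fusion metric, namely the sum of the mutual information between the fused image and each source image, estimated from joint grey-level histograms; this is a generic formulation and not necessarily the exact protocol behind Table 1.

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """MI (in bits) between two images, estimated from a joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(fused, src1, src2):
    """Sum of the MI between the fused image and each source image."""
    return mutual_information(fused, src1) + mutual_information(fused, src2)
```

For the colour PET source, the metric can be applied to its intensity component.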
5. Conclusions
A novel medical image fusion method was proposed in which the PET images are shown in pseudo-color. PET produces images with suitable color but low spatial resolution, while MRI provides appropriate spatial resolution with no color information. In this study, we integrated the merits of the IHS transform in preserving spatial information and of the PCA transform in minimizing redundancy, and obtained satisfactory fused images. Compared with the other fusion methods, including the Brovey transform, PCA, and DWT, it is clear that the proposed method performs best in terms of both human visual inspection and the objective evaluation criterion.
References
[1] Daneshvar S, Ghassemian H. MRI and PET image fusion by combining IHS and retina-inspired models. Information Fusion 2010; 11: 114-123.
[2] Goshtasby A, Nikolov S. Image fusion: advances in the state of the art. Information Fusion 2007; 8(2): 114-118.
[3] Tu TM, Su SC, Shyu HC. A new look at IHS-like image fusion methods. Information Fusion 2001; 2: 177-186.
[4] Pohl C, Van Genderen JL. Multisensor image fusion in remote sensing: concepts, methods and applications. Int. J. Remote Sensing 1998; 19: 823-854.
[5] Yesou H, Besnus Y, Rolet Y. Extraction of spectral information from Landsat TM data and merger with SPOT panchromatic imagery-A contribution to the study of Geological structures. ISPRS Journal of Photogrammetry and Remote Sensing 1993; 48(5): 23-36.
[6] Ehlers M. Multisensor image fusion techniques in remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing 1991; 46: 19-30.
[7] Cao W, Li B, Zhang Y. A remote sensing image fusion method based on PCA transform and wavelet packet transform. IEEE Int. Conf. Neural Networks & Signal Processing, Nanjing, 2003; 976-980.
[8] Li S, Wang JT. Combination of images with diverse focuses using the spatial frequency. Information Fusion 2001; 2: 169-176.
[9] Eskicioglu AM, Fisher PS. Image quality measures and their performance. IEEE Transactions on Communications 1995; 43(12): 2959-2965.
[10] Piella G. A general framework for multiresolution image fusion: from pixels to regions. Information Fusion 2003; 4(4): 259-280.