
INTERNATIONAL CONFERENCE ON WATER RESOURCES, COASTAL AND OCEAN ENGINEERING (ICWRCOE 2015)

A Review of Quality Metrics for Fused Image

Jagalingam P. a,*, Arkal Vittal Hegde b

a,b Department of Applied Mechanics and Hydraulics, National Institute of Technology, Surathkal, Karnataka - 575025, India

Abstract

Image fusion is the process of combining a high spatial resolution panchromatic (PAN) image and a spectrally rich multispectral (MS) image into a single image. The fused image obtained is spatially and spectrally enhanced compared to the raw input images. In recent years, many image fusion techniques, such as principal component analysis, intensity hue saturation, the Brovey transform and multi-scale transforms, have been proposed to fuse the PAN and MS images effectively. However, it is important to assess the quality of the fused image before using it for various applications of remote sensing. To evaluate the quality of the fused image, many researchers have proposed different quality metrics in terms of both qualitative and quantitative analyses. Qualitative analysis determines the performance of the fused image by visual comparison between the fused image and the raw input images. Quantitative analysis, on the other hand, determines the performance of the fused image in two variants: with a reference image and without a reference image. When a reference image is available, the performance of the fused image is evaluated using metrics such as root mean square error, mean bias, mutual information, etc. When the reference image is not available, the performance of the fused image is evaluated using metrics such as standard deviation, entropy, etc. The paper reviews the various quality metrics available in the literature for assessing the quality of fused images. © 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of organizing committee of ICWRCOE 2015


Aquatic Procedia 4 (2015) 133 - 142

Keywords: Remote Sensing; Image Fusion; Quantitative; Qualitative.

* Corresponding author. E-mail address: am13f05@nitk.edu.in

2214-241X © 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of organizing committee of ICWRCOE 2015.

doi: 10.1016/j.aqpro.2015.02.019

1. Introduction

Remote sensing is the science and art of obtaining information about an object, area, or phenomenon through the analysis of data acquired by a device that is not in contact with the object, area, or phenomenon under investigation (Lillesand et al., 2004). To capture images of the earth, most optical earth observation satellites, such as Landsat, SPOT, IRS, QuickBird, GeoEye-1 and WorldView-2, carry two types of sensors: panchromatic (PAN) and multispectral (MS). The PAN sensor provides a PAN image at high spatial resolution, and the MS sensor provides an MS image at high spectral resolution but low spatial resolution (Yun Zhang, 2004). Each optical earth observation satellite simultaneously collects both a low resolution MS image and a high resolution PAN image of the same area (Zhang and Mishra, 2013). The benefit of a high spatial resolution PAN image is that small or narrow features such as roads and vehicles can be accurately detected. The advantage of a high spectral resolution MS image is that it allows the detection of slight spectral changes such as vegetation stress and boundary lines (Ranchin and Wald, 2000). Many remote sensing applications need a single image of high spatial and high spectral resolution for effective feature detection, classification, etc. (Ranchin et al., 2003). However, no remote sensing sensor is designed to provide both high spatial and high spectral resolution in a single image, due to the limitations of system tradeoffs and resolution tradeoffs. The system tradeoffs are due to the limitations of record volume (Nikolakopoulos, 2008).

The record size of a high resolution MS image is considerably larger than that of the combined high resolution PAN image and low resolution MS image; such a large data volume strains the limited on-board memory and the data transmission rates from platform to ground. The reason for resolution tradeoffs (Amro et al., 2011) is that all sensors are designed to deliver a specified signal-to-noise ratio. Reflected energy from the target must reach a sufficiently large signal level for the target to be observed by the sensor. The signal level of the reflected energy increases if the signal is collected over a larger instantaneous field of view (IFOV) or over a broader spectral bandwidth. Collecting energy over a larger IFOV reduces the spatial resolution, while collecting it over a larger bandwidth reduces the spectral resolution. Thus, there is a tradeoff between the spatial and spectral resolutions of the sensor. The multispectral sensor records signals in narrow bands over a wide IFOV, while the panchromatic sensor records signals over a narrow IFOV and over a broad range of the spectrum (Andreja, 2006).

Thus, the MS bands have a higher spectral resolution but a lower spatial resolution than the PAN band, which has a higher spatial resolution and a lower spectral resolution. Given this limitation, researchers have developed image fusion techniques that merge the PAN and MS images to provide both high spatial and high spectral resolution in a single image. In the remote sensing literature, image fusion is known as pan-sharpening, because the information of the PAN image is used to sharpen the MS image. From the late 1980s to the present, remote sensing researchers have proposed different pan-sharpening techniques such as PCA (principal component analysis) (Kang et al., 2009), IHS (intensity, hue, saturation) (Kim, 2011) and BT (Brovey transform) (Pohl, 1999). All these pan-sharpening techniques have their own advantages and limitations. With the extensive use of pan-sharpening techniques, assessing the quality of fused images before using them for different remote sensing applications has become increasingly important (Jawak and Luis, 2013). To evaluate the performance of the fused image, researchers have proposed various quality metrics under qualitative and quantitative analysis. This paper reviews the different quality metrics available in the literature for assessing the quality of fused images.
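The pixel-level mechanics of one of these techniques can be sketched briefly. Below is a minimal Brovey-transform sketch in NumPy; the function name and the assumption that the MS bands have already been resampled to the PAN grid are ours, not taken from any of the cited papers.

```python
import numpy as np

def brovey_fuse(pan, ms):
    """Minimal Brovey-transform pan-sharpening sketch.

    pan : 2-D array, high-resolution panchromatic band.
    ms  : 3-D array (bands, H, W), multispectral bands assumed
          already resampled to the PAN pixel grid.

    Each MS band is scaled by the ratio of the PAN intensity to the
    per-pixel mean MS intensity, injecting PAN spatial detail into
    every band.
    """
    pan = pan.astype(np.float64)
    ms = ms.astype(np.float64)
    intensity = ms.mean(axis=0)      # per-pixel mean over the MS bands
    eps = 1e-12                      # guard against division by zero
    ratio = pan / (intensity + eps)
    return ms * ratio                # broadcasts the ratio over bands
```

The spectral distortion this ratio scaling can introduce is exactly what the metrics reviewed later are designed to detect.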

2. Methods for image fusion quality assessment

Assessing the quality of the fused image is important before using it for remote sensing applications, because the pan-sharpening technique injects information from the high spatial resolution image into the high spectral resolution image to obtain a single image of high spatial and spectral resolution (Du et al., 2007). The main aim of a pan-sharpening algorithm is to preserve the relevant information present in the input images while minimizing distortion in the fused image. During the transfer of data from one image to the other, the fusion algorithm may introduce spatial and spectral distortions that affect the quality of the fused image. To understand how well spatial and spectral information are integrated in the fused image, different quality metrics must be applied. Nevertheless, to obtain a fused image of good quality, some general requirements must be met: 1) the PAN and MS images should be acquired at nearby dates; 2) the spectral range of the PAN image should cover the spectral range of all MS bands used for the fusion process; 3) the PAN and MS images must be registered perfectly; 4) the selected fusion algorithm should preserve the relevant information from the input images and must not introduce distortions or artifacts; 5) the fusion algorithm should be efficient and robust enough to mitigate imperfections such as mis-registration (Fonseca et al., 2011).

3. Qualitative analysis

One way to evaluate the fused image is qualitative analysis, i.e., visual analysis. In this analysis, a team of observers compares the fused image with the input images and rates the quality of the fused image using various visual parameters such as spatial detail, geometric pattern, size of objects, color, etc. The advantage of visual analysis is that it is a simple and direct means of examining the quality of the fused image; however, it depends largely on the experience of the observers and on the viewing conditions (Wald et al., 1997a). If the viewer has knowledge of the ground truth, he may evaluate the quality of the fused image more precisely. This method does not depend on numerical models; its technique is visual interpretation. The viewer can grade the quality of the fused image on an absolute and a relative scale (Table 1).

Table 1. Qualitative method for image quality assessment (Fonseca et al., 2011)

Grade | Absolute measure | Relative measure
1 | Excellent | The best in the group
2 | Good | Better than the average level in the group
3 | Fair | Average level in the group
4 | Poor | Lower than the average level in the group
5 | Very poor | The lowest in the group

4. Quantitative analysis

This method is based on mathematical modelling and is well known as objective analysis. It measures the quality of the fused image using a set of pre-defined quality indicators that evaluate the spectral and spatial similarities between the fused image and the raw input images (Yun Zhang, 2008). Quantitative analysis follows two approaches: with a reference image and without a reference image.

4.1. With reference image

When the reference image is available, quality metrics such as root mean square error (RMSE), spectral angle mapper (SAM), relative dimensionless global error (ERGAS), mean bias (MB), percentage fit error (PFE), signal to noise ratio (SNR), peak signal to noise ratio (PSNR), correlation coefficient (CC), mutual information (MI), universal quality index (UQI), structural similarity index measure (SSIM), etc. are used to evaluate the quality of the fused image. The reference image is the MS image at the resolution of the PAN image (Amro et al., 2011). In reality, an ideal reference image is rarely available, but a substitute reference can be obtained by degrading the input MS image to the resolution of the fused image. However, using a degraded image as an artificial reference may produce unexpected results. Table 2 lists the metrics for performance evaluation of the fused image when a reference image is available.
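The degradation step described above, producing a substitute reference by reducing resolution, can be sketched as simple block averaging. The function name and the choice of block averaging as the decomposition filter are illustrative assumptions on our part; the cited works do not prescribe a single filter.

```python
import numpy as np

def degrade(img, factor):
    """Create a lower-resolution version of an image by block averaging.

    img    : array of shape (..., H, W); a trailing H x W image or a
             stack of bands.
    factor : integer downsampling factor; trailing rows/columns that do
             not fill a complete block are cropped.

    Fusing at this reduced scale lets the original MS image serve as
    the reference image for the metrics of Table 2.
    """
    h, w = img.shape[-2], img.shape[-1]
    img = img[..., :h - h % factor, :w - w % factor]
    blocks = img.shape[:-2] + (img.shape[-2] // factor, factor,
                               img.shape[-1] // factor, factor)
    # Average over each factor x factor block.
    return img.reshape(blocks).mean(axis=(-3, -1))
```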

4.2. Without reference image

When the reference image is not available, the quality of the fused image is evaluated using quality metrics such as standard deviation (σ), entropy (He), cross entropy (CE), spatial frequency (SF), fusion mutual information (FMI), fusion quality index (FQI), fusion similarity metric (FSM), etc. Table 3 lists the metrics for performance evaluation of the fused image when a reference image is not available.

Table 2. Metrics for performance evaluation when reference image is available

1. RMSE. Commonly used to compare the difference between the reference and fused images by directly computing the variation in pixel values; the fused image is close to the reference image when RMSE is zero. RMSE is a good indicator of the spectral quality of the fused image. Formula: $\mathrm{RMSE}=\sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I_r(i,j)-I_f(i,j)\bigr)^2}$. Value to look for: lower (close to zero). Reference: (Zoran, 2009).

2. SAM. Computes the spectral angle between the pixel vectors of the reference and fused images, worked out in either degrees or radians, on a pixel-by-pixel basis. A SAM value of zero denotes the absence of spectral distortion. Formula: $\mathrm{SAM}(v,\hat{v})=\arccos\left(\frac{\langle v,\hat{v}\rangle}{\lVert v\rVert\,\lVert\hat{v}\rVert}\right)$. Value to look for: lower (equal to zero). Reference: (Alparone et al., 2007).

3. ERGAS. Computes the quality of the fused image as a normalized average error over each band of the processed image. A higher value of ERGAS indicates distortion in the fused image; a lower value indicates that the fused image is similar to the reference image. Formula: $\mathrm{ERGAS}=100\,\frac{d_h}{d_l}\sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(\frac{\mathrm{RMSE}_k}{\mu_k}\right)^2}$, where $d_h/d_l$ is the ratio of PAN to MS pixel sizes and $\mu_k$ is the mean of band $k$. Value to look for: lower. Reference: (Du et al., 2007).

4. MB. Mean bias is the difference between the means of the reference and fused images, relative to the reference mean; the mean value refers to the grey level of the pixels in an image. The ideal value is zero, indicating that the reference and fused images are similar. Formula: $\mathrm{MB}=\frac{\mu_{I_r}-\mu_{I_f}}{\mu_{I_r}}$. Value to look for: lower (equal to zero). Reference: (Yusuf et al., 2013).

5. PFE. Computes the norm of the difference between the corresponding pixels of the reference and fused images, relative to the norm of the reference image. A value of zero indicates that the reference and fused images are similar; the value increases as the fused image departs from the reference image. Formula: $\mathrm{PFE}=\frac{\mathrm{norm}(I_r-I_f)}{\mathrm{norm}(I_r)}\times 100$. Value to look for: lower (equal to zero). Reference: (Naidu, 2010).

6. SNR. Measures the ratio between information and noise of the fused image; a higher value indicates that the reference and fused images are similar. Formula: $\mathrm{SNR}=10\log_{10}\left(\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}I_f(i,j)^2}{\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I_r(i,j)-I_f(i,j)\bigr)^2}\right)$. Value to look for: higher. Reference: (Alimuddin et al., 2012).

7. PSNR. A widely used metric, computed from the number of grey levels in the image divided by the mean squared difference between the corresponding pixels of the reference and fused images. A higher value indicates that the fused and reference images are more similar, i.e., superior fusion. Formula: $\mathrm{PSNR}=20\log_{10}\left(\frac{L^2}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I_r(i,j)-I_f(i,j)\bigr)^2}\right)$, where $L$ is the number of grey levels. Value to look for: higher. Reference: (Naidu, 2010).

8. CC. Computes the similarity of spectral features between the reference and fused images. The value of CC should be close to +1, indicating that the reference and fused images are the same; variation increases as CC falls below 1. Formula: $\mathrm{CC}=\frac{\sigma_{I_rI_f}}{\sigma_{I_r}\,\sigma_{I_f}}$. Value to look for: higher (close to +1). Reference: (Zhu and Bamler, 2013).

9. MI. Measures the similarity of image intensity between the fused and reference images; a higher value of MI indicates better image quality. Formula: $\mathrm{MI}=\sum_{i,j}h_{I_rI_f}(i,j)\log_2\frac{h_{I_rI_f}(i,j)}{h_{I_r}(i)\,h_{I_f}(j)}$, where $h$ denotes the (joint) normalized histogram. Value to look for: higher. Reference: (Xiao-lin et al., 2010).

10. UQI. Calculates the amount of relevant information transferred from the reference image into the fused image. The range of this metric is -1 to 1; the value 1 indicates that the reference and fused images are similar. Formula: $\mathrm{UQI}=\frac{4\,\sigma_{I_rI_f}\,\mu_{I_r}\,\mu_{I_f}}{(\sigma_{I_r}^2+\sigma_{I_f}^2)(\mu_{I_r}^2+\mu_{I_f}^2)}$. Value to look for: higher (close to +1). Reference: (Alparone et al., 2008).

11. SSIM. Compares the local patterns of pixel intensities between the reference and fused images. The range varies between -1 and 1; the value 1 indicates that the reference and fused images are similar. Formula: $\mathrm{SSIM}=\frac{(2\mu_{I_r}\mu_{I_f}+C_1)(2\sigma_{I_rI_f}+C_2)}{(\mu_{I_r}^2+\mu_{I_f}^2+C_1)(\sigma_{I_r}^2+\sigma_{I_f}^2+C_2)}$. Value to look for: higher (close to +1). Reference: (Wang et al., 2004).

12. QI. Models any distortion as a combination of three different factors: loss of correlation, luminance distortion, and contrast distortion. The range of QI is -1 to 1; the value 1 indicates that the reference and fused images are similar. Formula: $\mathrm{QI}=\frac{\sigma_{I_rI_f}}{\sigma_{I_r}\sigma_{I_f}}\cdot\frac{2\mu_{I_r}\mu_{I_f}}{\mu_{I_r}^2+\mu_{I_f}^2}\cdot\frac{2\sigma_{I_r}\sigma_{I_f}}{\sigma_{I_r}^2+\sigma_{I_f}^2}$. Value to look for: higher (close to +1). Reference: (Wang and Bovik, 2002).
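A few of the reference-based metrics in Table 2 are straightforward to compute. Below is a minimal NumPy sketch of RMSE, PSNR, CC and SAM following their standard definitions; the function names and array conventions (SAM expects images shaped (bands, H, W)) are our assumptions, not code from the reviewed papers.

```python
import numpy as np

def rmse(ref, fused):
    """Root mean square error; zero means the images are identical."""
    return float(np.sqrt(np.mean((ref.astype(np.float64) - fused) ** 2)))

def psnr(ref, fused, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer images.

    Uses the common 20*log10(L/RMSE) form with L = max grey level.
    """
    e = rmse(ref, fused)
    return np.inf if e == 0 else 20.0 * np.log10(max_val / e)

def correlation(ref, fused):
    """Pearson correlation coefficient; +1 indicates identical structure."""
    r = ref.ravel().astype(np.float64)
    f = fused.ravel().astype(np.float64)
    return float(np.corrcoef(r, f)[0, 1])

def sam(ref, fused):
    """Mean spectral angle in radians between per-pixel band vectors.

    ref, fused : arrays shaped (bands, H, W). Zero means no spectral
    distortion.
    """
    r = ref.reshape(ref.shape[0], -1).astype(np.float64)
    f = fused.reshape(fused.shape[0], -1).astype(np.float64)
    num = (r * f).sum(axis=0)
    den = np.linalg.norm(r, axis=0) * np.linalg.norm(f, axis=0) + 1e-12
    return float(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))
```

Note that SAM is invariant to a uniform scaling of the band vector, which is why it isolates spectral (angular) distortion from intensity changes.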

Table 3. Metrics for performance evaluation when reference image is not available

1. SD (σ). Standard deviation is used to measure the contrast in the fused image; a high value of σ indicates that the fused image has high contrast. Formula: $\sigma=\sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I_f(i,j)-\mu_{I_f}\bigr)^2}$. Value to look for: higher. Reference: (Wang and Chang, 2011).

2. He. Entropy is used to measure the information content of a fused image; a high entropy value indicates that the fused image has rich information content. Formula: $H_e=-\sum_{i=0}^{L-1}h_{I_f}(i)\log_2 h_{I_f}(i)$, where $h_{I_f}$ is the normalized histogram. Value to look for: higher. Reference: (Wang and Chang, 2011).

3. CE. Cross entropy is used to compute the similarity of the information contained in the input images and the fused image; a low value indicates that the input images and the fused image contain the same information. Formula: $\mathrm{CE}(I_p,I_m;I_f)=\frac{\mathrm{CE}(I_p;I_f)+\mathrm{CE}(I_m;I_f)}{2}$. Value to look for: lower. Reference: (Bagher et al., 2011).

4. SF. Spatial frequency is computed from the row frequency (RF) and column frequency (CF) of the fused image; a higher value of SF indicates that the fused image preserves more of the spatial detail of the input images. Formula: $\mathrm{SF}=\sqrt{RF^2+CF^2}$. Value to look for: higher. Reference: (Yang et al., 2010).

5. FMI. Fusion mutual information is used to compute the degree of dependency between the input images and the fused image; a larger value of FMI indicates a better quality fused image. Formula: $\mathrm{FMI}=\mathrm{MI}(I_p;I_f)+\mathrm{MI}(I_m;I_f)$. Value to look for: higher. Reference: (Zeng et al., 2006).

6. FQI. Fusion quality index is used to compute the quality index of the fused image; the range of this metric is 0 to 1, and the value 1 indicates that the fused image contains all the information from the input images. Formula: $\mathrm{FQI}=\sum_{w\in W}c(w)\bigl(\lambda(w)\,QI(I_p,I_f\,|\,w)+(1-\lambda(w))\,QI(I_m,I_f\,|\,w)\bigr)$. Value to look for: higher (close to 1). Reference: (Piella and Heijmans, 2003).

7. FSM. The fusion similarity metric is used to calculate the spatial similarity between the input and fused images; the range of this metric is 0 to 1, and the value 1 indicates that the fused image contains all the information from the input images. Formula: $\mathrm{FSM}=\sum_{w\in W}\mathrm{sim}(I_p,I_m,I_f\,|\,w)\,QI(I_p,I_f\,|\,w)+\bigl(1-\mathrm{sim}(I_p,I_m,I_f\,|\,w)\bigr)\,QI(I_m,I_f\,|\,w)$. Value to look for: higher (close to 1). Reference: (Xiao-lin et al., 2010).

8. WFQI. WFQI measures the amount of salient information of the fused image that is transferred from the source images; a higher value of WFQI indicates that the fused image has good quality. Formula: $\mathrm{WFQI}(X,Y,F)=\sum_{w\in W}c(w)\bigl(\lambda(w)\,\mathrm{UIQI}(X,F\,|\,w)+(1-\lambda(w))\,\mathrm{UIQI}(Y,F\,|\,w)\bigr)$. Value to look for: higher. Reference: (Li et al., 2011).

9. EFQI. EFQI measures the edge information of the fused image (Jiang and Wang, 2014); the greater the value of EFQI, the better the quality of the fused image. Formula: $\mathrm{EFQI}(X,Y,F)=\mathrm{WFQI}(X,Y,F)\cdot\mathrm{WFQI}(X',Y',F')^{\alpha}$, where the primes denote the corresponding edge images. Value to look for: higher. Reference: (Jiang and Wang, 2014).

10. D. Often used to compute the degree of distortion of the fused image; a lower value of D indicates that the fused image has good quality. Formula: $D=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl|I_f(i,j)-I(i,j)\bigr|$, where $I$ is the source image. Value to look for: lower. Reference: (Zhu and Bamler, 2013).

(Due to page restrictions, the references provided in Tables 2 and 3 may be consulted for details regarding the variables in the equations.)
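Several of the no-reference metrics in Table 3 are equally simple to compute. The sketch below evaluates standard deviation, entropy and spatial frequency with NumPy; the function names and the 256-level histogram assumption for entropy are ours.

```python
import numpy as np

def std_dev(img):
    """Standard deviation; a higher value implies higher contrast."""
    return float(np.std(img.astype(np.float64)))

def entropy(img, levels=256):
    """Shannon entropy of the grey-level histogram, in bits.

    A higher value implies richer information content. Assumes pixel
    values lie in [0, levels).
    """
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img):
    """sqrt(RF^2 + CF^2): RMS of horizontal (row-frequency) and
    vertical (column-frequency) first differences; a higher value
    implies more spatial detail."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

All three return zero for a constant image, which matches their interpretation as contrast, information and detail measures respectively.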

5. Application of metrics to validate the performance of fused image

Zhang (2008) used four testing images: 1) Ikonos-orig, 2) Ikonos-Shift, 3) Ikonos-Str, 4) Ikonos-Str Shift. Images 2, 3 and 4 were obtained by altering the original Ikonos image (Ikonos-orig) through mean shifting (Ikonos-Shift), histogram stretching (Ikonos-Str), and histogram stretching plus mean shifting (Ikonos-Str Shift). To assess the quality of the four testing images, the author used visual comparison, classification and quantitative indicators. Visual comparison: the author displayed all four images under zero stretching, linear stretching, root stretching, adaptive stretching and equalization stretching, viewed the stretched images under the same visual conditions, and compared them with each other. The images appeared similar under all stretching conditions, so the four images were judged to have the best quality under visual interpretation. In addition, to show that all four images have good tone, the author compared the individual bands of the four testing images by overlaying the same band of different images into one image under the same stretching condition, displayed using different colors. If there were a difference between the corresponding bands of the images, the overlaid band image would appear colored where the differences occur. No color appeared in any of the images, which showed that all four testing images were of good quality.

Classification: he used the ISODATA tool to classify the Ikonos-orig, Ikonos-Shift, Ikonos-Str and Ikonos-Str Shift images into 16 clusters (each image into 4 clusters); the clusters of the Ikonos-Shift, Ikonos-Str and Ikonos-Str Shift images were compared by overlaying them on the Ikonos-orig image. All the clustering results appeared the same, meaning no colour variations were found in any of the images, and hence all four testing images were judged to be of good quality. Quantitative evaluation: he applied seven quantitative indices (also used in the IEEE GRSS 2006 Data Fusion Contest), namely mean bias (MB), variance difference (VD) (Wald et al., 1997), standard deviation (SD), correlation coefficient (CC), spectral angle mapper (SAM), relative dimensionless global error (ERGAS) and quality index (Q4), to the three testing images (Ikonos-Shift, Ikonos-Str and Ikonos-Str Shift), using Ikonos-orig as the reference image. Only the CC value indicated that all four images are similar; four indices (MB, SAM, ERGAS and Q4) indicated that the three testing images are not similar to the reference image; and two indices (VD and SD) indicated that the Ikonos-Shift image is similar to the reference image whereas the Ikonos-Str and Ikonos-Str Shift images are not. The results of the quantitative indices were thus not consistent with the results of the visual comparison and classification.

Wang and Chang (2011) proposed a Laplacian pyramid (LP) method to fuse two different sets of images (clock and metallurgical); the same sets of images were also fused using the average, maximum and wavelet transform methods for comparison with the proposed method. To evaluate the quality of the fused images they used both qualitative and quantitative evaluation. Qualitative evaluation: they visually compared all the fused clock and metallurgical images obtained from the proposed, average, maximum and wavelet transform methods, and concluded that the proposed method outperformed the others. Quantitative evaluation: they used three indices, standard deviation (SD), entropy (He) and average gradient (AG), to evaluate the quality of the fused images. The values of all three indices indicate that the proposed method gives better results than the other methods.
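The average gradient (AG) index used in these studies is not listed in Tables 2 and 3. Under its common definition, the mean magnitude of local intensity gradients, where a higher value suggests sharper detail, it can be sketched as follows; the function name and the use of simple forward differences are our assumptions.

```python
import numpy as np

def average_gradient(img):
    """Average gradient (AG) of a 2-D image.

    Computed as the mean of sqrt((gx^2 + gy^2) / 2) over the pixels
    where both a horizontal (gx) and a vertical (gy) forward
    difference exist. Higher values suggest more spatial detail.
    """
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```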

Zhu and Bamler (2013) proposed the SparseFI method and compared it with popular methods such as IHS, adaptive IHS, PCA and Brovey. For the experiments they used UltraCam data and WorldView-2 data. Experimental results using UltraCam: to fuse the UltraCam image they simulated the original PAN image with a model error of 2.7% and downsampled the original MS image by a factor of 10. They performed both quantitative and qualitative evaluation to check the quality of the fused images obtained from the proposed and popular methods. Qualitative evaluation: they used the original MS image to visually compare the results produced by the fusion methods, and it is evident that the SparseFI method outperforms the popular methods. Quantitative evaluation: RMSE, CC, SAM, UIQI, ERGAS, AG and degree of distortion (D) were used to assess the quality of the fused images. Most of the metric values indicate that the SparseFI method provides the best results compared to the other methods. In addition, the original UltraCam PAN image was simulated with a model error of 25% and fused with the downsampled MS image. They used the same quantitative indices (mentioned above) to check the quality of the fused images obtained by the proposed and other methods; the index values reveal that the proposed method outperforms the others. Experimental results using WorldView-2 data: they used both the proposed and the popular methods to fuse a PAN image with a spatial resolution of 0.5 m and an MS image with a spatial resolution of 2 m to obtain an MS image with 0.5 m resolution. The quality of the fused images was assessed using quantitative and qualitative evaluation. Qualitative evaluation: in order to have a reference image for visual comparison, they downsampled the input PAN image from 0.5 m to 2 m resolution, the MS image from 2 m to 8 m resolution, and the fused image from 0.5 m to 2 m resolution. All the downsampled images were displayed under the same visual conditions and compared with the original MS image; the results reveal that the proposed method provides better quality images than the other methods. Quantitative evaluation: they applied all the above-mentioned quality indices to assess the quality of the fused images. The values of all indices indicate that the proposed method outperforms the other methods.

Zhang and Mishra (2013) used 9 different pan-sharpening algorithms, namely high pass filter, IHS, PCA, Brovey, subtractive fusion, wavelet, ESRI pan-sharpening, Gram-Schmidt and UNB PanSharp (Fuze Go™), for fusing the PAN and MS images of IKONOS, QuickBird, GeoEye-1 and WorldView-2. More than 36 pan-sharpened images were generated using these 9 algorithms. Due to space limitations, they compared only the results of the images obtained using the UNB PanSharp (Fuze Go™) and Gram-Schmidt algorithms. To assess the quality of the fused images they used qualitative evaluation: they displayed all the input and fused images under the same visual conditions, with histogram stretching applied to all images. Based on the spatial and spectral detail of the fused images, they concluded that the Fuze Go™ algorithm produces better pan-sharpened images than the other algorithms. They did not perform quantitative evaluation to assess the quality of the fused images, because they believed that the existing quality metrics cannot provide convincing evaluation results.

Jiang and Wang (2014) proposed a morphological component analysis (MCA) method and compared it with the LP, morphological pyramid (MP), gradient pyramid (GP), curvelet transform (CVT), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NSCT) methods for the fusion of two pairs of multi-focus images, two pairs of infrared and visible images, and two pairs of medical images. Quantitative and qualitative evaluation were used to assess the quality of the fused images. Qualitative evaluation: the fused images of multi-focus pairs 1 and 2, infrared and visible pairs 1 and 2, and medical pairs 1 and 2 were displayed under the same visual conditions and compared with the source images. For the fused image of multi-focus pair 1 they observed that the MP method generates additional information in the fused image that is not present in the source images, and that the CVT and NSCT fused images are brighter and exceed the intensity of the source images; they concluded that the fused images of the LP, GP, DWT and MCA methods are similar to the source images. For multi-focus pair 2 they observed that all the fused images are nearly equal except that the MP fused image became brighter, and they felt it was difficult for the human eye to find the differences. For medical pair 1 they observed that the CVT and NSCT fused images were the worst, and concluded that the fused images of the LP, MP, GP, DWT and MCA methods have similar visual effects.

For medical pair 2, they noted that the fused images of the LP, MP and DWT methods give better results than those of the GP, CVT, NSCT and MCA methods. For infrared and visible pair 1, they observed that the fused images of CVT and NSCT contain more detailed information than the other methods, and that all the other methods are similar in terms of transferring information from the source images. For infrared and visible pair 2, the fused images of the CVT and NSCT methods are darker than the other fused images and their intensity values are lower than those of the source images, while the LP, MP, GP, DWT and MCA fused images are similar. As a whole, the qualitative evaluation indicates that the MCA method produced the better fused images. Quantitative evaluation: for quantitative evaluation they used four indices, SSIM, UIQI, WFQI and EFQI. For the fused image of multi-focus pair 1, the values of UIQI, WFQI and EFQI indicate that the MCA method provides better results, whereas the SSIM value indicates that the GP method is best; it was also observed that the values for the LP, GP, DWT and MCA methods are close to each other. For multi-focus pair 2, the values of UIQI, WFQI and EFQI indicate that the MCA method provides better results, whereas the SSIM value indicates that the NSCT method is best. For medical pairs 1 and 2, the values of UIQI, WFQI and EFQI indicate that the MCA method provides better results, whereas the SSIM value indicates that the LP method is best. For infrared and visible pair 1, the UIQI value shows the LP method is best, the WFQI value shows the MCA method is best, the EFQI value indicates the NSCT method is best, and the SSIM value indicates the GP method is best; overall, the MCA method had one best value, two second-best values and one fourth-best value. For infrared and visible pair 2, the values of UIQI, WFQI and EFQI indicate that the MCA method provides the best result, and the SSIM value shows that the GP method is best. Finally, they concluded that the MCA method is superior to the other methods.

6. Conclusion

With the increasing use of fused images, their quality needs to be evaluated before they are used in remote sensing applications. This paper extensively explores the literature on evaluation techniques (qualitative and quantitative) for validating the quality of fused images. For qualitative (or subjective) evaluation: if a visual comparison is not conducted under the same visual conditions, it will not provide reliable results, and it also depends on the experience of the viewer; furthermore, it is time consuming and expensive. For quantitative (or objective) evaluation: the literature offers several reference-based and non-reference-based quality metrics for evaluating the performance of the fused image. Reference-based quality metrics could be used if a genuine ideal reference image were available, but in reality such an image is very difficult to construct; and different evaluation results are obtained when different non-reference-based quality metrics are adopted. Currently there are no fully convincing quantitative metrics in the literature for the evaluation of fused images; measuring image fusion quality, and the quality difference between two images, therefore remains an open problem. Future researchers can direct their studies toward developing an effective quantitative method that provides consistent and convincing results.

7. References

Alimuddin, I., Sumantyo, J.T.S., Kuze, H., 2012. Assessment of pan-sharpening methods applied to image fusion of remotely sensed multi-band data. Int. J. Appl. Earth Obs. Geoinf. 18, 165-175. doi:10.1016/j.jag.2012.01.013

Alparone, L., Aiazzi, B., Baronti, S., Garzelli, A., Nencini, F., Selva, M., 2008. Multispectral and Panchromatic Data Fusion Assessment Without Reference. Photogramm. Eng. Remote Sens. 74, 193-200.

Alparone, L., Wald, L., Chanussot, J., Thomas, C., Gamba, P., Bruce, L.M., 2007. Comparison of Pansharpening Algorithms: Outcome of the 2006 GRS-S Data-Fusion Contest. IEEE Trans. Geosci. Remote Sens. 45, 3012-3021.

Amro, I., Mateos, J., Vega, M., Molina, R., Katsaggelos, A.K., 2011. A survey of classical methods and new trends in pansharpening of multispectral images. EURASIP J. Adv. Signal Process. 2011, 79. doi:10.1186/1687-6180-2011-79

Andreja, K., 2006. High-resolution Image Fusion: Methods to Preserve Spectral and Spatial Resolution. Photogramm. Eng. Remote Sens. 72, 565-572.

Bagher, M., Haghighat, A., Aghagolzadeh, A., Seyedarabi, H., 2011. A non-reference image fusion metric based on mutual information. Comput. Electr. Eng. 37, 744-756. doi:10.1016/j.compeleceng.2011.07.012

Du, Q., Younan, N.H., King, R., Shah, V.P., 2007. On the performance evaluation of pan-sharpening techniques. IEEE Geosci. Remote Sens. Lett. 4, 518-522.

Fonseca, L., Namikawa, L., Castejon, E., 2011. Image Fusion for Remote Sensing Applications. InTech.

Jawak, S.D., Luis, A.J., 2013. A Comprehensive Evaluation of PAN-Sharpening Algorithms Coupled with Resampling Methods for Image Synthesis of Very High Resolution Remotely Sensed Satellite Data. Adv. Remote Sens. 2013, 332-344.

Jiang, Y., Wang, M., 2014. Image fusion with morphological component analysis. Inf. Fusion 18, 107-118. doi:10.1016/j.inffus.2013.06.001

Kang, T., Zhang, X., Wang, H., 2009. Assessment of the fused image of multispectral and panchromatic images of SPOT5 in the investigation of geological hazards. Sci. China Ser. E Technol. Sci. 51, 144-153. doi:10.1007/s11431-008-6015-0

Kim, Y., 2011. Generalized IHS-Based Satellite Imagery Fusion Using Spectral Response Functions. ETRI J. 33, 497-505. doi:10.4218/etrij.11.1610.0042

Li, S., Yang, B., Hu, J., 2011. Performance comparison of different multi-resolution transforms for image fusion. Inf. Fusion 12, 74-84. doi:10.1016/j.inffus.2010.03.002

Lillesand, T.M., Kiefer, R.W., Chipman, J.W., 2004. Remote Sensing and Image Interpretation. Wiley, New York.

Naidu, V.P.S., 2010. Discrete Cosine Transform-based Image Fusion. Def. Sci. J. 60, 48-54.

Nikolakopoulos, K.G., 2008. Comparison of Nine Fusion Techniques for Very High Resolution Data. Photogramm. Eng. Remote Sens. 74, 647-659.

Pohl, C., 1999. Tools and Methods for Fusion of Images of Different Spatial Resolution. Photogramm. Eng. Remote Sens. 32, 3-4.

Ranchin, T., Aiazzi, B., Alparone, L., Baronti, S., Wald, L., 2003. Image Fusion - The ARSIS Concept and Some Successful Implementation Schemes. ISPRS J. Photogramm. Remote Sens. 58, 4-18.

Ranchin, T., Wald, L., 2000. Fusion of High Spatial and Spectral Resolution Images: The ARSIS Concept and Its Implementation. Photogramm. Eng. Remote Sens. 66, 49-61.

Wald, L., Ranchin, T., Mangolini, M., 1997a. Fusion of satellite images of different spatial resolutions. Photogramm. Eng. Remote Sens. 63, 691-699.

Wald, L., Ranchin, T., Mangolini, M., 1997b. Fusion of satellite images of different spatial resolutions: assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 63, 691-699.

Wang, W., Chang, F., 2011. A Multi-focus Image Fusion Method Based on Laplacian Pyramid. J. Comput. 6, 2559-2566. doi:10.4304/jcp.6.12.2559-2566

Wang, Z., Bovik, A.C., 2002. A universal image quality index. IEEE Signal Process. Lett. 9, 81-84.

Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P., 2004. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 13, 600-612.

Yang, B., Jing, Z., Zhao, H., 2010. Review of pixel-level image fusion. J. Shanghai Jiaotong Univ. 15, 6-12. doi:10.1007/s12204-010-7186-y

Yusuf, Y., Sri Sumantyo, J.T., Kuze, H., 2013. Spectral information analysis of image fusion data for remote sensing applications. Geocarto Int. 28, 291-310. doi:10.1080/10106049.2012.692396

Zeng, J., Sayedelahl, A., Gilmore, T., et al., 2006. Review of Image Fusion Algorithms for Unconstrained Outdoor Scenes, in: ICSP 2006.

Zhang, X., Liu, Z., Kou, Y., Dai, J., et al., 2010. Quality Assessment of Image Fusion Based on Image Content and Structural Similarity. IEEE Proc. 1, 1-4.

Zhang, Y., 2004. Understanding Image Fusion. Photogramm. Eng. Remote Sens. 657-661.

Zhang, Y., 2008. Methods for Image Fusion Quality Assessment - A Review, Comparison and Analysis. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. XXXVII, 1101-1109.

Zhang, Y., Mishra, R.K., 2013. From UNB PanSharp to Fuze Go - the success behind the pan-sharpening algorithm. Int. J. Image Data Fusion 5, 39-53. doi:10.1080/19479832.2013.848475

Zhu, X.X., Bamler, R., 2013. A Sparse Image Fusion Algorithm with Application to Pan-Sharpening. IEEE Trans. Geosci. Remote Sens. 51, 2827-2836.

Zoran, L.F., 2009. Quality Evaluation of Multiresolution Remote Sensing Image Fusion. U.P.B. Sci. Bull., Ser. C 71, 38-52.