Procedia Computer Science 54 (2015) 713 - 720
Eleventh International Multi-Conference on Information Processing-2015 (IMCIP-2015)
Fusion of Multi-Sensor Satellite Images using Non-Subsampled
Contourlet Transform
P. Mangalraj * and Anupam Agrawal
Indian Institute of Information Technology, India
Abstract
The presented research work proposes fusion of multi-sensor satellite images using the non-subsampled contourlet transform (NSCT). In the proposed work, the trade-off between spectral distortion and enhancement of spatial information is maintained while fusing two multi-sensor images. The shortcomings of wavelet-based fusion techniques, such as limited directionality, lack of phase information and shift variance, are addressed with the help of the non-subsampled contourlet transform. The NSCT helps to retain the intrinsic structural information while decomposing and reconstructing the image components. Decision-based rules are applied for component substitution during fusion. Experiments are carried out against the current state of the art, and it is observed that the proposed system provides promising results both visually and quantitatively. The efficiency of the proposed system's fused product is further analysed qualitatively using the Isodata classification algorithm.
© 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer-review under responsibility of organizing committee of the Eleventh International Multi-Conference on Information Processing-2015 (IMCIP-2015)
Keywords: Fusion; Intrinsic structural information; Multi-sensor images; Non-subsampled contourlet transform; Spectral distortion.
1. Introduction
Geographical scenes obtained through a particular sensor may not contain all the desired information about the scene, and pattern recognition fails when such missing information is needed [1]. Images from different sensors have different resolutions and different viewing angles. In most applications, an image is required to be rich in both spectral and spatial quality. It is observed that there is always a trade-off between spatial, spectral and radiometric resolution; hence the user must continuously make efforts to balance these three resolutions. To obtain more information, two or more images are merged together to form super-resolution images. The process of integrating supplementary and complementary information from different images is referred to as fusion [1]. The fused image should retain the major supplementary information along with the complementary information. Fusion of remote sensing imagery is an active area in the field of remote sensing, and it is an application-oriented task in the context of resolution and quality. The possible combinations for fusion are: only passive images, only active images, and a combination of passive and active imagery. Passive images are those obtained by sensing the natural illumination of an object or area from an external source. In passive imagery, climate and clouds are
* Corresponding author. Tel.: +91-9794172048. E-mail address: mangal86@gmail.com
1877-0509. doi: 10.1016/j.procs.2015.06.084
considered as hindrances to pattern recognition [2]. An active imagery system uses its own source to capture the content of an object or area, and is capable of acquiring images at any time and in any weather conditions. Satellites that use such sensors to gather information about the earth are called "All Time All Weather" satellites. Such active images can be captured using radar signals and are called Synthetic Aperture Radar (SAR) images. Speckle noise is present in SAR imagery, an inherent noise produced by the phenomenon of active imaging: the reflected signals captured by the sensor arrive with path differences, and these path differences give rise to a granular noise on the captured object called speckle [3]. This noise is multiplicative in nature; it deteriorates the quality of the acquired image and makes interpretation difficult. It is therefore always recommended to perform despeckling before using active images for fusion. The paper is organised as follows: related work is provided in Section 2, a brief introduction to the non-subsampled contourlet transform in Section 3, the proposed system in Section 4, and experiments, results and discussion in Section 5.
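The multiplicative nature of speckle can be illustrated with a small simulation; the gamma (L-look) noise model and the box-filter smoothing below are standard illustrative assumptions, not the despeckling method used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_speckle(image, looks=4):
    """Simulate multiplicative speckle: I = R * n, where n is
    gamma-distributed with unit mean (L-look intensity model)."""
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    return image * noise

def mean_despeckle(image, k=3):
    """Naive box-filter despeckling (illustrative only)."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

clean = np.full((32, 32), 100.0)   # constant reflectivity scene
noisy = add_speckle(clean)
smoothed = mean_despeckle(noisy)
# Filtering lowers the speckle variance while the mean stays near
# the true reflectivity, since the noise has unit mean.
print(noisy.var() > smoothed.var())   # → True
```

Because the noise multiplies the signal, its standard deviation scales with local intensity, which is why adaptive filters are usually preferred over this plain box filter in practice.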
2. Related Work
Many investigators have exercised the fusion of images using different methods. The conventional methods followed pixel-based fusion techniques, which lack spectral and radiometric information. Examples of such conventional pixel-based methods are PCA (Principal Component Analysis) based fusion [4, 5], the colour normalised transform [6], and HSI/HSV (Hue, Saturation and Intensity/Value) transform based fusion [7]. A fusion algorithm should maintain a trade-off by minimizing spectral distortion while enhancing spatial information. Most algorithms fail to maintain this trade-off, which primarily has to be witnessed [8]. Some investigators adaptively regularize parameters to maintain the trade-off between spatial and spectral information [9], but these methods are computationally expensive.
Object-based fusion techniques came into existence to overcome the drawbacks of pixel-based fusion. Investigators have used multi-resolution techniques for fusion with the help of wavelet transforms [10].
Multi-resolution based techniques are usually carried out as follows:
1. Extraction of high-frequency components from the high resolution image (e.g. panchromatic images)
2. Substitution of the high-frequency components into the low resolution image (e.g. multispectral images)
The general object-based fusion technique is based on spatial and frequency decomposition of images using Multi-Resolution Analysis (MRA) and component substitution using rule-based systems [11].
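The two-step pipeline above can be sketched as follows; the box low-pass filter and the additive detail injection are illustrative assumptions, not the exact operators of any cited method:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple low-pass filter standing in for the MRA analysis step."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def inject_details(pan, ms):
    """Component substitution: add the PAN high-frequency band to the MS band."""
    detail = pan - box_blur(pan)   # step 1: extract high frequencies from the HR image
    return ms + detail             # step 2: substitute them into the LR image

rng = np.random.default_rng(1)
pan = rng.random((64, 64)) * 255   # stand-in high-resolution panchromatic band
ms = box_blur(pan) + 10            # stand-in low-resolution multispectral band
fused = inject_details(pan, ms)
# The fused band regains the high-frequency variance that the
# smoothed multispectral band had lost.
print(fused.var() > ms.var())      # → True
```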
These techniques provide comparatively good results when compared to conventional pixel-based techniques. Wavelet-based techniques hold the spectral and spatial information to a good extent where pixel-based techniques failed [12]. The discrete wavelet transform (DWT) is one of the most widely used tools for image decomposition for fusion. Wavelet transform based fusion schemes such as maximum selection (MS) use, from each band, the coefficient with the highest magnitude [13].
The wavelet transform has severe problems while reconstructing the decomposed components, due to blurring operations [14]; such shortcomings lead to the presence of artifacts in the fused products [15]. The DWT provides directionality only in limited directions: horizontal, vertical and diagonal [16]. Wavelet-based fusion techniques therefore fail to retain the intrinsic geometrical information, owing to this lack of directionality while decomposing the images. The DWT is also sensitive to shifts and lacks phase information. These drawbacks are witnessed while using wavelets for fusion [13].
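The shift sensitivity of the decimated DWT can be demonstrated with a one-level Haar transform (an illustrative example; the cited fusion scheme may use other wavelet bases):

```python
import numpy as np

def haar_detail(x):
    """One-level decimated Haar wavelet detail band."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2)

edge = np.zeros(16)
edge[8:] = 1.0            # a step edge aligned with a coefficient pair boundary
edge_shifted = np.zeros(16)
edge_shifted[9:] = 1.0    # the same edge, one sample later

e0 = np.sum(haar_detail(edge) ** 2)
e1 = np.sum(haar_detail(edge_shifted) ** 2)
# A one-sample shift of the input moves the detail-band energy from
# zero to 0.5: the decimated DWT does not commute with shifts, which
# is the artifact source noted in the text.
print(e0, e1)   # → 0.0 0.5
```

Non-subsampled transforms avoid this by dropping the decimation step, at the cost of redundancy.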
In the proposed work, Multi-Geometric Analysis (MGA) is achieved with the help of the non-subsampled contourlet transform (NSCT), which overcomes the drawbacks of wavelets. MGA helps to retain the intrinsic geometrical information in an image while decomposing it for processing; it provides full directionality, shift invariance, and phase information. A representation that obeys MGA is the contourlet, but constructing filter banks for contourlets is tedious and computationally complex [17], and a sampling problem has been identified while using contourlets; the non-subsampled contourlet transform was introduced to avoid these drawbacks [18]. To utilize the full advantages of MGA in fusing multi-sensor images, the NSCT is used in the proposed system. The next section provides a brief discussion of the NSCT and its properties.
(a) NSDFB Structure (b) Idealized Frequency Partitioning
Fig. 1. NSCT overall structure.
3. Non-Subsampled Contourlet Transform
The NSCT divides a two-dimensional signal into multiple components that are shift invariant. The 2D signal is decomposed into different levels of decomposition by the NSPS (Non-Subsampled Pyramid Structure). The NSDFB (Non-Subsampled Directional Filter Bank) is then applied to the high-frequency component to obtain directional components, as shown in Fig. 1a. The filter bank, which splits the 2D frequency plane, is pictured in Fig. 1b. The NSPS provides the multi-scale property and the NSDFB provides the directionality.
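A minimal sketch of the non-subsampled pyramid stage, using a dilated ("a trous") box kernel as a stand-in for the actual NSP filters, shows why every band keeps the input size and why additive reconstruction is exact:

```python
import numpy as np

def atrous_lowpass(img, level=0):
    """Low-pass filtering with a dilated 3x3 box kernel; no downsampling,
    so every band keeps the input size (the source of shift invariance)."""
    step = 2 ** level                 # holes in the kernel grow with the level
    pad = step
    p = np.pad(img, pad, mode="wrap")
    out = np.zeros_like(img, dtype=float)
    for di in (-step, 0, step):
        for dj in (-step, 0, step):
            out += p[pad + di:pad + di + img.shape[0],
                     pad + dj:pad + dj + img.shape[1]]
    return out / 9.0

def nsp_decompose(img, levels=3):
    """Non-subsampled pyramid: one approximation plus per-level detail bands."""
    approx, details = np.asarray(img, dtype=float), []
    for j in range(levels):
        low = atrous_lowpass(approx, level=j)
        details.append(approx - low)   # band-pass component at scale j
        approx = low
    return approx, details

img = np.random.default_rng(2).random((32, 32))
approx, details = nsp_decompose(img)
recon = approx + sum(details)          # the differences telescope back to the input
print(np.allclose(recon, img))         # → True
```

In the full NSCT, each detail band produced this way is further split by the NSDFB into directional wedges; that stage is omitted here.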
3.1 NSCT Representation
The NSCT representation has the following properties:
1. Multi-resolution
2. Multi-direction
3. Shift invariance
4. Regularity
5. Redundancy of (J + 1), where J is the number of decomposition levels
3.2 Multi-resolution analysis
The scaling function generates the multi-resolution subspaces
Vj = span{ φj,k : k ∈ Z² },  j ∈ Z (1)
where Z denotes the set of integers. This provides the sequence of nested multi-resolution subspaces
... V−2 ⊂ V−1 ⊂ V0 ⊂ V1 ⊂ V2 ... (2)
where Vj is associated with a uniform grid of 2^j × 2^j. The difference images live in the subspace Wj that is the orthogonal complement of Vj in Vj−1:
Vj−1 = Vj ⊕ Wj (3)
3.3 Multi-directional analysis
Equation (4) shows the resultant when the directional filter bank is applied to the detail subspace Wj:
Wj = ⊕k Wj,k,  k = 0, 1, 2, ..., 2^l − 1 (4)
where k indexes the total number of wedges; the wedges represent the directional elements. The multi-scaling of the NSCT is achieved as follows:
L²(R²) = ⊕j∈Z Wj (5)
where each Wj is shift invariant.
3.4 Shift invariant
To make the representation shift invariant, the lifting theorem is applied to the filters, factoring the filter pair into elementary lifting (ladder) steps:
E(z) = [1 P(z); 0 1] · [1 0; Q(z) 1] (6)
where the lifting filters P(z) and Q(z) are 2D functions of the same complexity.
3.5 Regularity
The scaling function is obtained by iterating the low-pass approximation, φ̂(ω) = ∏j=1..∞ H0(2^−j ω), so the regularity is controlled by the low-pass filter:
H0(ω) = ((1 + e^{jω1})/2)^N1 · ((1 + e^{jω2})/2)^N2 (7)
4. Proposed Methodology
The proposed system is implemented in three phases, as follows.
Phase I: Auto-registered images A and B from different sensors are individually subjected to the NSCT, and band-pass coefficients are obtained as shown in Fig. 1a. Further directional components are obtained by one more level of decomposition, as shown in Fig. 1b. After 3 levels of decomposition, the final components are one approximation component and 8 detailed directional components.
Phase II: Apply fusion rules to the decomposed coefficients.
Rules for approximation coefficients:
Vi = maximum(VA (i, j)) + maximum(VB (i, j)) (8)
where Vi is the new fused approximation coefficient, VA (i, j) and VB (i, j) are the approximation coefficients of source images A and B, and (i, j) are the positional co-ordinates.
Rules for directional coefficients:
Vj = avgmask(VA (i, j)) + avgmask(VB (i, j)) (9)
where Vj are the fused directional coefficients, and VA (i, j) and VB (i, j) are the directional components of images A and B.
Phase III: The final output is obtained by reconstructing Vi and Vj using the inverse NSCT.
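The two fusion rules can be sketched as below; reading Eq. (8) as per-pixel maximum selection and "avgmask" in Eq. (9) as plain averaging are assumptions made for illustration, since the exact operators are not fully specified:

```python
import numpy as np

def fuse_approx(a, b):
    """Approximation rule, Eq. (8), read here as per-pixel maximum
    selection (an assumption about the 'maximum' operator)."""
    return np.maximum(a, b)

def fuse_directional(a, b):
    """Directional rule, Eq. (9), with 'avgmask' read as plain
    averaging of the two directional bands (an assumption)."""
    return 0.5 * (a + b)

rng = np.random.default_rng(3)
va, vb = rng.random((8, 8)), rng.random((8, 8))   # stand-in NSCT coefficients
fa = fuse_approx(va, vb)
fd = fuse_directional(va, vb)
# The fused approximation dominates both sources pointwise;
# the fused directional band is their mean.
print(np.all(fa >= va) and np.all(fa >= vb))   # → True
```

In the full pipeline these rules are applied band by band to the decomposed coefficients before the inverse NSCT of Phase III.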
Fig. 2. (a) Landsat image of Sakurajima eruption recorded by OLI sensor; (b) Landsat image recorded by TIRS sensor [19]; (c) Low resolution multi-spectral image; (d) High resolution PAN image [20]; (e) Radarsat-2 intensity band image; (f) Radarsat-2 amplitude band image [21].
Fig. 3. Results of the proposed fusion method on respective datasets.
5. Experiments and Results
To test the proposed system, six different types of imagery are used: four from passive sensors and two from active sensors, as shown in Fig. 2. A comparative analysis of the proposed method with the wavelet-based fusion method [13] is carried out to study the merits of the proposed system. To evaluate the proposed system, the results of visual and quantitative analysis are considered, and the efficiency of the fusion product is further analysed through classification results. Visual results of the proposed system are depicted in Fig. 3.
6. Quantitative Analysis
Performance evaluation is carried out by computing the MSE and the PSNR.
6.1 Mean square error
The mean square error (MSE) of the fused product should be minimal for good results. The formula for the mean square error is given as
MSE = (1/(m·n)) Σi Σj [S(i, j) − F(i, j)]² (10)
where S(i, j) and F(i, j) are the source image and the final fused image respectively, and m × n are the image dimensions.
6.2 Peak signal to noise ratio
The peak signal-to-noise ratio (PSNR) of the fused product should be maximal for good results. The formula for the PSNR value is given as
PSNR = 10 log10 (L² / MSE) (11)
where L is the maximum possible pixel value (255 for 8-bit images).
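Both metrics are straightforward to compute; the sketch below assumes 8-bit data with peak value 255:

```python
import numpy as np

def mse(source, fused):
    """Mean square error over an m x n image pair, as in Eq. (10)."""
    s, f = np.asarray(source, dtype=float), np.asarray(fused, dtype=float)
    return np.mean((s - f) ** 2)

def psnr(source, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB, as in Eq. (11), assuming an
    8-bit peak value of 255."""
    return 10.0 * np.log10(peak ** 2 / mse(source, fused))

src = np.full((16, 16), 120.0)
out = src + 1.0                       # a fused image off by one grey level
print(mse(src, out), round(psnr(src, out), 2))   # → 1.0 48.13
```

A lower MSE maps monotonically to a higher PSNR, so the two tables in this section rank the methods consistently.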
Table 1. Mean square error.

         Band 1              Band 2              Band 3
         NSCT     Wavelet    NSCT     Wavelet    NSCT     Wavelet
Data 1   0.9269   1.0269     0.9469   1.1469     0.9130   1.3130
Data 2   1.9635   2.4718     1.9473   2.1542     1.9745   2.2911
Data 3   0.1328   0.3636     0.1328   0.3636     0.1328   0.3636
Table 2. Peak signal-to-noise ratio.

         Band 1              Band 2              Band 3
         NSCT     Wavelet    NSCT     Wavelet    NSCT     Wavelet
Data 1   48.0154  43.0911    47.5357  42.1103    46.9480  41.0111
Data 2   44.9848  40.0207    45.0190  39.7979    44.9616  39.5304
Data 3   56.8979  52.5247    56.8979  52.5247    56.8979  52.5247
The quantitative analysis of the proposed method against the wavelet-based method is made on the basis of MSE and PSNR; Tables 1 and 2 show the corresponding values. From the MSE and PSNR values in Tables 1 and 2 respectively, it is evident that the proposed method gives better fusion results than the wavelet-based method.
6.3 Classification results
To assess the effectiveness of the proposed system, classification is carried out on both a single image and the fused image. The classification results identify whether the complementary information increases the quality of the supplementary information present in the data or deteriorates it. The fused product should have both subjective and objective qualities for pattern recognition. The classification is carried out using the Isodata method on Data 1.
Classification parameters:
• Number of classes = 6
• Maximum iterations = 3
• Deviation = 0.9
• Distance = 4
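An ISODATA-style classification with a comparable parameter set can be sketched via its k-means core; the split/merge steps of full ISODATA are omitted here, and the synthetic one-band pixel data is an assumption for illustration:

```python
import numpy as np

def isodata_like(pixels, k=6, iters=3, seed=0):
    """K-means core of ISODATA classification: alternate nearest-centre
    assignment and centre recomputation for a fixed iteration budget
    (full ISODATA would also split/merge classes by deviation and distance)."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # assign every pixel to its nearest class centre
        d = np.abs(pixels[:, None] - centers[None, :])
        labels = d.argmin(axis=1)
        # recompute each class centre from its current members
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels, centers

# synthetic single-band "image" with six grey-level populations
pixels = np.concatenate([np.random.default_rng(4).normal(m, 2.0, 200)
                         for m in (0, 40, 80, 120, 160, 200)])
labels, centers = isodata_like(pixels, k=6, iters=3)
print(labels.shape)   # one class label per pixel
```

On real data the same assignment step runs on multi-band feature vectors rather than scalar grey levels.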
Classification result on data 1 is shown in Fig. 4a and the classification result of the corresponding fused product is shown in Fig. 4b.
7. Discussion
The bold-faced results indicate the superior method in each evaluation. It is observed from the quantitative results in Tables 1 and 2 that the proposed methodology provides improved results over the wavelet-based method. The visual results in Fig. 3 show that the proposed system improves the results on all datasets; no artifacts are identified in the visual results, and a close look at Fig. 3 reveals a substantial improvement in contrast. The improvement of the classification results on the fused image over the single-sensor image shows the effectiveness of the fusion method. The proposed system takes advantage of the NSCT and produces visually and quantitatively better results. The main issue of the proposed system is its high computational cost, which can be resolved by using a GPU-based system.
8. Conclusion
The proposed method for the fusion of multi-sensor satellite images using the non-subsampled contourlet transform has been successfully implemented and tested on real datasets. The proposed system maintains the trade-off between spectral and spatial resolution to a good extent with the help of Multi-Resolution Analysis (MRA). The drawbacks of wavelets are addressed with the help of the non-subsampled contourlet transform, which retains the intrinsic structural information while decomposing and reconstructing the image components. The experimental results and analysis show that the proposed system is superior to the wavelet-based method and other conventional methods.
9. Acknowledgement
The authors would like to thank the anonymous reviewers for their valuable reviews. We are grateful to Mr. Manjunath Bhandary, Founder of Sahyadri Educational Institutes, Mangalore, for his constant support towards the ongoing research work.
References
[1] K. Rani and R. Sharma, Study of Different Image Fusion Algorithm, International Journal of Emerging Technology and Advanced Engineering, ISSN 2250-2459, ISO 9001: 2008 Certified Journal, vol. 3, Issue 5, (2013).
[2] L. Alparone, S. Baronti, A. Garzelli and F. Nencini, Landsat Etm+ and Sar Image Fusion Based on Generalized Intensity Modulation, IEEE Transactions on Geoscience and Remote Sensing, vol. 42(12), pp. 2832-2839, (2004).
[3] P. Lombardo, C. J. Oliver, T. M. Pellizzeri and M. Meloni, A New Maximum - Likelihood Joint Segmentation Technique for Multitemporal Sar and Multiband Optical Images, IEEE Transactions on Geoscience and Remote Sensing, vol. 41(11), pp. 2500-2518, (2003).
[4] W. Tao, R. Qinan, C. Xi, N. Lei and R. Xiangwei, Highway Bridge Detection Based on pca Fusion in Airborne Multiband High Resolution sar Images, In International Symposium on Image and Data Fusion (ISIDF), 2011, IEEE, pp. 1-3, (2011).
[5] W. Hao-quan and X. Hao, Multi-Mode Medical Image Fusion Algorithm Based on Principal Component Analysis, In International Symposium on Computer Network and Multimedia Technology, 2009, CNMT 2009, IEEE, pp. 1-4, (2009).
[6] I. Misra, R. K. Gambhir, S. M. Moorthi, D. Dhar and R. Ramakrishnan, An Efficient Algorithm for Automatic Fusion of Risat-1 Sar Data and Resourcesat-2 Optical Images, In: 4th International Conference on Intelligent Human Computer Interaction (IHCI), 2012, IEEE, pp. 1-6, (2012).
[7] F. A. Al-Wassai, N. Kalyankar and A. A. Al-Zuky, The ihs Transformations Based Image Fusion, arXiv Preprint arXiv:1107.4396, (2011).
[8] J. Choi, J. Yeom, A. Chang, Y. Byun and Y. Kim, Hybrid Pansharpening Algorithm for High Spatial Resolution Satellite Imagery to Improve Spatial Quality, Geoscience and Remote Sensing Letters, IEEE, vol. 10(3), pp. 490-494, (2013)
[9] L. Zhang, H. Shen, W. Gong and H. Zhang, Adjustable Model-Based Fusion Method for Multispectral and Panchromatic Images, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42(6), pp. 1693-1704, (2012).
[10] N. Han, J. Hu and W. Zhang, Multi-Spectral and Sar Images Fusion Via Mallat and a Trous Wavelet Transform, In 18th International Conference on Geoinformatics, 2010, IEEE, pp. 1-4, (2010).
[11] B. Aiazzi, S. Baronti, F. Lotti and M. Selva, A Comparison Between Global and Context - Adaptive Pansharpening of Multispectral Images, Geoscience and Remote Sensing Letters, IEEE, vol. 6(2), pp. 302-306, (2009).
[12] S. Nikolov, P. Hill, D. Bull and N. Canagarajah, Wavelets for Image Fusion, In Wavelets in Signal and Image Analysis, Springer, pp. 213-241, (2001).
[13] P. R. Mangalraj and A. Agrawal, An Efficient Method Based on Wavelet for Fusion of Multi-Sensor Satellite Images, In 1st International Conference on Electrical, Computer and Communication Technologies 2015, IEEE, pp. 1058-1062, (2015).
[14] S. Yang, F. Sun, M. Wang, Z. Liu, L. Jiao, Novel Super Resolution Restoration of Remote Sensing Images Based on Compressive Sensing and Example Patches - Aided Dictionary Learning, In International Workshop on Multi-Platform/Multi-Sensor Remote Sensing and Mapping (M2RSM), 2011, IEEE, pp. 1-6, (2011).
[15] S. Vekkot and P. Shukla, A Novel Architecture for Wavelet Based Image Fusion, World Academy of Science, Engineering and Technology, vol. 57, pp. 372-377, (2009).
[16] V. Prabhu and S. Mukhopadhyay, A Multi-Resolution Image Fusion Scheme for 2d Images Based on Wavelet Transform, In 1st International Conference on Recent Advances in Information Technology (RAIT), 2012, IEEE, pp. 80-85, (2012).
[17] M. N. Do and M. Vetterli, The Contourlet Transform: An Efficient Directional Multiresolution Image Representation, IEEE Transactions on Image Processing, vol. 14(12), pp. 2091-2106, (2005).
[18] A. L. Da Cunha, J. Zhou and M. N. Do, The Nonsubsampled Contourlet Transform: Theory, Design, and Applications, IEEE Transactions on Image Processing, vol. 15(10), pp. 3089-3101, (2006).
[19] Data1, Landsat Data of Oli and Tirs Sensors, 2013. URL: http://www.usgs.gov; Accessed: (20-10-2013).
[20] Data2, Multispectral and Pan Imagery, 2013. URL: http://www.satimagingcorp.com/gallery/quickbird/; Accessed: (25-12-2013).
[21] Data3, Radarsat 2, 2013. URL: http://www.asc-sa.gc.ca/eng/satellites/radarsat2/; Accessed: (25-12-2013).
[22] J. Nunez, X. Otazu, O. Fors, A. Prades, V. Pala and R. Arbiol, Multiresolution-Based Image Fusion with Additive Wavelet Decomposition, IEEE Transactions on Geoscience and Remote Sensing, vol. 37(3), pp. 1204-1211, (1999).
[23] Fusion, Image fusion in Satellite Images, 2012. URL: http://www.ece.lehigh.edu/SPCRL/IF/image_fusion; Accessed: (30-06-2014).