
Information Processing in Agriculture

China Agricultural University

Accepted Manuscript

Effects of Atmospheric Correction and Pansharpening on LULC Classification Accuracy using WorldView-2 Imagery

Chinsu Lin, Chao-Cheng Wu, Khongor Tsogt, Yen-Chieh Ouyang, Chein-I Chang

PII: S2214-3173(15)00013-X
DOI: http://dx.doi.org/10.1016/j.inpa.2015.01.003
Reference: INPA 20

To appear in: Information Processing in Agriculture

Received Date: 3 August 2014
Revised Date: 7 January 2015
Accepted Date: 27 January 2015

Please cite this article as: C. Lin, C-C. Wu, K. Tsogt, Y-C. Ouyang, C-I. Chang, Effects of Atmospheric Correction and Pansharpening on LULC Classification Accuracy using WorldView-2 Imagery, Information Processing in Agriculture (2015), doi: http://dx.doi.org/10.1016/j.inpa.2015.01.003

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Effects of Atmospheric Correction and Pansharpening on LULC Classification Accuracy

using WorldView-2 Imagery

Chinsu Lin (Corresponding author). Professor, Department of Forestry and Natural Resources, National Chiayi University, 300 University Rd, Chiayi 60004, Taiwan. E-mail: chinsu@mail.ncyu.edu.tw

Chao-Cheng Wu

Assistant Professor, Department of Electrical Engineering, National Taipei University of Technology, 1 Zhongxiao E. Rd., Sec. 3, Taipei 10608, Taiwan. E-mail: ccwu@mail.ntut.edu.tw

Khongor Tsogt

Post-Doctoral Research Fellow, Department of Forestry and Natural Resources, National Chiayi University, 300 University Rd, Chiayi 60004, Taiwan. E-mail: khongor@gmail.com

Yen-Chieh Ouyang

Professor, Department of Electrical Engineering, National Chung Hsing University, 250 Kuo Kuang Rd., Taichung 402, Taiwan. E-mail: ycouyang@dragon.nchu.edu.tw

Chein-I Chang

Professor, Department of Computer Science and Electrical Engineering, UMBC, 1000 Hilltop Circle, Baltimore, MD 21250, USA. E-mail: cchang@umbc.edu

Abstract

Changes of Land Use and Land Cover (LULC) affect the atmospheric, climatic, and biological spheres of the Earth. An accurate LULC map offers detailed information for resource management and for intergovernmental cooperation on global warming and biodiversity reduction. This paper examined the effects of pansharpening and atmospheric correction on LULC classification. An Object-Based Support Vector Machine (OB-SVM) and a Pixel-Based Maximum Likelihood Classifier (PB-MLC) were applied for LULC classification. Results showed that atmospheric correction is not necessary for LULC classification if the classification is conducted on the original multispectral image. Nevertheless, pansharpening plays a much more important role in classification accuracy than atmospheric correction: it increased classification accuracy by 12% on average compared with classification without pansharpening. PB-MLC and OB-SVM achieved similar classification rates; the LULC classification accuracy obtained in this study using PB-MLC and OB-SVM is 82% and 89%, respectively. A combination of atmospheric correction, pansharpening, and OB-SVM can therefore offer promising LULC maps from WorldView-2 multispectral and panchromatic images.

Key words: LULC, remote sensing, object-based image analysis, pixel-based image analysis, Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM)


1. Introduction

Changes of land use and land cover (LULC) affect the atmospheric, climatic, and biological spheres of the Earth [1]-[3]. Carbon emissions have become increasingly significant in recent decades, driving global warming and extreme climate events. Changes of LULC can be caused by natural and/or anthropogenic disturbances such as stochastic events (storms and forest fires), landslides, and deforestation, as well as by climate-change-derived influences. Fortunately, through photosynthesis, plants are able to capture and store carbon dioxide (i.e., carbon sequestration), which helps to reduce the impacts of global warming. Therefore, continuous monitoring of terrestrial ecosystems plays an important role in the success of sustainable forest management and, in particular, in reducing carbon emissions caused by deforestation and degradation of forest ecosystems.

Carbon sequestration is generally expressed as biomass [4], [5] or net primary production (NPP) [6], [7]. Carbon is taken up from the atmosphere through photosynthesis (the gross primary production, GPP) and released to it through respiration (Ra); NPP is the difference between the two (NPP = GPP - Ra), so NPP is positive when GPP exceeds Ra and negative otherwise. Changes of NPP can greatly affect the global carbon balance and climate change [6], which has been a key issue of ecological studies in recent decades [8].

Recently, many studies have addressed the prediction of regional NPP [9]-[16], and many have indicated that the potential of carbon sequestration could be realized through land management practices such as sustainable timber production and farm afforestation [5], [17]-[19]. Because NPP varies continuously among terrestrial ecosystems or LULC types [8], an accurate LULC map is very important to support a precise estimation of NPP or carbon sequestration. High-resolution LULC maps play critical roles in: 1) reducing emissions from deforestation and forest degradation (REDD) [20]-[22], 2) producing accurate large-scale LULC maps and predicting LULC changes [23]-[25], 3) detecting the response of vegetation to environmental factors and estimating the spatiotemporal variations of NPP at multiple spatial scales [8], 4) predicting land surface temperature [26], and 5) calculating large-scale/subcanopy-based heterogeneous evapotranspiration [27].

Recent remote sensing technologies can simultaneously provide high-resolution (meter-scale) multispectral images (MS) and very-high-resolution (submeter-scale) panchromatic images (PAN). The MS and PAN images can be integrated by pansharpening techniques to produce a submeter-scale MS image. However, pansharpening may introduce noise because of heterogeneous components within the area of an MS pixel [28], and this problem might lower the accuracy of biophysical parameter estimation. The noise problem would be worse in landscapes with complicated LULC or highly variable vegetation canopy density, because measurements of biophysical parameters then involve nonlinear mixtures of two or more materials (e.g., soil and vegetation canopy) [28].

The effects of atmospheric correction or pansharpening have been demonstrated in many studies, with applications including LULC mapping [29], [30], forest volume estimation [31], land surface temperature [32], [33], and coastal dynamics [34]. Nevertheless, few of these studies have examined the relationship between LULC classification accuracy and pansharpening or atmospheric correction. Therefore, the objective of this manuscript is to examine the interaction effect of atmospheric correction and pansharpening on LULC classification accuracy. This paper utilized WorldView-2 multispectral and panchromatic images to conduct systematic comparisons of LULC mapping using a Pixel-Based Maximum Likelihood Classifier (PB-MLC) [35] and an Object-Based Support Vector Machine (OB-SVM) [36].

2. Materials and methods

2.1 Materials

A WorldView-2 image taken at UTC time 02:48:38.20 on November 30, 2011 was used for this study. It contains an 8-band multispectral image with a spatial resolution of 2.0 m per pixel and a single-band panchromatic image with a spatial resolution of 0.5 m per pixel. The spectral specifications of the multispectral image are Coastal: 400-450 nm, Blue: 450-510 nm, Green: 510-580 nm, Yellow: 585-625 nm, Red: 630-690 nm, Red Edge: 705-745 nm, Near Infrared 1 (NIR 1): 770-895 nm, and Near Infrared 2 (NIR 2): 860-1040 nm, and the single panchromatic band covers 450-800 nm. Figure 1 shows the geographical location of the study site in southern Taiwan. The IPCC (Intergovernmental Panel on Climate Change) LULC types contained in this area are forest, grassland, farm, wetland, residential/urban, and bareland. Due to the high resolution of the WorldView-2 image, details of the LULC can be revealed by visual image interpretation. The area was therefore divided into 10 classes: forest, grassland, farm (cropping farm), facility farm (protected-culture farm or greenhouse-based farming system), river and lake (two subclasses of wetland), urban, bareland, and stone (riverbed) and sandy soil (two subclasses of bareland).

{Figure 1 could be inserted here}

2.2 Image processing

The original WorldView-2 multispectral image is delivered as 16-bit digital numbers (DN). In this manuscript, the DN image was first converted to a radiance image using the gain and offset of each band provided in the image header file. Then, a radiative transfer model-based algorithm [37]-[39], bundled as the FLAASH (Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes) module in the ENVI software, was applied to correct the atmospheric effects and derive the reflectance image. The FLAASH model has been demonstrated to work well for the atmospheric correction of multispectral [40] and hyperspectral images [41], [42]. Finally, the "PanSharp" method, a least squares algorithm [43] bundled in the PCI software, was applied for image fusion. As a result, we obtained four experimental images in total: DN with/without pansharpening and reflectance with/without pansharpening. In the following steps, training samples of the LULC classes were selected by a random sampling method and evaluated by transformed divergence, and PB-MLC and OB-SVM were carried out for the classification. The overall accuracy and kappa coefficient of the classifications were further tested using a factorial ANOVA. Figure 2 presents the overall flowchart of the proposed framework.

{Figure 2 could be inserted here}

2.2.1 Atmospheric correction

As indicated in the radiative transfer model-based atmospheric correction algorithm, the at-sensor spectral radiance (L*) can be expressed as the sum of three components in Equation (1). The first term is the radiance that is reflected from the surface and travels directly into the sensor; the second term is the adjacency effect, i.e., the radiance from the surrounding surface that is scattered by the atmosphere into the sensor; and the third term is the radiance backscattered by the atmosphere without reaching the surface. In Equation (1), ρ is the pixel surface reflectance, ρ_c is the average surface reflectance of the surrounding region, S is the spherical albedo of the atmosphere (capturing the backscattered surface-reflected photons), L_a is the radiance backscattered by the atmosphere without reaching the surface, and A and B are surface-independent coefficients that vary with atmospheric and geometric conditions.

L* = A ρ / (1 - ρ_c S) + B ρ_c / (1 - ρ_c S) + L_a        (1)
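To make the roles of the coefficients concrete, the following minimal Python sketch inverts Equation (1) for the surface reflectance, assuming the coefficients A, B, S, and L_a are already available for a band and that the adjacency reflectance is estimated from a spatially averaged radiance L_e. In FLAASH these quantities are derived per band from MODTRAN simulations, so the function below (surface_reflectance) is illustrative only.

```python
import numpy as np

def surface_reflectance(L, L_e, A, B, S, La):
    """Illustrative inversion of Eq. (1); A, B, S, La come from an atmospheric model.

    L   : at-sensor radiance of the pixel (scalar or array)
    L_e : spatially averaged radiance used to estimate the background reflectance
    """
    # Background reflectance rho_c from the averaged radiance:
    # L_e ~ (A + B) * rho_c / (1 - rho_c * S) + La
    rho_c = (L_e - La) / (A + B + (L_e - La) * S)
    # Pixel reflectance from Eq. (1):
    # L = A * rho / (1 - rho_c * S) + B * rho_c / (1 - rho_c * S) + La
    return (L - La - B * rho_c / (1.0 - rho_c * S)) * (1.0 - rho_c * S) / A
```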

2.2.2 Pansharpening

Pansharpening, also called image fusion, refers to the process of integrating the geometric detail of a high spatial resolution panchromatic image with the spectral (color) information of a low spatial resolution multispectral image [43]-[45]. Although several related algorithms have been developed, only a few are widely used for commercial or industrial purposes. According to the comparative studies of Zhang and Mishra [46] and Du et al. [47], the PanSharp technique bundled in the PCI software provides the best fusion quality for all the sensors tested, including the IKONOS, GeoEye-1, QuickBird, and WorldView satellites. The PCI-PanSharp algorithm is based on a least squares approximation of the pixel-value relationship between the original MS image and the PAN image. Thanks to the least squares technique, PanSharp is able to find the best fit among the spectral values of the image bands being fused and to adjust the contribution of each band to reduce color distortion; it also eliminates the problem of dataset dependency (i.e., it reduces the influence of dataset variation) [46]. The PCI-PanSharp technique was therefore directly applied in this manuscript to fuse the multispectral and panchromatic images.
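The exact PCI PanSharp implementation is proprietary, but the least squares idea can be illustrated with a short sketch: a synthetic panchromatic band is fitted as a least-squares-weighted sum of the multispectral bands, and the ratio between the real and synthetic panchromatic bands is then used to inject spatial detail. The function below (ls_pansharpen) is an assumed, simplified stand-in for this family of methods, not the algorithm used in this study.

```python
import numpy as np

def ls_pansharpen(ms, pan):
    """Simplified least-squares-style fusion (illustrative, not the PCI PanSharp algorithm).

    ms  : (bands, H, W) multispectral image already resampled to the PAN grid (float)
    pan : (H, W) panchromatic image (float)
    """
    bands, H, W = ms.shape
    X = ms.reshape(bands, -1).T                 # one row per pixel, one column per band
    y = pan.reshape(-1)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # band weights that best reproduce PAN
    synthetic_pan = (X @ w).reshape(H, W)
    ratio = pan / np.maximum(synthetic_pan, 1e-6)
    return ms * ratio[None, :, :]               # inject the PAN spatial detail into each band
```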

2.3 Object-based classification by support vector machine

2.3.1 Extraction of LULC objects using multiresolution segmentation method

This study concerned the mapping of forest, grassland, farm, facility farm, bareland, stone, sand, river, lake, and urban areas. The multiresolution segmentation algorithm [48] was first applied to construct objects, in which the 8 bands of the WorldView-2 image were used simultaneously with equal weighting (1.0). Since these LULC objects normally show large variations in spectral features, a two-stage segmentation was applied to extract the LULC objects in the study site. In the first stage, the multiresolution segmentation was implemented to extract the urban, bareland, stone, sandy soil, and grassland classes using parameter values of 30, 0.9, and 0.5 for scale, color, and compactness, respectively. A small scale level helps to precisely delineate small objects, which occur particularly in highly developed areas. As a result, the urban, bareland, stone, and sandy soil classes could be well segmented; however, other LULC types that generally have large variations in spectral values will be over-segmented. In the second stage, the object segments were merged and re-segmented using a larger scale level to extract the forest, farm, facility farm, lake, and river classes, with 350, 0.9, and 0.5 as the scale, color, and compactness parameters, respectively. Basically, the value of the scale parameter depends on the dynamic range of spectral values and can be determined through a series of training and learning processes. Image objects are shaped based on local variations of spectral values; therefore, in contrast to shape, the spectral information of objects is more decisive for classification. For objects with similar spectral information, shape can contribute a marginal effect in the determination of object attributes. The values of color and compactness can also be determined by training and learning for a particular satellite image.

2.3.2 Example-based object classification using SVM

Examples of LULC objects were then selected via visual image interpretation to derive the object features for the Support Vector Machine (SVM). The features applied in the example-based classification contained spectral, textural, and spatial attributes; please refer to Table 1 for details of the feature attributes. This procedure is called Object-Based SVM classification (OB-SVM).

Table 1. Attributes definition of the object features used in the LULC classification. (Please refer to the attachment: Table1.docx)

Support Vector Machine (SVM) is a machine learning algorithm based on a linear discriminant for binary classification problems. A major advantage of SVM is that it does not require many training samples to obtain reliable statistical characteristics of each class; only a few key training samples are required to form a hyperplane. All testing samples are then classified based on the hyperplane, which divides the feature space, and each sample is assigned to the predicted class according to the side on which it falls. For a supervised binary classification problem (only two classes are mapped), if the training data are represented by {x_i, y_i}, i = 1, 2, ..., N, with y_i ∈ {-1, +1}, where N is the number of training samples, y_i = +1 for class ω1 and y_i = -1 for class ω2, then there exists at least one hyperplane (linear or non-linear) that can separate the two classes. As shown in Fig. 3, the pixels located at the edge of each class are the support vectors used for training. A pixel located beyond the optimal margin, for example the two black dots and the two white circles mixed into the opposite class, will be misclassified.

The generalized binary SVM classifier (Equations 2-3) maps the input vector x into a high-dimensional feature space and then constructs the optimal separating hyperplane in that space. In Equation (2), the λ_i are the Lagrange multipliers; the y_i are the class labels (+1 for class ω1 and -1 for class ω2); the x_i are the support vectors, which correspond to non-zero Lagrange multipliers; x is the input vector (candidate pixel) whose class label needs to be determined; w_0 is the bias of the hyperplane fitting; and K(x_i, x) is the kernel function (Equation 3) that gives the weights of nearby data points in estimating the target classes.

{Figure 3 could be inserted here}

f(x) = Σ_{i=1}^{N} λ_i y_i K(x_i, x) + w_0,  subject to Σ_{i=1}^{N} λ_i y_i = 0 and λ_i ≥ 0, i = 1, 2, ..., N        (2)

K(x_i, x) = exp(-γ ‖x_i - x‖²),  γ > 0        (3)

The most popular kernels are the polynomial, radial basis function (RBF), and sigmoid kernels. As indicated by Hsu et al. [50], the RBF kernel is a reasonable choice because it maps samples into a higher-dimensional space and can handle cases in which the relation between class labels and attributes is nonlinear. Based on the results of Huang et al. [51] and Mercier and Lennon [49], Tzotsos and Argialas [36] also indicated that the RBF kernel works very effectively and accurately in the classification of remote sensing data. The strength of the RBF kernel has been further demonstrated recently in several articles [50], [52]-[53]. In addition, RBF-based SVM has been applied to detect fire scars in forest land [54], [55] and for land cover classification [56] with satisfactory results. This manuscript therefore adopted the RBF as the SVM kernel for LULC training and classification.

In general, LULC classification deals with mapping problems of more than two classes. The binary SVM classifier can work as a multiclass classifier by combining several binary SVM classifiers through methods such as one-against-all, one-against-one, and the directed acyclic graph (DAG). Hsu and Lin [57] showed that the one-against-one and DAG methods are more suitable for practical use than one-against-all, so the one-against-one method was applied to the object-based LULC classification in this study. Suppose that there are k classes in an image; then at least k(k-1)/2 binary SVM classifiers should be applied, one to each pair of classes. In this study, the binary SVM classification was performed for all possible pairs of classes, where the RBF kernel parameters gamma (γ) and penalty (C) were assigned values of 0.01 and 100, respectively, based on prior learning; the Max-Wins policy was then used to determine the class of each candidate object.
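As an illustration, the RBF-kernel SVM described above can be configured in a few lines; the sketch below uses scikit-learn's SVC, which applies the one-against-one scheme for multiclass problems, with the gamma and C values reported in the text. The feature arrays are synthetic placeholders standing in for the object attributes of Table 1.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder object features (spectral/textural/spatial attributes) and labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 12))        # 200 example objects, 12 attributes
y_train = rng.integers(0, 10, size=200)     # 10 LULC classes
X_test = rng.normal(size=(50, 12))

# RBF kernel with gamma = 0.01 and penalty C = 100, as used in this study;
# SVC combines binary classifiers in a one-against-one fashion for multiclass data.
clf = SVC(kernel="rbf", gamma=0.01, C=100.0)
clf.fit(X_train, y_train)
predicted_labels = clf.predict(X_test)
```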

2.4 Pixel-based classification by maximum likelihood classifier

The Maximum Likelihood Classifier (MLC) is the most popular statistical classifier and is used here as the standard of evaluation. Since the MLC classifies individual pixels, the LULC classification using the MLC in this manuscript is called Pixel-Based MLC (PB-MLC). Training samples of the various LULC classes were collected based on visual image interpretation for signature training and further evaluated using the transformed divergence to determine suitable LULC signatures for classification.
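For reference, a minimal sketch of the Gaussian maximum likelihood decision rule used by PB-MLC is given below: each pixel is assigned to the class whose training signature (mean vector and covariance matrix) yields the largest Gaussian log-likelihood. The function name and the signature dictionary are illustrative.

```python
import numpy as np

def mlc_classify(pixels, class_stats):
    """Gaussian maximum likelihood classification of pixel spectra.

    pixels      : (N, bands) array of pixel vectors
    class_stats : {label: (mean_vector, covariance_matrix)} from the training signatures
    """
    labels, scores = list(class_stats), []
    for label in labels:
        mu, cov = class_stats[label]
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        # Gaussian log-likelihood (discriminant) up to an additive constant
        scores.append(-0.5 * logdet - 0.5 * np.einsum("ij,jk,ik->i", d, inv, d))
    return np.array(labels)[np.argmax(np.stack(scores), axis=0)]
```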

2.5 Determination of test samples for classification accuracy assessment


Suppose that the sampling error rate (α) is 0.01 and that the study site has N pixels, where N is 2,864,958 for the image with 2 m resolution or 49,439,808 for the image with 0.5 m resolution. Using the maximum coefficient of variation among the multispectral bands (CV), the minimum number of test samples, which is 10,749 pixels in our case, can be determined by Equation (4).

n ≥ N t_α² (CV)² / (N E² + t_α² (CV)²)        (4)

A stratified random sampling without replacement was carried out to collect the test samples for accuracy assessment. In the sampling process, 100 random points were first selected as seeds, and each class was required to have at least 10 points in order to overcome the prevalence effect [58], [59]. The test samples were then determined by expanding each seed to an area with a window size of 19 x 19 pixels, and the ground truth in each extended area was assigned pixel by pixel. A total of 35,876 test pixels were used for the assessment of LULC classification accuracy. In the stratified random sampling of test samples, any sample previously selected as a training sample was excluded from the test samples.
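A small sketch of Equation (4) is given below; the interpretation of E as the allowable sampling error and the use of the standard normal quantile for t_α are assumptions, since the text does not define them explicitly.

```python
from scipy.stats import norm

def min_test_samples(N, cv, alpha=0.01, E=0.01):
    """Minimum number of test samples according to Eq. (4).

    N     : number of pixels in the image
    cv    : maximum coefficient of variation among the multispectral bands
    alpha : sampling error rate; E : allowable error (assumed meaning)
    """
    t = norm.ppf(1 - alpha / 2)  # two-sided critical value (normal approximation)
    return N * t**2 * cv**2 / (N * E**2 + t**2 * cv**2)
```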

Briefly, the classification was first conducted on the 10 classes (forest, grassland, farm, facility farm, urban, bareland, stone, sandy soil, lake, and river), and a post-classification step was then applied to combine subclasses and generate the LULC map. For example, the classes stone, sandy soil, and bareland were recoded to bareland, and lake and river were recoded to wetland, since their attributes are identical, whereas farm and facility farm were kept as two separate classes due to their different attributes. The classification results of both OB-SVM and PB-MLC were evaluated by two indices, the overall accuracy (OA) and the kappa coefficient. The accuracy assessment was made on the test samples collected by stratified random sampling, where the strata were determined according to the image-interpreted LULC information.
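Both indices follow directly from the error matrix; a short sketch of their computation is shown below (a square confusion matrix with reference classes along one dimension and classified classes along the other is assumed).

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and kappa coefficient from a square confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement (OA)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
    return po, (po - pe) / (1.0 - pe)
```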

3. Results


3.1 Spectral separability of LULC training samples

The pixel samples collected for training are forest, grassland, urban, lake, river, bareland, sandy soil, stone, farm, and fac-farm (facility farm) (Table 2). Some of the LULC classes were divided into several subclasses because they are separable by a priori knowledge. The transformed divergence (TD) is 2000 in the upper-triangle entries of Table 2, indicating that each corresponding pair has very good separability in the original multispectral image. On the contrary, the lower-triangle entries show worse separability, with TD ranging from 1157 to 1999 in the pansharpened multispectral image. This poorer performance could be caused by many factors, such as mixed pixels and the physical and/or biophysical status of the classes. In contrast to the spectral confusion in the pansharpened image shown in Table 2, the spectral separability is clearly improved if atmospheric correction is applied prior to pansharpening. This improvement can be seen in the lower-triangle entries of Table 3, each of which has a transformed divergence of 2000, indicating good separability of the LULC classes.

To summarize the combined effects of atmospheric correction and pansharpening in terms of spectral separability, pansharpening lowers the spectral separability of the classes while atmospheric correction increases it. Practically, it is not necessary to implement atmospheric correction if classification is the only concern. Nevertheless, atmospheric correction was required for the subsequent object determination, so it should be implemented prior to the pansharpening process.
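The transformed divergence values in Tables 2 and 3 follow the usual definition based on the divergence between two Gaussian class signatures, scaled to a maximum of 2000; a compact sketch of that computation (assuming multivariate normal signatures) is given below.

```python
import numpy as np

def transformed_divergence(mu1, cov1, mu2, cov2):
    """Transformed divergence (0-2000) between two Gaussian class signatures."""
    inv1, inv2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    dm = (mu1 - mu2).reshape(-1, 1)
    # Divergence between the two multivariate normal class distributions
    div = 0.5 * np.trace((cov1 - cov2) @ (inv2 - inv1)) \
        + 0.5 * np.trace((inv1 + inv2) @ dm @ dm.T)
    return 2000.0 * (1.0 - np.exp(-div / 8.0))
```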


Table 2. Transformed divergence matrices of LULC training samples for the original digital number images - before atmospheric correction processing (upper triangle: non-pansharpened, lower triangle: pansharpened) (Please refer to the attachment: Table2.doc)


Table 3. Transformed divergence matrices of LULC training samples for the reflectance image - after atmospheric correction processing (upper triangle: non-pansharpened, lower triangle: pansharpened) (Please refer to the attachment: Table3.doc)

3.2 Generalized spectral characteristics of the LULC classes

The spectral curves derived from the digital number image show a partial upward shift caused by atmospheric effects (Fig. 4(a)). These atmospheric effects were successfully removed by atmospheric correction using FLAASH with the radiative transfer model, and the derived reflectance curves of forest, grassland, wetland, urban, bareland, and farm are closer to their typical curves after correction (Fig. 4(b)). Vegetated LULCs (forest, farm, and grassland) have typical absorption features in the blue and red bands, and a plateau in the near infrared region. In the region from red to near infrared, a reflectance curvature can be defined to distinguish grassland from forest and farm. Briefly, grassland shows a higher reflectance in the visible region together with a lower reflectance in the near infrared region; as a result, grassland shows a larger curvature than farm and forest. Farm shows a higher reflectance than forest across the visible to near infrared region, and a larger curvature in the red to near infrared region than forest. Although urban and bareland have very similar reflectance curves, urban shows a significantly higher reflectance in the visible to near infrared region. Wetland/water bodies show very strong absorption in the near infrared, which is useful to distinguish wetland from the other LULC types.

{Figure 4 could be inserted here}

4. Discussion

4.1 Classification accuracy comparison for variant processing and classifiers

Figure 5 (a-b) shows the LULC maps classified from the original DN image without pansharpening. Many pixels in the center portion of the image were classified as residential/urban (the red class). This situation was improved by pansharpening, which upgrades the pixel resolution to 0.5 m. A distinct block pattern in the lower right of the study area can be seen in Fig. 5(c) using PB-MLC, and this result is further improved by OB-SVM, which delineates the block pattern even more clearly in Fig. 5(d). The bareland surrounding the residential/urban area is another example demonstrating the strength of OB-SVM in Fig. 5(d).

{Figure 5 could be inserted here}

The error matrix of the PB-MLC classification is shown in Table 4 and that of the OB-SVM classification in Table 5. As can be seen in Table 6, the OA of LULC classification using PB-MLC was between 78-81% for the original image and 79-82% for the atmospherically corrected image. The accuracy difference between the images with and without atmospheric correction was around 1%. This insignificant improvement shows that atmospheric correction provided limited benefit, because the between-class spectral separability is already excellent in the DN image (Tables 2 and 3). The result also demonstrates that atmospheric correction caused no negative influence on the LULC classification at the same image resolution. In the case of OB-SVM classification, the OA difference between the images with and without atmospheric correction was also around 1%, which agrees with the PB-MLC results. This finding is duplicated by the kappa coefficient and thus indicates that atmospheric correction plays an insignificant role in LULC classification.

Table 4. Error matrix of the PB-MLC classification on atmospherically-corrected and pansharpened reflectance image

Table 5. Error matrix of the OB-SVM classification on atmospherically-corrected and pansharpened reflectance image

Table 6 tabulates the effects of pansharpening on LULC classification for each classifier. An increase in OA of about 3% and 15% was achieved by pansharpening with PB-MLC and OB-SVM, respectively. This improvement is reproduced with or without atmospheric correction. The kappa coefficient again agrees with the OA results, with improvements of 3% and 18%, respectively. From the viewpoint of classifier efficiency, the OA and kappa coefficient achieved by OB-SVM are about 5% lower than those of PB-MLC for the non-pansharpened images (with or without atmospheric correction), whereas OB-SVM is about 7% higher than PB-MLC for the pansharpened images (with or without atmospheric correction).

An object in an image is a group of pixels with similar spectral values that can be interpreted as an identifiable or single material. If a real object has an extent smaller than the pixel size of an image, the object will be completely mixed or confused with the prevailing target(s) next to or surrounding it. Such an object can, however, be identified when a very-high-resolution image is produced by fusing in the panchromatic band; therefore, the accuracy difference with and without pansharpening is obvious. A two-factor factorial design, with pansharpening as Factor A and classifier as Factor B, can be used to examine their joint effects statistically. As shown in Table 7, the effect of Factor B is not significant at the 0.05 probability level, whereas the effect of Factor A and the interaction between Factors A and B are significant at the same level. In brief, pansharpening increases the spatial resolution of the multispectral image and helps to achieve superior LULC classification accuracy. The average accuracy of classification with the pansharpened image is 84.1%, which is significantly better than the 74.1% obtained without pansharpening (Table 8). Overall, the best accuracies are 87.5% and 80.6%, achieved by OB-SVM and PB-MLC respectively for the pansharpened image; the next is 76.3% by PB-MLC for the non-pansharpened image, and then 71.8% by OB-SVM for the non-pansharpened image (Table 8).
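The two-factor factorial analysis can be reproduced with standard statistical software; the sketch below uses statsmodels with hypothetical accuracy replicates, since the exact replicate values entering the ANOVA are not listed in the text.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical accuracy replicates for the 2x2 design (pansharpening x classifier).
df = pd.DataFrame({
    "acc":        [0.78, 0.79, 0.81, 0.82, 0.73, 0.75, 0.88, 0.89],
    "sharpen":    ["no", "no", "yes", "yes", "no", "no", "yes", "yes"],
    "classifier": ["MLC", "MLC", "MLC", "MLC", "SVM", "SVM", "SVM", "SVM"],
})
model = smf.ols("acc ~ C(sharpen) * C(classifier)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # sums of squares, F, and p for A, B, and A*B
```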

Table 6. Comparison of LULC classification accuracy using variant processing and classifiers (Please refer to the attachment: Table6.doc)

Table 7. ANOVA table of the two-factor factorial experiment. (Please refer to the attachment: Table7.doc)

Table 8. Least significant difference (LSD) method determined grouping of the average accuracy for classification with/without pansharpening #. (Please refer to the attachment: Table8.doc)


4.2 A regime of image processing for quantitative measurements and temporal change analysis

LULC classification is an issue of descriptive attribute mapping. It is generally accomplished directly using the satellite image delivered in digital numbers (DN) or gray levels. This kind of image analysis can be implemented by visual interpretation and digitization or by automated digital analysis techniques. The analysis is generally performed by an analyst generating a qualitative assessment of the digital image, both in environmental remote sensing and in histopathology [60]. Such image analysis is defined as qualitative analysis because of its qualitative outputs [61].

Because the pixel DN values of a satellite image are definitively influenced by the atmospheric conditions on the day of acquisition, satellite images of a particular area captured on different days are effectively from different sources. Without advanced image standardization such as atmospheric correction, the images cannot meet consistency, a criterion for measuring geospatial data quality in metric measurement [62]. Compared with LULC classification, the analysis of quantitative measurements and temporal changes of the environment has become a major issue of remote sensing in recent decades. For example, quantitative analysis for terrestrial ecosystem management involves at least biomass/volume/carbon stocks, land greenness coverage or leaf area index, ecosystem primary productivity, chlorophyll content, phenology, water deficiency or drought stress, fire risk and damage severity, and forest degradation. The analysis of such quantitative attributes is generally accomplished by numerical modeling techniques. Atmospheric correction offers standardized reflectance by removing atmospheric and surrounding effects; therefore, it can meet the needs of quantitative attribute prediction. Earth science studies using remote sensing images benefit particularly for temporal biophysical parameters of land surfaces. For example, forest volume/biomass/carbon stocks can be predicted precisely and logically using vegetation indices derived from atmospherically corrected reflectance, such as NDVI (Normalized Difference Vegetation Index) and SAVI (Soil-Adjusted Vegetation Index). Furthermore, atmospheric correction does not induce negative impacts on LULC classification, while it underpins the basic requirement of quantitative attribute analysis. Therefore, a regime of satellite image processing that can be adopted for simultaneous use in LULC classification and quantitative measurement analysis is to apply atmospheric correction for standardization and then pansharpening techniques for spatial resolution improvement.
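For completeness, the two vegetation indices mentioned above are simple band combinations of the corrected reflectance; a minimal sketch is given below, where L = 0.5 is the commonly used soil adjustment factor.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from surface reflectance bands."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L is the canopy background adjustment factor."""
    return (1.0 + L) * (nir - red) / (nir + red + L)
```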

5. Conclusions

The LULC classification accuracy of both classifiers, PB-MLC and OB-SVM, using the WorldView-2 multispectral image is not related to atmospheric correction at the 2-meter image resolution. However, the accuracy can be significantly improved by pansharpening the multispectral and panchromatic images to a 0.5-meter image resolution. If the multispectral image is atmospherically corrected and pansharpened prior to classification, OB-SVM and PB-MLC achieve their best classifications, with overall accuracies of 89% and 82%, respectively. The 7% difference between OB-SVM and PB-MLC is identical to the difference observed by Feyzizadeh and Helali [63] between OB-NNC (Object-Based Nearest Neighbor Classifier) and PB-MLC. This result suggests that the support vector machine is more suitable than the maximum likelihood classifier for LULC classification; it also agrees with the LULC classification made by Srivastava et al. [64] using MODIS and TM images.

Although Duro et al. [65] concluded that there is no advantage in preferring one image analysis approach over another for mapping broad land cover types in agricultural environments using medium spatial resolution earth observation imagery, we suggest that the object-based support vector machine classifier can achieve satisfactory LULC classification accuracy with very high resolution multispectral imagery. Recall that terrestrial ecosystems such as forest, farmland, and grassland are generally distributed with high spatio-temporal variation [8], [66]. In order to extend the valuable applications of remote sensing images, it is recommended that the multispectral image be radiometrically corrected and then pansharpened. The resulting very-high-resolution reflectance image can be directly applied to carry out LULC classification and to derive temporal quantitative information by image analysis. Most importantly, it offers the spatial detail needed for precise information on natural resources and environmental observations of the Earth.

Acknowledgement

The authors would like to thank the Aerial Survey Office, Forest Bureau of Taiwan, ROC, for their financial support and assistance with data collection under project 102AS-13.3.1-FB-e3.


References

[1] Feddema JJ, Oleson KW, Bonan GB, Mearns LO, Buja LE, Meehl GA, et al. The importance of land-cover change in simulating future climates. Science 2005;310:1674-8.

amani M, H

[2] Pielke Sr. RA. Land use and climate change. Science 2005;310:1625-26.

[3] Pielke Sr. RA, Pitman A, Niyogi D, Mahmood R, McAlpine C, Hossain F, et al. Land use/land cover changes and climate: modeling analysis and observational evidence. WIREs Clim Change 2011;2:828-50.

[4] Brown S, Lugo AE, Chapman J. Biomass of tropical tree plantations and its implications for the global carbon budget. Can J Forest Res 1986;16: 390-4. „V

[5] Lin C, Lin CH. Comparison of Carbon Sequestration Potential in Agricultural and Afforestation Farming Systems. Sci Agr 2013;70(2):93-101.

[6] Nemani RR, Keeling CD, Hashimoto H, Jolly WM, Piper SC, Tucker CJ, et al. Climate-driven increases in global terrestrial net primary production from 1982 to 1999. Science 2003;300:1560-63.

[7] Girardin CAJ, Malhi Y, Aragao LEOC, Mamani M, Huaraca-Huasco W, Durand L, et al. Net primary productivity allocation and cycling of carbon along a tropical forest elevational transect in the Peruvian Andes. Glob Change Biol 2010;16(12):3176-92.

[8] Lin C, Dugarsuren N. Deriving the temporal and spatial pattern of net primary productivity in terrestrial ecosystems of Mongolia using MODIS NDVI Imageries and CASA model. Photogramm Eng Rem S 2015;In Press.

[9] Schimel D, Melillo J, Tian H, McGuire AD, Kicklighter D, Kittel T, et al. Contribution of increasing CO2 and climate to carbon storage by ecosystems of the United States. Science 2000;287:2004-6.

[10] Nemani RR, White M, Thornton P, Nishida K, Reddy S, Jenkins S, et al. Recent trends in hydrologic balance have enhanced the terrestrial carbon sink in the United States. Geophys Res Lett 2002;29(10):106.

[11] Ahl DE, Gower ST, Mackay DS, Burrows SN, Norman JM, Diak GR. The effects of aggregated land cover data on estimating NPP in northern Wisconsin. Remote Sens Environ 2005;97:1-14.

[12] Turner DP, Ritts WD, Cohen WB, Gower ST, Running SW, Zhao M, et al. Evaluation of MODIS NPP and GPP products across multiple biomes. Remote Sens Environ 2006;102:282-92.

[13] Gehrung J, Scholz Y. The application of simulated NPP data in improving the assessment of the spatial distribution of biomass in Europe. Biomass Bioenerg 2009;33:712-20.

[14] Tang G, Beckage B, Smith B, Miller PA. Estimating potential forest NPP, biomass and their climatic sensitivity in New England using a dynamic ecosystem model. Ecosphere 2010;1(6):art18.

[15] Pachavo G, Murwira A. Remote sensing net primary productivity (NPP) estimation with the aid of GIS modelled shortwave radiation (SWR) in a Southern African Savanna. Int J Appl Earth Obs 2014;30:217-26.

[16] Yang H, Mu S, Li J. Effects of ecological restoration projects on land use and land cover change and its influences on territorial NPP in Xinjiang, China. Catena 2014;115:85-95.

[17] Hollinger DY, Maclaren JP, Beets PN, Turland J. Carbon sequestration by New Zealand's plantation forests. New Zeal J For Sci 1993;23:194-208.

[18] Isagi Y, Kawahara T, Kamo K, Ito H. Net production and carbon cycling in a bamboo Phyllostachys pubescens stand. Plant Ecol 1997;130: 41-52.

[19] Shively GE, Zelek CA, Midmore DJ, Nissen TM. Carbon sequestration in a tropical landscape: an economic model to measure its incremental cost. Agroforest Syst 2004;60:189-97.

[20] UN-REDD Programme. The UN-REDD Programme Strategy 2011-2015, The United Nations Collaborative Programme on Reducing Emissions from Deforestation and Forest Degradation in Developing Countries. Geneva, Switzerland: UN-REDD Programme Secretariat; 2009. p. 5-14.

[21] Salimon CI, Putz FE, Menezes-Filho L, Anderson A, Silveira M, Brown IF, et al. Estimating state-wide biomass carbon stocks for a REDD plan in Acre, Brazil. Forest Ecol Manag 2011;262:555-60.

[22] Yanai AM, Fearnside PM, de Alencastro Graça PML, Nogueira EM. Avoided deforestation in Brazilian Amazonia: Simulating the effect of the Juma Sustainable Development Reserve. Forest Ecol Manag 2012;282:78-91.

[23] Veldkamp A, Lambin EF. Predicting land-use change. Agr Ecosyst Environ 2001;85:1-6.

[24] de Paul Obade V, Lal R. Assessing land cover and soil quality by remote sensing and geographical information systems (GIS). Catena 2013;104:77-92.

[25] Cockx K, Van de Voorde T, Canters F. Quantifying uncertainty in remote sensing-based urban land-use mapping. Int J Appl Earth Obs 2014;31:154-66.

[26] Setturu B, Rajan KS, Ramachandra TV. Land Surface Temperature Responses to Land Use Land Cover Dynamics. Geoinfor Geostat: An Overview 2013;1:4.

[27] Moffett KB, Gorelick SM. A method to calculate heterogeneous evapotranspiration using submeter thermal infrared imagery coupled to a stomatal resistance submodel. Water Resour Res 2012;48:W01545.

[28] Inamdar AK, French A, Hook S, Vaughan G, Luckett W. Land surface temperature retrieval at high spatial and temporal resolutions over the southwestern United States, J Geophys Res 2008;113:D07107.

[29] Kolios S, Stylios CD. Identification of land cover/land use changes in the greater area of the Preveza peninsula in Greece using Landsat satellite data. Appl Geogr 2013;40:150-60.

[30] Vanonckelen S, Lhermitte S, Van Rompaey A. The effect of atmospheric and topographic correction on pixel-based image composites: Improved forest cover detection in mountain environments. Int J Appl Earth Obs 2015;35:320-8.

[31] Berra EF, Fontana DC, Pereira RS. Accuracy of Forest Stem Volume Estimation by TM/Landsat Imagery with Different Geometric and Atmospheric Correction Methods. Int J Appl Sci Tech 2014;4(3):108-16.

[32] Ghosh A, Joshi PK. Hyperspectral imagery for disaggregation of land surface

temperature with selected regression algorithms over different land use land cover scenes.

ISPRS J Photogram 2014;96:76-93.

[33] Wu P, Shen H, Zhang L, Göttsche FM. Integrated fusion of multi-scale polar-orbiting and geostationary satellite observations for the mapping of high spatial and temporal resolution land surface temperature. Remote Sens Environ 2015;156:169-81.

[34] Deidda M, Sanna G. Pre-processing of high resolution satellite images for sea bottom classification. Ital J Remote Sens 2012;44(1):83-95.

[35] Ahmad A, Quegan S. Analysis of Maximum Likelihood Classification on Multispectral Data. Appl Math Sci 2012;6(129):6425-36.

[36] Tzotsos A, Argialas D. Support Vector Machine Classification for Object-Based Image Analysis. In: Blaschke T, Lang S, Hay GJ, editors. Object-Based Image Analysis -Lecture Notes in Geoinformation and Cartography. Berlin, Germany: Springer Berlin Heidelberg; 2008, p. 663-77.

[37] Adler-Golden SM, Matthew MW, Bernstein LS, Levine RY, Berk A, Richtsmeier SC, et al. Atmospheric correction for short-wave spectral imagery based on MODTRAN4. In: Proc. SPIE, Imaging Spectrometry V, Denver CO, USA; 1999. vol. 3753. p.61-9.

[38] Matthew MW, Adler-Golden SM, Berk A, Richtsmeier SC, Levine RY, Bernstein LS, et al. Status of atmospheric correction using a MODTRAN4-based algorithm. In: Proc. SPIE, Algorithms for Multispectral, Hyperspectral, and Ultraspectral Imagery VI. 2000. vol. 4049. p.199-207.

[39] Keller J, Bojinski S, Prevot ASH. Simultaneous retrieval of aerosol and surface optical

properties using data of the Multi-angle Imaging SpectroRadiometer (MISR). Remote Sens Environ 2007;107:120-37.

[40] Fernández-Manso O, Fernández-Manso A, Quintano C. Estimation of aboveground biomass in Mediterranean forests by statistical modelling of ASTER fraction images. Int J Appl Earth Obs 2014;31:45-56.

[41] Magendran T, Sanjeevi S. Hyperion image analysis and linear spectral unmixing to evaluate the grades of iron ores in parts of Noamundi, Eastern India. Int J Appl Earth Obs 2014;26:413-26.

[42] Moses WJ, Gitelson AA, Perk RL, Gurlin D, Rundquist DC, Leavitt BC, Barrow TM, Brakhage P. Estimation of chlorophyll-a concentration in turbid productive waters using airborne hyperspectral data. Water Res 2012;46:993-1004.

[43] Zhang Y. Understanding Image Fusion. Photogramm Eng Rem S 2004; 70(6):657-61.

[44] Wald L, Ranchin T, Mangolini M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm Eng Rem S 1997;63(6):691-9.

[45] Ranchin T, Wald L. Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation. Photogramm Eng Rem S 2000;66(1):49-61.

[46] Zhang Y, Mishra RK. A review and comparison of commercially available pan-sharpening techniques for high resolution satellite image fusion. In: Proc. IGARSS'12, Proceedings of Geoscience and Remote Sensing Symposium (IGARSS), 2012 IEEE International. Munich, Germany; 2012. vol. 1, p.182-5.

[47] Du Q, Younan NH, King R, Shah VP. On the Performance Evaluation of Pan-Sharpening Techniques. IEEE Geosci Remote Sens Lett 2007;4(4):518-22.

[48] Baatz M, Schäpe A. Multiresolution segmentation - an optimization approach for high quality multi-scale image segmentation. In: Strobl J et al., editors. Angewandte Geographische Informationsverarbeitung XII. Wichmann, Heidelberg; 2000. p.12-23.

[49] Mercier G, Lennon M. Support vector machines for hyperspectral image classification with spectral-based kernels. In: Proc. IGARSS'03, Proceedings of Geoscience and Remote Sensing Symposium (IGARSS), 2003 IEEE International. Toulouse, France; 2003. vol. 1, p.288-90.


[50] Hsu CW, Chang CC, Lin CJ. A practical guide to support vector classification. Technical report. Department of Computer Science and Information Engineering, National Taiwan University. 2010.

[51] Huang C, Davis LS, Townshend JRG. An assessment of support vector machines for land cover classification. Int J Remote Sens 2002;23(4):725-49.

[52] Widjaja D, Varon C, Dorado AC, Suykens JA, Van Huffel S. Application of kernel principal component analysis for single-lead-ECG-derived respiration. IEEE T Bio-med Eng 2012;59(4):1169-76.

[53] Gaspar P, Carbonell J, Oliveira JL. On the parameter optimization of Support Vector Machines for binary classification. J Integr Bioinf 2012;9(3):201.

[54] Ko BC, Cheong KH, Nam JY. Fire detection based on vision sensor and support vector machines. Fire Safety J 2009;44:322-9.

[55] Petropoulos GP, Kontoes C, Keramitsoglou I. Burnt area delineation from a uni-temporal perspective based on Landsat TM imagery classification using Support Vector Machines. Int J Appl Earth Obs 2011;13:70-80.

[56] Koetz B, Morsdorf F, van der Linden S, Curt T, Allgower B. Multi-source land cover classification for forest fire management based on imaging spectrometry and LiDAR data. Forest Ecol Manag 2008;256:263-71.

[57] Hsu CW, Lin CJ. A comparison of methods for multiclass support vector machines. IEEE T Neural Networ 2002;13:415-25.

[58] Vach W. The dependence of Cohen's kappa on the prevalence does not matter. J Clin Epidemiol 2005;58(7):655-61.

[59] Allouche O, Tsoar A, Kadmon R. Assessing the accuracy of species distribution models: prevalence, kappa and the true skill statistic (TSS). J Appl Ecol 2006;43:1223-32.


[60] Apfeldorfer C, Ulrich K, Jones G, Goodwin D, Collins S, Schenck E, Richard V. Object orientated automated image analysis: quantitative and qualitative estimation of inflammation in mouse lung. Diagn Pathol 2008;3(Suppl 1):S16.

[61] Goldszal AF, Davatzikos C, Pham DL, Yan MX, Bryan NR, Resnick SM. An Image-Processing System for Qualitative and Quantitative Volumetric Analysis of Brain Images. J Comput Assist Tomo 1998;22(5):827-37.

[62] Xia J. Metrics to measure open geospatial data quality. Issues Sci Technol Librarianship 2012;68.

[63] Feyzizadeh B, Helali H. Comparison pixel-based, object-oriented methods and effective parameters in classification land cover/land use of west province Azerbaijan. Phys Geogr Res Q 2010;71:73-84.

[64] Srivastava PK, Han D, Rico-Ramirez MA, Bray M, Islam T. Selection of classification techniques for land use/land cover change investigation. Adv Space Res 2012;50:1250-65.

[65] Duro DC, Franklin SE, Dube MG. A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens Environ 2012;118:259-72

[66] Crabtree R, Potter C, Mullen R, Sheldon J, Huang S, Harmsen J, et al. A modeling and spatio-temporal analysis framework for monitoring environmental change using NPP as an ecosystem indicator. Remote Sens Environ 2009;113:1486-96.

Figure captions:

Fig. 1. Study site and the locations of test samples for classification accuracy assessment.

Fig. 2. Image analysis workflow applied for LULC classification.

Fig. 3. Hyperplanes for the binary SVM classifier in the linearly separable case (a) and the non-linearly separable case (b) (modified from [49]).

Fig. 4. An example of the spectral curves of forest, grassland, wetland, urban, bareland, and farm derived from (a) the 16-bit original digital number image and (b) the atmospherically corrected 1000x-rescaled reflectance image.

Fig. 5. LULC classification map derived from the original 2m-pixel size DN image by the methods of (a) PB-MLC and (b) OB-SVM, and the ones derived from the atmospherically-corrected and pansharpened reflectance image by the methods of (c) PB-MLC and (d) OB-SVM.

Table captions:

Table 1. Attributes definition of the object features used in the LULC classification.

Table 2. Transformed divergence matrices of LULC training samples for the original digital number images - before atmospheric correction processing (upper triangle: non-pansharpened, lower triangle: pansharpened)

Table 3. Transformed divergence matrices of LULC training samples for the reflectance image - after atmospheric correction processing (upper triangle: non-pansharpened, lower triangle: pansharpened)

Table 4. Error matrix of the PB-MLC classification on atmospherically-corrected and pansharpened reflectance image

Table 5. Error matrix of the OB-SVM classification on atmospherically-corrected and pansharpened reflectance image

Table 6. Comparison of LULC classification accuracy using variant processing and classifiers

Table 7. ANOVA table of the two-factor factorial experiment.

Table 8. Least significant difference (LSD) method determined grouping of the average accuracy for classification being with/without pansharpening #.

Table 1. Attributes definition of the object features used in the LULC classification.

The attributes comprise one spectral feature (Spectral_Mean) and five shape features (Compactness, Roundness, Form Factor, Rectangular_Fit, Main Direction):

Spectral_Mean: Mean value of the pixels comprising the region in band i.

Compactness: A shape measure that indicates the compactness of the polygon. A circle is the most compact shape, with a value of 1/π; the compactness value of a square is 1/(2·sqrt(π)). Compactness = sqrt(4 · Area / π) / outer contour length.

Roundness: A shape measure that compares the area of the polygon to the square of the maximum diameter of the polygon. The "maximum diameter" is the length of the major axis of an oriented bounding box enclosing the polygon. The roundness value for a circle is 1, and the value for a square is 4/π. Roundness = 4 · Area / (π · Major_Length²).

Form Factor: A shape measure that compares the area of the polygon to the square of the total perimeter. The form factor value of a circle is 1, and the value of a square is π/4.

Rectangular_Fit: A shape measure that indicates how well the shape is described by a rectangle. This attribute compares the area of the polygon to the area of the oriented bounding box enclosing the polygon. The rectangular fit value for a rectangle is 1.0, and the value for a non-rectangular shape is less than 1.0.

Main Direction: The angle subtended by the major axis of the polygon and the x-axis, in degrees. The main direction value ranges from 0 to 180 degrees; 90 degrees is North/South, and 0 or 180 degrees is East/West.

Table 2. Transformed divergence matrices of LULC training samples for the original digital number images - before atmospheric correction processing (upper triangle: non-pansharpened, lower triangle: pansharpened)

Forest Sandy soil Grassland Lake River Stone Urban Bareland Farm Fac-farm

Forest 2000 2000 2000 2000 2000 2000 2000 2000 2000

Sandy soil 1979 2000 2000 2000 2000 2000 2000 2000 2000

Grassland 1999 1987 2000 2000 2000 2000 2000 2000 2000

Lake 1999 1998 1746 2000 2000 2000 2000 2000 2000

River 1999 1999 1561 1811 2000 2000 2000 2000 2000

Stone 1999 1999 1993 1996 1576 2000 2000 2000 2000

Urban 1999 1991 1752 1812 1157 1890 2000 2000 2000

Bareland 1999 1938 1873 1874 1925 1998 1522 2000 2000

Farm 1999 1998 1678 1954 1367 1831 1447 1713 2000

Fac-farm 1999 1999 1994 1997 1955 1778 1985 1999 1974

Table 3. Transformed divergence matrixes of LULC training samples for the reflectance image - after atmospheric correction processing (upper triangle: non-pansharpened, lower triangle: pansharpened)

Forest Sandy soil Grassland Lake River Stone Urban Bareland Farm Fac-farm

Forest 2000 2000 2000 2000 2000 2000 2000 1999 2000

Sandy soil 2000 2000 2000 2000 2000 2000 2000 2000 2000

Grassland 2000 2000 2000 2000 2000 2000 2000 2000 2000

Lake 2000 2000 2000 2000 2000 2000 2000 2000 2000

River 2000 2000 2000 2000 2000 2000 2000 2000 2000

Stone 2000 1999 2000 2000 2000 2000 2000 2000 2000

Urban 2000 2000 2000 2000 2000 2000 2000 2000 2000

Bareland 2000 2000 2000 2000 2000 2000 2000 2000 2000

Farm 1999 2000 1999 2000 2000 2000 2000 2000 2000

Fac-farm 2000 2000 2000 2000 2000 2000 2000 2000 2000

Table 4. Error matrix of the PB-MLC classification on atmospherically-corrected and pansharpened reflectance image

Class       Forest  Grassland  Wetland  Urban  Bareland  Farm  Fac-farm  Total   Producer's accuracy (%)  User's accuracy (%)

Forest 5521 2 0 0 140 0 0 5663 97.49 85.2

Grassland 0 2745 0 0 299 203 0 3247 84.54 62.5

Wetland 0 0 2749 0 0 0 0 2749 100.00 84.61

Urban 0 5 139 4335 73 1954 257 6763 64.10 99.20

Bareland 959 1573 0 5 6684 30 55 9306 71.82 91.04

Farm 0 67 361 13 8 4243 0 4692 90.43 65.96

Fac_farm 0 0 0 17 138 3 3298 3456 95.43 91.36

Total 6480 4392 3249 4370 7342 6433 3610 35876

Table 5. Error matrix of the OB-SVM classification on atmospherically-corrected and pansharpened reflectance image

Class       Forest  Grassland  Wetland  Urban  Bareland  Farm  Fac-farm  Total   Producer's accuracy (%)  User's accuracy (%)

Forest 6374 1 0 0 0 68 0 6443 98.93 98.36

Grassland 25 2731 0 1 13 347 0 3117 87.62 62.18

Wetland 0 0 2875 37 0 0 0 2912 98.73 88.49

Urban 0 0 0 3686 280 0 7 3973 92.78 84.35

Bareland 4 62 372 633 5950 39 37 7097 83.84 92.49

Farm 77 1595 0 8 4 6886 8 8578 80.28 93.79

Fac_farm 0 3 2 5 186 2 3558 3756 94.73 98.56

Total 6480 4392 3249 4370 6433 7342 3610 35876

Table 6. Comparison of LULC classification accuracy using variant processing and classifiers

Classification   Original DN image (without atmospheric correction)      Reflectance image (atmospherically corrected)
method           Without pansharpening   With pansharpening              Without pansharpening   With pansharpening
                 Overall     Kappa       Overall     Kappa               Overall     Kappa       Overall     Kappa
PB-MLC           78.32%      0.7434      81.05%      0.7756              78.99%      -           82.44%#     0.7920
OB-SVM           73.03%      0.6848      88.40%      0.8622              75.17%      0.7056      89.36%*     0.8735

* and # indicate the better accuracy for the classified LULC maps shown in Fig. 5(d) and Fig. 5(c), respectively.

Table 7. ANOVA table of the two-factor factorial experiment.

Sources           Sum of Squares   df   Mean Square   F          Sig. Probability
Model             10.337                2.584         2967.660   <0.001
Sharpening (A)    0.060                 0.060         68.928     <0.001
Classifier (B)    0.000                 0.000         0.459      0.511
A * B             0.005                 0.005         5.627      0.035
Error             0.010                 0.001
Total             10.348

Table 8. Least significant difference (LSD) method determined grouping of the average accuracy for classification being with/without pansharpening #.

                               Without pansharpening         With pansharpening
                               PB-MLC       OB-SVM           PB-MLC       OB-SVM
Interaction (A*B)
  Average accuracy             0.763        0.718            0.806        0.875
  Grouping                     b
Effect of Factor A
  Average accuracy                    0.741                         0.841
  Grouping                            a                             b

# Different letters in the "Grouping" rows of the Interaction and the Effect of Factor A indicate that the corresponding average accuracies are statistically different at the 0.05 probability level.
