Hindawi Publishing Corporation EURASIP Journal on Advances in Signal Processing Volume 2007, Article ID 49597, 10 pages doi:10.1155/2007/49597

Research Article

An Efficient Kernel Optimization Method for Radar High-Resolution Range Profile Recognition

Bo Chen, Hongwei Liu, and Zheng Bao

National Key Laboratory for Radar Signal Processing, Xidian University, Xi'an 710071, Shaanxi, China Received 15 September 2006; Accepted 5 April 2007 Recommended by Christoph Mecklenbrauker

A kernel optimization method based on a fusion kernel for high-resolution range profile (HRRP) recognition is proposed in this paper. Based on the fusion of l1-norm and l2-norm Gaussian kernels, our method combines their different characteristics so that not only is the kernel function optimized but also the speckle fluctuations of HRRP are restrained. The proposed method is then employed to optimize the kernel of kernel principal component analysis (KPCA), and the classification performance of the extracted features is evaluated via a support vector machines (SVMs) classifier. Finally, experimental results on benchmark and radar-measured data sets are compared and analyzed to demonstrate the efficiency of our method.

Copyright © 2007 Bo Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. INTRODUCTION

Radar automatic target recognition (RATR) aims to identify an unknown target from its radar-echoed signatures. A target high-range-resolution profile contains more detailed target structure information than low-range-resolution radar echoes, so it plays an important role in the RATR community [1-4]. As is known, a radar HRRP is a strong function of target aspect, and serious speckle fluctuation may occur when the target-radar orientation changes, which makes HRRP RATR a challenging task. In addition, a target may appear at any position in a real system, so the position of an observed HRRP within a time window varies between measurements, and this time-shift variation should be compensated before classification [1-4].

Kernel methods have been applied successfully to various problems in the machine learning community. A kernel-based algorithm is a nonlinear version of a linear algorithm in which, through a nonlinear function $\Phi(\mathbf{x})$, the input vector $\mathbf{x}$ is first transformed to a higher-dimensional space $F$ in which we only need to compute inner products (via a kernel function). The attractiveness of such algorithms stems from their elegant treatment of nonlinear problems and their efficiency in high-dimensional problems. For HRRP recognition, complex nonlinear relations exist between targets due to their noncooperative and maneuvering characteristics. Therefore, kernel methods cannot be directly applied to recognition unless the problems described above are addressed, which will significantly improve the classification performance [1].

Given two input vectors x and y, their inner product in feature space, F, can be written in the form of kernel K as

$$K(\mathbf{x}, \mathbf{y}) = \bigl(\Phi(\mathbf{x}) \cdot \Phi(\mathbf{y})\bigr). \qquad (1)$$

The popular kernel functions are the Gaussian kernel $K(\mathbf{x}, \mathbf{y}) = \exp(-\gamma \|\mathbf{x} - \mathbf{y}\|^2)$ with $\gamma > 0$ and the polynomial kernel $K(\mathbf{x}, \mathbf{y}) = ((\mathbf{x} \cdot \mathbf{y}) + 1)^p$ with $p \in \mathbb{N}$. The choice of the right embedding is of crucial importance, since each kernel creates a different structure in the embedding space. The ability to assess the quality of an embedding is hence a crucial task in the theory of kernel machines. Recently, Xiong et al. [5] proposed a method for optimizing the kernel function by maximizing a class separability criterion in the empirical feature space. In this paper, we give an extension of that method which can fuse multiple kernel functions. For HRRP recognition, the proposed method is then employed to combine two Gaussian kernels based on the l1-norm and l2-norm distances to eliminate the speckle fluctuation. Unlike other kernel mixture models, in our method every element of the kernel matrix has a different coefficient because of the use of a data-dependent kernel [6], which is why we call it a fusion kernel. To show its performance, the method is applied to optimize the kernel of the KPCA-based HRRP RATR scheme proposed in [1].

Finally, the classification performance of features extracted by optimized KPCA is evaluated via support vector machines (SVMs) [7] based on the benchmark and radar-measured HRRP datasets.

2. PROPERTIES OF RADAR HRRP

The radar works in the optical region, and the electromagnetic characteristics of targets can be described by the scattering center target model, which is widely used and has been proved to be a suitable target model in SAR and ISAR applications. An HRRP is the coherent sum of the time returns from target scatterers located within each range resolution cell, and it represents the distribution of target scattering centers along the radar line of sight [3]. The mth complex returned echo in the nth range cell can be written as

$$x_n(m) = \sum_{i=1}^{I_n} \sigma_{n,i} \exp\!\left(-j\,\frac{4\pi R_{n,i}(m)}{\lambda} + j\theta_{n,i}\right), \qquad (2)$$

where $I_n$ denotes the number of target scatterers in the $n$th range cell, $R_{n,i}(m)$ denotes the distance between the radar and the $i$th scatterer in the $m$th sampled echo, $\lambda$ is the radar wavelength, and $\sigma_{n,i}$ and $\theta_{n,i}$ denote the amplitude and initial phase of the $i$th scatterer echo, respectively.
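As a concrete illustration of (2), the following minimal Python sketch forms the coherent sum of a few synthetic scatterers in one range cell; the scatterer parameters and wavelength are made-up illustrative values, not measured data.

```python
import numpy as np

def range_cell_echo(sigma, theta, R, wavelength):
    """Coherent sum of scatterer returns in one range cell, following (2):
    each scatterer contributes sigma_i * exp(-j*4*pi*R_i/lambda + j*theta_i)."""
    phase = -4.0 * np.pi * R / wavelength + theta
    return np.sum(sigma * np.exp(1j * phase))

# Illustrative example: three scatterers in one range cell, C-band wavelength ~5.6 cm.
rng = np.random.default_rng(0)
sigma = rng.uniform(0.5, 1.0, size=3)        # scatterer amplitudes
theta = rng.uniform(0.0, 2 * np.pi, size=3)  # initial phases
R = 1.0e4 + rng.uniform(0.0, 0.5, size=3)    # radar-scatterer distances (meters)

echo = range_cell_echo(sigma, theta, R, wavelength=0.056)
print(abs(echo))                             # amplitude of this range cell in the HRRP

# A sub-wavelength change in the distances (a small aspect change) can alter the
# coherent sum appreciably, which is the speckle effect described in the text.
print(abs(range_cell_echo(sigma, theta, R + 0.01, wavelength=0.056)))
```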

If the target orientation changes, its HRRP changes as well. Two phenomena are responsible for this. The first is the scatterers' motion through range cells (MTRC). If the target rotation angle is large enough, the range variation of a scatterer can exceed a range resolution cell and thus change the HRRP. The rotation angle at which MTRC occurs depends on the range resolution of the radar and the cross-range extent of the target. The second phenomenon is the HRRP speckle effect. Since an HRRP is the coherent summation of the echoes of multiple scatterers in one range cell, even if the target rotation angle is small enough to avoid MTRC, the phase of each scatterer echo will change, and hence their coherent summation will change as well.

If MTRC occurs, the target scattering center model changes, and more templates are required to represent the target HRRPs. As for the speckle effect, an effective HRRP similarity measure is needed to eliminate its influence on recognition performance, such as the l1-norm distance [8].

3. FUSION KERNEL BASED ON l1-NORM AND l2-NORM GAUSSIAN KERNELS

3.1. l1-norm and l2-norm Gaussian kernels

Due to the complicated nonlinear relations between radar targets, the Gaussian kernel is empirically chosen for HRRP recognition, which is supported by the empirical results in [1]. As noted above, a radar HRRP exhibits the speckle effect, especially for propeller-driven aircraft, whose running propellers modulate the echoes and cause large fluctuations. Usually, the l2-norm Gaussian kernel is used, which includes a square operation that amplifies the influence of large-valued elements in a vector and therefore also enhances the effect of speckle fluctuation on recognition. Since [8] shows that the l1-norm distance criterion can decrease the fluctuation produced by the propeller, the l1-norm Gaussian kernel can mitigate the speckle effect of HRRP:

$$K\bigl(X_1(t), X_2(t)\bigr) = \exp\bigl(-\gamma \left\| X_1(t) - X_2(t) \right\|_{l_1}\bigr), \qquad (3)$$

where $X_1(t)$ and $X_2(t)$ denote two individual HRRPs, and $\gamma$ is a kernel parameter, which can be determined by a particular criterion.

However, the useful information of an HRRP exists in only part of the range cells, and the rest is noise. Although the l1-norm distance can eliminate the speckle effect, it also drives up the side lobes, which increases the interference of the noise with the signal. The l2-norm distance, by contrast, works well in decreasing the noise effect, so we expect to combine the two Gaussian kernels based on the two different distance measures to learn a kernel function adaptive to HRRP data. In the next section, a kernel optimization method is given.
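The following sketch, under assumed placeholder data and γ values, contrasts the standard l2-norm Gaussian Gram matrix with the l1-norm Gaussian kernel of (3); the function name gaussian_kernel is illustrative.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma, norm="l2"):
    """Gram matrix of a Gaussian kernel between the rows of X and Y.

    norm="l2": K(x, y) = exp(-gamma * ||x - y||_2^2)   (the usual Gaussian kernel)
    norm="l1": K(x, y) = exp(-gamma * ||x - y||_1)     (the l1-norm kernel of (3))
    """
    diff = X[:, None, :] - Y[None, :, :]
    if norm == "l2":
        dist = np.sum(diff ** 2, axis=-1)
    else:
        dist = np.sum(np.abs(diff), axis=-1)
    return np.exp(-gamma * dist)

# Illustrative use on a handful of synthetic, aligned profiles (256 range cells each).
rng = np.random.default_rng(1)
X = np.abs(rng.standard_normal((5, 256)))
K_l1 = gaussian_kernel(X, X, gamma=0.01, norm="l1")   # damps speckle-driven spikes
K_l2 = gaussian_kernel(X, X, gamma=0.01, norm="l2")   # emphasizes large-valued cells
```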

3.2. Kernel optimization based on the fusion kernel in the empirical feature space

Although kernel-based methods such as KPCA [1] can represent complex nonlinear relations among targets, the choice of kernel and kernel parameters still greatly influences the classification performance. Obviously, a poor choice will degrade the final results. Ideally, we select the kernel based on our prior knowledge of the problem domain and restrict learning to the task of selecting the particular pattern function in the feature space defined by the chosen kernel. Unfortunately, it is not always possible to make the right choice of kernel a priori. Furthermore, there is no general kernel suitable for all datasets. Therefore, it is necessary to find a data-dependent objective function to evaluate kernel functions. The method by Xiong et al. [5] employs a data-dependent kernel similar to that used in [6] as the objective kernel to be optimized. In this section, we first review that kernel optimization method.

3.2.1. Kernel optimization based on a single kernel (SKO)

Given two-class training data $(x_1, z_1), (x_2, z_2), \ldots, (x_m, z_m) \in \mathbb{R}^d \times \{\pm 1\}$, where $x_i \in \mathbb{R}^d$ is the $i$th sample and $z_i \in \{\pm 1\}$ is its label, and given two data samples $x$ and $y$, a data-dependent kernel function is used,

$$k(x, y) = q(x)\, q(y)\, k_0(x, y), \qquad (4)$$

where $x, y \in \mathbb{R}^d$; $k_0(x, y)$, called the basic kernel, is an ordinary kernel such as a Gaussian or polynomial kernel; and $q(\cdot)$ is a factor function of the form

$$q(x) = \alpha_0 + \sum_{i=1}^{n} \alpha_i\, k_1(x, a_i), \qquad (5)$$

where $k_1(x, a_i) = \exp(-\gamma_1 \|x - a_i\|^2)$, and $\{a_i \in \mathbb{R}^d,\ i = 1, 2, \ldots, n\}$, called the "empirical cores," can be chosen from the training data or from local centers of the training data; the $\alpha_i$'s are the combination coefficients, which need to be normalized. According to [9, 10], the data-dependent kernel evidently satisfies the Mercer condition for a kernel function.

The kernel matrices corresponding to $k(x, y)$ and $k_0(x, y)$ are denoted by $K$ and $K_0$, respectively, so (4) can be rewritten as

$$K = Q K_0 Q, \qquad (6)$$

where $Q$ is a diagonal matrix whose diagonal elements are $\{q(x_1), q(x_2), \ldots, q(x_m)\}$. We denote the vectors $(q(x_1), q(x_2), \ldots, q(x_m))^T$ and $(\alpha_0, \alpha_1, \ldots, \alpha_n)^T$ by $\mathbf{q}$ and $\boldsymbol{\alpha}$, respectively. Then we have

$$\mathbf{q} = \begin{pmatrix} 1 & k_1(x_1, a_1) & \cdots & k_1(x_1, a_n) \\ 1 & k_1(x_2, a_1) & \cdots & k_1(x_2, a_n) \\ \vdots & \vdots & & \vdots \\ 1 & k_1(x_m, a_1) & \cdots & k_1(x_m, a_n) \end{pmatrix} \begin{pmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix} = K_1 \boldsymbol{\alpha}. \qquad (7)$$
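A minimal sketch of how the data-dependent kernel of (4)-(7) can be assembled is given below; the helper names (empirical_core_matrix, data_dependent_kernel) and the way the cores are picked are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def empirical_core_matrix(X, cores, gamma1):
    """K_1 of (7): a leading column of ones followed by k_1(x_i, a_j) = exp(-gamma1*||x_i - a_j||^2)."""
    d2 = np.sum((X[:, None, :] - cores[None, :, :]) ** 2, axis=-1)
    return np.hstack([np.ones((X.shape[0], 1)), np.exp(-gamma1 * d2)])

def data_dependent_kernel(K0, K1, alpha):
    """K = Q K_0 Q of (6), with q = K_1 alpha of (7) and Q = diag(q)."""
    q = K1 @ alpha
    return (q[:, None] * K0) * q[None, :]    # same as diag(q) @ K0 @ diag(q)

# Illustrative usage: cores taken from the training set, alpha initialized to uniform weights.
# X0 = ...            # training profiles (m x d), assumed available
# cores = X0[:20]     # empirical cores {a_i}
# K1 = empirical_core_matrix(X0, cores, gamma1=1.0)
# K  = data_dependent_kernel(K0, K1, np.ones(K1.shape[1]) / K1.shape[1])
```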

Here, the following quantity for measuring the class separability is used as the kernel quality function in the empirical feature space

$$J = \frac{\operatorname{trace}(S_b)}{\operatorname{trace}(S_w)}, \qquad (8)$$

where $S_b = \sum_{i=1}^{2} p_i (\mu_i - \mu)(\mu_i - \mu)^T$ is the "between-class scatter matrix" and $S_w = (1/n) \sum_{i=1}^{n} (x_i - \mu_{z_i})(x_i - \mu_{z_i})^T$ is the "within-class scatter matrix," where $\mu$ is the global mean vector, $\mu_i$ is the mean vector of the $i$th class, $\mu_{z_i}$ is the mean of the class to which $x_i$ belongs, and $p_i = n_i/n$ is the prior of the $i$th class. It is obvious that optimizing the kernel through $J$ increases the linear separability of the training data in the feature space, so that the performance of kernel machines is improved.

Now, for the sake of convenience, we assume that the first $m_1$ samples belong to class $C_1$, that is, $z_i = 1$ for $i \le m_1$, and the remaining $m_2$ samples belong to $C_2$ ($m_1 + m_2 = m$). Then the kernel matrix can be written as

$$K = \begin{pmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{pmatrix}, \qquad (9)$$

where $K_{11}$, $K_{12}$, $K_{21}$, and $K_{22}$ are the submatrices of $K$ of order $m_1 \times m_1$, $m_1 \times m_2$, $m_2 \times m_1$, and $m_2 \times m_2$, respectively. Now we can construct two kernel scatter matrices in the feature space as follows:

$$B = \begin{pmatrix} \dfrac{1}{m_1} K_{11} & 0 \\[4pt] 0 & \dfrac{1}{m_2} K_{22} \end{pmatrix} - \dfrac{1}{m} K, \qquad W = \operatorname{diag}\bigl(k_{11}, k_{22}, \ldots, k_{mm}\bigr) - \begin{pmatrix} \dfrac{1}{m_1} K_{11} & 0 \\[4pt] 0 & \dfrac{1}{m_2} K_{22} \end{pmatrix}, \qquad (10)$$

where $k_{11}, k_{22}, \ldots, k_{mm}$ are the diagonal elements of $K$.

Similarly, matrices $B_0$ and $W_0$ correspond to the basic kernel $K_0$. According to [5, Theorem 1], we can use the kernel scatter matrices to represent $J$:

$$J(\boldsymbol{\alpha}) = \frac{\mathbf{1}_m^T B\, \mathbf{1}_m}{\mathbf{1}_m^T W\, \mathbf{1}_m} = \frac{\mathbf{q}(\boldsymbol{\alpha})^T B_0\, \mathbf{q}(\boldsymbol{\alpha})}{\mathbf{q}(\boldsymbol{\alpha})^T W_0\, \mathbf{q}(\boldsymbol{\alpha})}, \qquad (11)$$

where $\mathbf{1}_m$ is the vector of ones of length $m$.

To maximize $J(\boldsymbol{\alpha})$, the standard gradient approach is employed, and an updating equation for maximizing the class separability $J$ is given as follows:

$$\boldsymbol{\alpha}_{(n+1)} = \boldsymbol{\alpha}_{(n)} + \eta \left( \frac{K_1^T B_0 K_1}{\mathbf{q}\bigl(\boldsymbol{\alpha}_{(n)}\bigr)^T W_0\, \mathbf{q}\bigl(\boldsymbol{\alpha}_{(n)}\bigr)} - J\bigl(\boldsymbol{\alpha}_{(n)}\bigr) \frac{K_1^T W_0 K_1}{\mathbf{q}\bigl(\boldsymbol{\alpha}_{(n)}\bigr)^T W_0\, \mathbf{q}\bigl(\boldsymbol{\alpha}_{(n)}\bigr)} \right) \boldsymbol{\alpha}_{(n)}, \qquad (12)$$

where $\eta$ is the learning rate. To ensure the convergence of the algorithm, a gradually decreasing learning rate is adopted:

$$\eta(t) = \eta_0 \left(1 - \frac{t}{N}\right), \qquad (13)$$

where $\eta_0$ is the initial learning rate, $N$ denotes a prespecified number of iterations, and $t$ is the current iteration number.
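Putting (10)-(13) together, the SKO iteration for the two-class case can be sketched as follows; this is only an illustrative implementation, assuming the first m1 training samples belong to class C1 and K1 is built as in (7).

```python
import numpy as np

def scatter_matrices(K, m1):
    """Kernel scatter matrices B and W of (10) for a two-class problem in which
    the first m1 rows/columns of the Gram matrix K belong to class C1."""
    m = K.shape[0]
    m2 = m - m1
    D = np.zeros_like(K)
    D[:m1, :m1] = K[:m1, :m1] / m1
    D[m1:, m1:] = K[m1:, m1:] / m2
    B = D - K / m
    W = np.diag(np.diag(K)) - D
    return B, W

def sko_optimize(K0, K1, m1, eta0=0.01, n_iter=200):
    """Single-kernel optimization (SKO): maximize J(alpha) of (11) with the
    gradient update (12) and the decreasing learning rate (13)."""
    B0, W0 = scatter_matrices(K0, m1)
    M = K1.T @ B0 @ K1                      # numerator matrix of J(alpha)
    N = K1.T @ W0 @ K1                      # denominator matrix of J(alpha)
    alpha = np.ones(K1.shape[1]) / K1.shape[1]
    for t in range(n_iter):
        denom = alpha @ N @ alpha           # = q(alpha)^T W_0 q(alpha)
        J = (alpha @ M @ alpha) / denom
        grad = (M @ alpha - J * (N @ alpha)) / denom
        alpha = alpha + eta0 * (1.0 - t / n_iter) * grad   # update (12) with eta(t) of (13)
    q = K1 @ alpha
    return alpha, (q[:, None] * K0) * q[None, :]           # optimized kernel K = Q K0 Q
```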

We utilize artificially-generated data in two dimensions in order to illustrate graphically the influence of kernel functions on classification. Both class 1 (denoted by "*") and class 2 ("◦") were generated from mixtures of two Gaussians by Ripley [11] with the classes overlapping to the extent that the Bayes error is around 8.0% and the linear SVM error is 10.5%.

KPCA was used to extract features with three initial "bad guesses" of kernel matrices (Gaussian kernels with γ = 10 and γ = 0.25, and a polynomial kernel with p = 2; for notational simplicity the three kernels are denoted G10, G0.25, and P2), all of which were normalized. A linear SVM was used as the classifier. Figure 1(a) shows the original distribution with a 125-example training set (randomly chosen from Ripley's original 250). Figure 1(b) shows the projection of the training set in the Gaussian-kernel (γ = 10) induced feature space. The test error on the associated 1000-example test set for KPCAG10 (26.8%) is far inferior to the original without KPCA, which indicates a mismatched kernel. Figure 1(c) shows the projection of the training set through KPCAG0.25; the test error for KPCAG0.25 (10.3%) is close to the original, which indicates a matched kernel. Figure 1(d) shows the projection of the training set through KPCAP2; the test error for KPCAP2 (10.7%) is slightly inferior to the original. Figures 1(e), 1(f), and 1(g) show the projections of the training set after SKO-KPCAG10, SKO-KPCAG0.25, and SKO-KPCAP2, respectively. The value of γ1 for the function k1(·, ·) in (5) used by SKO-KPCA was selected using cross-validation (CV). Fifty centers were selected to form the empirical core set {a_i}. The initial learning rate η0 was set to 0.01 and the total iteration number N to 200. The test errors for SKO-KPCAG10 (25.6%), SKO-KPCAG0.25 (9.4%), and SKO-KPCAP2 (10.1%) were superior to those before kernel optimization.

Figure 1: Ripley's Gaussian mixture data set and its projections in the empirical feature space onto the first two significant dimensions. (a) The original training data set. (b)-(d) two-dimensional projections of the original training data set, respectively, in G10, G0.25, and P2 kernel induced feature space. (e)-(g) two-dimensional projections of the original training data set, respectively, in G10, G0.25, and P2 kernel induced feature space after the single kernel optimization.


However, we can see that the performance of the SKO method strongly depends on, and is limited by, the initially selected kernel. Which kernel function should be selected to be optimized, a Gaussian kernel or another one? How can we learn a better kernel matrix from different kernels, corresponding to different physical interpretations, adapted to the input data? These problems are difficult to handle within the SKO method. In the next section, we generalize the SKO method to a kernel optimization algorithm based on a fusion kernel (FKO).

3.2.2. Kernel optimization based on the fusion kernel (FKO)

It is evident that the method by Xiong et al. [5] is effective in improving the performance of kernel machines, since it makes the targets more linearly separable in the feature space, and the experimental results in [5] confirm its validity. Nevertheless, that kernel optimization method is based on a single kernel: once a basic kernel function $K_0$ is chosen beforehand, the optimization is confined to the single embedding space it induces, so the optimization capability is necessarily limited. To generalize the method, we extend it to a more general kernel optimization approach by combining it with the idea of the fusion kernel mentioned above.

If we choose $L$ kernel functions, (6) can be represented as
$$K = \sum_{i=1}^{L} Q_i K_0^{(i)} Q_i, \qquad (14)$$
where $K_0^{(i)}$ is the $i$th basic kernel and $Q_i$ is the factor matrix corresponding to $K_0^{(i)}$. $B$ and $W$ are modified as
$$B^{\mathrm{fusion}} = \sum_{i=1}^{L} B_i, \qquad W^{\mathrm{fusion}} = \sum_{i=1}^{L} W_i, \qquad (15)$$
where, analogously to (10),
$$B_i = \begin{pmatrix} \dfrac{1}{m_1} K_{11}^{(i)} & 0 \\[4pt] 0 & \dfrac{1}{m_2} K_{22}^{(i)} \end{pmatrix} - \dfrac{1}{m} K^{(i)}, \qquad W_i = \operatorname{diag}\bigl(k_{11}^{(i)}, \ldots, k_{mm}^{(i)}\bigr) - \begin{pmatrix} \dfrac{1}{m_1} K_{11}^{(i)} & 0 \\[4pt] 0 & \dfrac{1}{m_2} K_{22}^{(i)} \end{pmatrix}. \qquad (16)$$

According to (11), the fusion kernel quality function $J^{\mathrm{fusion}}$ can be written as
$$J^{\mathrm{fusion}} = \frac{\mathbf{1}_m^T B^{\mathrm{fusion}} \mathbf{1}_m}{\mathbf{1}_m^T W^{\mathrm{fusion}} \mathbf{1}_m} = \frac{\sum_{i=1}^{L} \mathbf{q}_i^T B_0^{(i)} \mathbf{q}_i}{\sum_{i=1}^{L} \mathbf{q}_i^T W_0^{(i)} \mathbf{q}_i}, \qquad (17)$$
where
$$\mathbf{q}_i = \begin{pmatrix} 1 & k_1(x_1, a_1) & \cdots & k_1(x_1, a_n) \\ 1 & k_1(x_2, a_1) & \cdots & k_1(x_2, a_n) \\ \vdots & \vdots & & \vdots \\ 1 & k_1(x_m, a_1) & \cdots & k_1(x_m, a_n) \end{pmatrix} \boldsymbol{\alpha}^{(i)} = K_1 \boldsymbol{\alpha}^{(i)}. \qquad (18)$$
The matrices $B_0^{(i)}$ and $W_0^{(i)}$ correspond to the basic kernel $K_0^{(i)}$, and $\boldsymbol{\alpha}^{(i)}$ is the combination coefficient vector corresponding to $K_0^{(i)}$.

Therefore, (17) can be derived as
$$J^{\mathrm{fusion}} = \frac{\mathbf{q}^T B_0^{\mathrm{fusion}} \mathbf{q}}{\mathbf{q}^T W_0^{\mathrm{fusion}} \mathbf{q}}, \qquad (19)$$
where
$$B_0^{\mathrm{fusion}} = \begin{pmatrix} B_0^{(1)} & & 0 \\ & \ddots & \\ 0 & & B_0^{(L)} \end{pmatrix}, \qquad W_0^{\mathrm{fusion}} = \begin{pmatrix} W_0^{(1)} & & 0 \\ & \ddots & \\ 0 & & W_0^{(L)} \end{pmatrix}, \qquad \mathbf{q} = \begin{pmatrix} \mathbf{q}_1 \\ \vdots \\ \mathbf{q}_L \end{pmatrix} = \begin{pmatrix} K_1 & & 0 \\ & \ddots & \\ 0 & & K_1 \end{pmatrix} \begin{pmatrix} \boldsymbol{\alpha}^{(1)} \\ \vdots \\ \boldsymbol{\alpha}^{(L)} \end{pmatrix} = K_1^{\mathrm{fusion}} \boldsymbol{\alpha}^{\mathrm{fusion}}, \qquad (20)$$
and $K_1^{\mathrm{fusion}}$ is an $Lm \times L(n+1)$ matrix and $\boldsymbol{\alpha}^{\mathrm{fusion}}$ is a vector of length $L(n+1)$.

Obviously, the form of (19) is the same as that of the right-hand side of (11), so through (12) our result can also be given by the following:
$$\boldsymbol{\alpha}^{\mathrm{fusion}}_{(n+1)} = \boldsymbol{\alpha}^{\mathrm{fusion}}_{(n)} + \eta \left( \frac{\bigl(K_1^{\mathrm{fusion}}\bigr)^T B_0^{\mathrm{fusion}} K_1^{\mathrm{fusion}}}{\mathbf{q}\bigl(\boldsymbol{\alpha}^{\mathrm{fusion}}_{(n)}\bigr)^T W_0^{\mathrm{fusion}}\, \mathbf{q}\bigl(\boldsymbol{\alpha}^{\mathrm{fusion}}_{(n)}\bigr)} - J\bigl(\boldsymbol{\alpha}^{\mathrm{fusion}}_{(n)}\bigr) \frac{\bigl(K_1^{\mathrm{fusion}}\bigr)^T W_0^{\mathrm{fusion}} K_1^{\mathrm{fusion}}}{\mathbf{q}\bigl(\boldsymbol{\alpha}^{\mathrm{fusion}}_{(n)}\bigr)^T W_0^{\mathrm{fusion}}\, \mathbf{q}\bigl(\boldsymbol{\alpha}^{\mathrm{fusion}}_{(n)}\bigr)} \right) \boldsymbol{\alpha}^{\mathrm{fusion}}_{(n)}. \qquad (21)$$

Figure 2: The results of Ripley's data after FKO. (a) Two-dimensional projection of the training data in the optimized feature space. (b) The combination coefficients $\alpha^{\mathrm{fusion}}$.

When L = 1, it is just the single kernel optimization.
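The block-diagonal structure in (19)-(21) suggests a straightforward implementation: stack the per-kernel scatter matrices and empirical-core matrices block-diagonally and reuse the single-kernel update. The sketch below, which reuses the scatter_matrices helper from the SKO sketch above, is illustrative only.

```python
import numpy as np
from scipy.linalg import block_diag

def fko_optimize(K0_list, K1, m1, eta0=0.01, n_iter=200):
    """Fusion-kernel optimization (FKO) sketch: build the block-diagonal matrices of
    (19)-(20) and apply the same gradient update as SKO, cf. (21).
    Assumes scatter_matrices() from the SKO sketch above is in scope."""
    Bs, Ws = zip(*(scatter_matrices(K0, m1) for K0 in K0_list))   # B_0^(i), W_0^(i)
    B0f, W0f = block_diag(*Bs), block_diag(*Ws)
    K1f = block_diag(*([K1] * len(K0_list)))                      # K_1^fusion: Lm x L(n+1)
    M, N = K1f.T @ B0f @ K1f, K1f.T @ W0f @ K1f
    alpha = np.ones(K1f.shape[1]) / K1f.shape[1]                  # alpha^fusion
    for t in range(n_iter):
        denom = alpha @ N @ alpha
        J = (alpha @ M @ alpha) / denom
        alpha = alpha + eta0 * (1.0 - t / n_iter) * (M @ alpha - J * (N @ alpha)) / denom
    # Assemble the fused kernel K = sum_i Q_i K_0^(i) Q_i of (14), with q_i = K_1 alpha^(i).
    p = K1.shape[1]
    K = sum((K1 @ alpha[i * p:(i + 1) * p])[:, None]
            * K0 * (K1 @ alpha[i * p:(i + 1) * p])[None, :]
            for i, K0 in enumerate(K0_list))
    return alpha, K
```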

Figure 2(a) shows the projection of the training set in the empirical feature space after FKO. The parameters were the same as those of the single kernel optimization, and the classifier was still a linear SVM. The test error (9.0%) demonstrates the improvement in classification performance. Meanwhile, Figure 2(b) shows the combination coefficients of the three initial kernels, $\alpha^{\mathrm{fusion}}$. From Figure 2(b), we can clearly see that after kernel optimization the combination coefficients of the mismatched kernel G10 are far smaller than the others. Equivalently, our method automatically selected the G0.25 and P2 kernels to be optimized, both of which match Ripley's data for classification.

4. EXPERIMENTAL RESULTS

4.1. Benchmark datasets

In order to evaluate the performance of our method, we first test it on four benchmark datasets, namely ionosphere, Pima Indians diabetes, liver disorders, and Wisconsin breast cancer (WBC, where the 16 samples with missing values have been removed), downloaded from the UCI benchmark repository [12]. Except for the Pima data, which comes with separate training and test sets, each dataset is randomly partitioned into two equal and disjoint parts, used respectively as training and test sets, in order to evaluate the true performance.

As above, the kernel optimization methods were applied to KPCA, and a linear SVM classifier was used to evaluate the classification performance. We used a Gaussian kernel function $K_1$, a polynomial kernel function $K_2(x, y) = ((x^T \cdot y) + 1)^p$, and a linear kernel function $K_3(x, y) = x^T \cdot y$ as the initial basic kernels, and all kernels were normalized. First, the kernel parameters of the three kernel functions for KPCA without kernel optimization were selected by 10-fold cross-validation; the chosen kernel functions were then applied as the basic kernels in (4). The parameter $\gamma_1$ in (5) was also selected using 10-fold cross-validation. Twenty local centers were selected to form the empirical core set $\{a_i\}$. The initial learning rate $\eta_0$ was set to 0.08 and the total iteration number $N$ to 400. The procedure for determining the parameters of SKO was the same as for FKO.

Experimental results on the benchmark data are summarized in Table 1. It is evident that FKO can further improve the classification performance and performs at least as well as the SKO method. The combination coefficients of the three kernels in the four experiments are also illustrated in Figure 3. We find that the combination coefficients of FKO depend on the classification performance of the corresponding kernel in SKO: as shown in Figure 3, the better a kernel works after SKO optimization, the larger its combination coefficients in FKO. Apparently, FKO can automatically combine the three fixed-parameter kernels.

4.2. Measured high-resolution range profile (HRRP) radar data set

The data used to further evaluate the classification performance are measured from a C-band radar with a bandwidth of 400 MHz. The high range resolution profile (HRRP) data of three airplanes, An-26, Yark-42, and Cessna Citation S/II, are measured continuously while the targets are flying.

Figure 3: The combination coefficients corresponding to four datasets. (a) BCW; (b) pima; (c) liver; (d) ionosphere.

The projections of the target trajectories onto the ground plane are shown in Figure 4. The measured data of each target are divided into several segments, and the training and test data are chosen from different segments, so the target orientations of the test data differ from those of the training data; the maximum elevation difference between the test and training data is about 5 degrees. The 2nd and 5th segments of Yark-42, the 5th and 6th segments of An-26, and the 6th and 7th segments of Cessna Citation S/II are chosen as the training data, 300 profiles in total; all the remaining segments are used as test data, 2400 profiles in total. In the kernel optimization, 50 local centers from the training data are used as empirical cores. Additionally, the original HRRPs are preprocessed by a power transformation (PT) to improve the classification performance, defined as

$$\widetilde{X}(t) = X(t)^{\nu}, \qquad 0 < \nu < 1, \qquad (22)$$

Figure 4: The projection of target trajectories onto the ground plane. (a) Yark-42, (b) An-26, (c) Cessna Citation S/II.

Table 1: Comparison of recognition rates of the different methods in the different experiments. K1, K2, and K3 correspond to the Gaussian, polynomial, and linear kernels, respectively.

Dataset      Method            K1        K2        K3
BCW          KPCA              88.96%    90.45%    88.58%
             KPCA with SKO     88.96%    96.94%    97.1%
             KPCA with FKO     97.33% (single fused kernel)
Pima         KPCA              73.72%    64.15%    63.48%
             KPCA with SKO     73.72%    66.21%    64.63%
             KPCA with FKO     74.10% (single fused kernel)
Liver        KPCA              71.19%    69.47%    66.19%
             KPCA with SKO     74.67%    73.36%    73.47%
             KPCA with FKO     75.17% (single fused kernel)
Ionosphere   KPCA              93.11%    93.11%    89.73%
             KPCA with SKO     93.11%    93.55%    89.38%
             KPCA with FKO     93.55% (single fused kernel)

where X(t) denotes an individual HRRP. PT improves the classification performance because the non-normally distributed original HRRPs become approximately normally distributed after PT, which brings the performance of many classifiers closer to optimal. From the viewpoint of HRRP physical properties, PT amplifies the weaker echoes and compresses the stronger echoes so as to decrease the speckle effect when measuring HRRP similarity. Details about PT can be found in [13].

One-against-all linear SVM classifiers are trained on the feature vectors extracted by SKO-KPCA, FKO-KPCA, and KPCA without kernel optimization, respectively. The parameters used in the experiments are listed in Table 2. The experimental results are shown in Figure 5, where the x-axis represents the number of principal components and the y-axis the recognition rate.
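For readers who wish to reproduce the general flow of this evaluation, a hedged sketch of the pipeline (PT preprocessing as in (22), a precomputed kernel fed to KPCA, and one-against-all linear SVMs via scikit-learn) is shown below; the synthetic profiles, the exponent v, γ, and the number of components are placeholders standing in for the measured data and tuned parameters.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import LinearSVC

# Synthetic placeholders standing in for the aligned, measured HRRPs (300 training
# and 2400 test profiles of three targets); replace with the real data.
rng = np.random.default_rng(0)
X_train = np.abs(rng.standard_normal((300, 256)))
y_train = rng.integers(0, 3, size=300)
X_test = np.abs(rng.standard_normal((2400, 256)))
y_test = rng.integers(0, 3, size=2400)

v = 0.5                                  # PT exponent of (22), 0 < v < 1 (illustrative)
Xtr, Xte = X_train ** v, X_test ** v     # power-transformed profiles

def sq_dists(A, B):
    """Pairwise squared Euclidean distances between rows of A and B."""
    return (np.sum(A ** 2, axis=1)[:, None]
            + np.sum(B ** 2, axis=1)[None, :] - 2.0 * A @ B.T)

# Precomputed Gram matrices; a plain l2 Gaussian kernel is inlined here for
# self-containment, but the l1-norm or FKO-optimized fusion kernel from the
# earlier sketches fits the same slots.
gamma = 0.001
K_tr = np.exp(-gamma * sq_dists(Xtr, Xtr))
K_te = np.exp(-gamma * sq_dists(Xte, Xtr))

kpca = KernelPCA(n_components=90, kernel="precomputed")  # KPCA on the precomputed kernel
F_tr = kpca.fit_transform(K_tr)
F_te = kpca.transform(K_te)

clf = LinearSVC()                        # one-vs-rest linear SVMs for the three targets
clf.fit(F_tr, y_train)
print("accuracy:", np.mean(clf.predict(F_te) == y_test))
```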

Figures 5(a)-5(c) show the recognition rates of each target for the different KPCA methods, and Figure 5(d) shows the average recognition rates.

Table 2: The parameters in the experiment.

             γ        Empirical centers no.   γ1     η0       Iteration no.
KPCA1        0.001
KPCA2        0.001
SKO-KPCA1    0.001    50                      1      0.02     200
SKO-KPCA2    0.001    50                      1      0.001    200
FKO-KPCA     0.001    50                      1      0.0003   200

Note: KPCA1 and KPCA2 correspond to KPCA with l1-norm and l2-norm Gaussian kernels; SKO-KPCA1 and SKO-KPCA2 correspond to KPCA with l1-norm and l2-norm Gaussian kernels after single kernel optimization; FKO-KPCA represents KPCA after fusion kernel optimization based on the l1-norm and l2-norm Gaussian kernels.

From Figures 5(a) and 5(d), we can find that KPCA with the l1-norm and fusion Gaussian kernels performs better than KPCA with the l2-norm Gaussian kernel, owing to the different performances on the An-26. Because the propellers of the An-26 modulate the HRRPs, the speckle effect persists even within a small angle sector; the l1-norm Gaussian kernel can therefore be employed to suppress the large fluctuations and improve the recognition performance. By fusing in the l1-norm Gaussian kernel, FKO-KPCA also works well. Meanwhile, Figure 5(d) shows that the recognition rate of the l1-norm Gaussian kernel SKO-KPCA reaches 96.30% when the number of principal components equals 140, while the FKO-KPCA method needs only 90 components to reach its best classification rate of 96.27%. Since fewer components mean lower computational complexity, FKO-KPCA can extract effective features that reduce the computational complexity compared with KPCA using the l1-norm Gaussian kernel. Why can FKO-KPCA outperform KPCA with the l1-norm Gaussian kernel? In our opinion, the likely reason is that, while restraining the speckle effect, the l1-norm distance also amplifies the noise interfering with the signal; FKO-KPCA therefore achieves better performance on the Cessna and Yark-42 than KPCA with the l1-norm Gaussian kernel, as shown in Figures 5(b) and 5(c), which suggests that our optimization method can adaptively combine the characteristics of the two kinds of kernels.

Figure 5: Recognition rates on the measured radar HRRP data versus the number of principal components in the three experiments. (a) An-26; (b) Cessna; (c) Yark-42; (d) average recognition rates.

From Figure 5(d), we can also observe that SKO-KPCA cannot effectively improve on the original KPCA, and the l1-norm SKO-KPCA even decreased the recognition rates.

5. CONCLUSIONS

In this paper, a kernel optimization method with learning ability for radar HRRP recognition is proposed. The method can adaptively combine the different characteristics of the l1-norm and l2-norm Gaussian kernels, so that not only is the kernel function optimized but also the speckle fluctuations of HRRP are restrained. Because a kernel function adaptive to the data is used, each element in the kernel matrix has its own independent coefficient, which is why the method is called fusion kernel optimization. The classification performance of the features extracted by the optimized KPCA is analyzed and compared via support vector machines (SVMs) on benchmark and measured HRRP datasets, which demonstrates the efficiency of our method.

ACKNOWLEDGMENT

This work is supported by the National Science Foundation of China (no. 60302009).

REFERENCES

[1] B. Chen, H. Liu, and Z. Bao, "PCA and kernel PCA for radar high range resolution profiles recognition," in Proceedings of IEEE International Radar Conference, pp. 528-533, Arlington, Va, USA, May 2005.

[2] B. Chen, H. Liu, and Z. Bao, "An efficient kernel optimization method for high range resolution profile recognition," in Proceedings of IEEE International Radar Conference, pp. 1440-1443, Shanghai, China, October 2006.

[3] L. Du, H. Liu, Z. Bao, and M. Xing, "Radar HRRP target recognition based on higher order spectra," IEEE Transactions on Signal Processing, vol. 53, no. 7, pp. 2359-2368, 2005.

[4] L. Du, H. Liu, Z. Bao, and J. Zhang, "A two-distribution compounded statistical model for radar HRRP target recognition," IEEE Transactions on Signal Processing, vol. 54, no. 6, pp. 2226-2238, 2006.

[5] H. Xiong, M. N. S. Swamy, and M. O. Ahmad, "Optimizing the kernel in the empirical feature space," IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 460-474, 2005.

[6] S. Amari and S. Wu, "Improving support vector machine classifiers by modifying kernel functions," Neural Networks, vol. 12, no. 6, pp. 783-789, 1999.

[7] V. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 1995.

[8] Z. Bao, M. Xing, and T. Wang, Radar Imaging Technique, Publishing House of Electronics Industry, Beijing, China, 2005.

[9] B. Schölkopf, S. Mika, C. J. C. Burges, et al., "Input space versus feature space in kernel-based methods," IEEE Transactions on Neural Networks, vol. 10, no. 5, pp. 1000-1017, 1999.

[10] J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis, Cambridge University Press, Cambridge, UK, 2004.

[11] B. D. Ripley, Pattern Recognition and Neural Networks, Cambridge University Press, Cambridge, UK, 1996.

[12] C. Blake, E. Keogh, and C. J. Merz, "UCI repository of machine learning databases," Tech. Rep., Department of Information and Computer Science, University of California, Irvine, Calif, USA, 1998. http://www.ics.uci.edu/~mlearn/ MLRepository.html.

[13] H. Liu and Z. Bao, "Radar HRR profiles recognition based on SVM with power-transformed-correlation kernel," in Proceedings of International Symposium on Neural Networks (ISNN '04), vol. 3173 of Lecture Notes in Computer Science, pp. 531-536, Dalian, China, August 2004.

Hongwei Liu received his M.S. and Ph.D. degrees, both in electronic engineering, from Xidian University in 1995 and 1999, respectively. He has been with the National Key Lab of Radar Signal Processing, Xidian University, since 1999. From 2001 to 2002, he was a visiting scholar at the Department of Electrical and Computer Engineering, Duke University, USA. He is currently a Professor and Director of the National Key Lab of Radar Signal Processing, Xidian University. His research interests are radar automatic target recognition, radar signal processing, and adaptive signal processing. He is with the Key Laboratory for Radar Signal Processing, Xidian University, Xi'an, China.

Zheng Bao graduated from the Communication Engineering Institution of China in 1953. Currently, he is a Professor at Xidian University and an Academician of the Chinese Academy of Sciences. He is the author or coauthor of six books and has published more than 300 papers. His research work now focuses on the areas of space-time adaptive processing, radar imaging, and radar automatic target recognition. He is with the Key Laboratory for Radar Signal Processing, Xidian University, Xi'an, China.

Bo Chen received his B.Eng. and M.Eng. degrees in electronic engineering from Xidian University in 2003 and 2006, respectively. He is currently a Ph.D. student in the National Key Lab of Radar Signal Processing, Xidian University. His research interests include radar signal processing, radar automatic target recognition, and kernel machines.