
Hindawi Publishing Corporation, Journal of Probability and Statistics, Volume 2014, Article ID 203469, 11 pages. http://dx.doi.org/10.1155/2014/203469

Research Article

Direct Determination of Smoothing Parameter for Penalized Spline Regression

Takuma Yoshida

Graduate School of Science and Engineering, Kagoshima University, Kagoshima 890-8580, Japan. Correspondence should be addressed to Takuma Yoshida; yoshida@sci.kagoshima-u.ac.jp. Received 7 January 2014; Revised 31 March 2014; Accepted 31 March 2014; Published 22 April 2014. Academic Editor: Dejian Lai

Copyright © 2014 Takuma Yoshida. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The penalized spline estimator is a useful smoothing method. To construct an estimator that combines goodness of fit and smoothness, the smoothing parameter should be selected appropriately. The purpose of this paper is to select the smoothing parameter using the asymptotic properties of penalized splines. The new smoothing parameter selection method is established by minimizing the asymptotic form of the MISE of the penalized spline estimator. The mathematical and numerical properties of the proposed method are studied. We first develop the method for the univariate regression model and then extend it to additive models. A simulation study confirming the efficiency of the proposed method is presented.

1. Introduction

Penalized spline methods are a well-known efficient technique for nonparametric smoothing. Penalized splines were suggested by O'Sullivan [1] and Eilers and Marx [2]. O'Sullivan [1] used a cubic B-spline function with the integrated squared second derivative of the B-spline function as the penalty. On the other hand, Eilers and Marx [2] used a cubic B-spline function and a difference penalty on the spline coefficients. Eilers and Marx's estimator is computationally more efficient than smoothing splines and O'Sullivan's estimator since it removes the integration from the penalty. Hence this paper focuses on the penalized spline estimator proposed by Eilers and Marx [2]. The penalized spline method is efficient for both univariate regression and multiple regression models such as the additive model (see Marx and Eilers [3]). General properties, usages, and the flexibility of penalized splines are described in Ruppert et al. [4].

When using penalized splines, the determination of the smoothing parameter is very important since it controls the trade-off between the goodness of fit and the smoothness of the fitted curve. The classical way of achieving this is a grid search: one criterion is minimized over a set of candidate values of the smoothing parameter. Criteria for grid searches include cross-validation, generalized cross-validation, Mallows' C_p, and so forth. Although the grid search generally finds one optimal smoothing parameter, a poor curve may be obtained if none of the candidates is good. This tendency is especially striking in additive models, since the number of smoothing parameters equals the number of covariates. Several smoothing parameter selection methods using grid search criteria have been developed by many authors, such as Krivobokova [5], Reiss and Ogden [6], Wood [7], Wood [8], and Wood [9]. On the other hand, the mixed model representation of spline smoothing has also been studied (see Lin and Zhang [10], Wand [11], and Ruppert et al. [4]). In mixed models, a grid search is not necessary to obtain the final fitted curve. The smoothing parameter in the mixed model can be written as the ratio of the variance of the random coefficients to the error variance. By estimating these unknown variances using a maximum likelihood method or a restricted maximum likelihood method (REML), the final fitted curve is obtained as the estimated best linear unbiased predictor (EBLUP). Therefore the EBLUP does not require a grid search. However, the fitted curve tends to oversmooth theoretically, and numerical stability is not guaranteed if a cubic spline is used (see Section 3). The Bayesian approach

to select the smoothing parameter has been studied by Fahrmeir et al. [12], Fahrmeir and Kneib [13], and Heinzl et al. [14]. Kauermann [15] compared some smoothing parameter selection methods.

In this paper, we propose a new method for determining the smoothing parameter using the asymptotic properties of penalized splines. For the remainder of this paper, our new method is referred to as the direct method. Before describing the outline of the direct method, we briefly introduce asymptotic studies of penalized splines. First, Hall and Opsomer [16] showed the consistency of the penalized spline estimator in a white noise representation. Subsequently, Li and Ruppert [17], Claeskens et al. [18], Kauermann et al. [19], and Wang et al. [20] developed the asymptotics for the penalized spline estimator in univariate regression. Yoshida and Naito [21] and Yoshida and Naito [22] studied the asymptotics for penalized splines in additive regression models and generalized additive models, respectively. Xiao et al. [23] suggested a new penalized spline estimator and developed its asymptotic properties in bivariate regression. Thus, the asymptotic theory of penalized splines is a relatively recent development. In addition, smoothing parameter selection methods based on asymptotic properties have not yet been studied. This motivates us to establish such a method.

The direct method is based on minimizing the mean integrated squared error (MISE) of the penalized spline estimator. In general, the MISE of a nonparametric estimator splits into the integrated squared bias and the integrated variance of the estimator. The penalized spline estimator is no exception, and hence the direct method is formulated using the expressions for the asymptotic bias and variance of the penalized spline estimator, which have been derived by Claeskens et al. [18], Kauermann et al. [19], and Yoshida and Naito [22]. From their results, we see that the leading asymptotic order of the variance of the penalized spline estimator depends only on the sample size and the number of knots, not on the smoothing parameter. However, the second term of the asymptotic variance contains the smoothing parameter, and the variance becomes small when the smoothing parameter increases. On the other hand, the squared bias of the penalized spline estimator increases as the smoothing parameter increases. Therefore the minimizer of the MISE of the penalized spline estimator can be seen as an optimal smoothing parameter. Since the MISE is asymptotically convex with respect to the smoothing parameter, the global minimum of the MISE can be found. A similar approach has been well developed for bandwidth selection in kernel regression (see Ruppert et al. [24], Wand and Jones [25], etc.). The present paper first focuses on univariate regression, and we then extend the direct method to additive models. In both models, the mathematical and numerical properties of the direct method are studied. In additive models, we need to select as many smoothing parameters as there are explanatory variables, so the computational cost of a grid search becomes large. We expect the computational cost of the direct method to be dramatically smaller than that of a grid search.

The structure of this paper is as follows. In Section 2, we introduce a penalized spline estimator in a univariate regression model. Section 3 provides the direct method and related properties. Section 4 extends the direct method to the additive model. In Section 5, we confirm the performance of the direct method using a numerical study. We provide a discussion on the outlook and further studies in Section 6. The proofs of our theorems are provided in the appendix.

2. Penalized Spline Estimator

Consider the regression problem with n observations,

Y_i = f(x_i) + ε_i,  i = 1, ..., n,  (1)

where Y_i is the response variable, x_i is the explanatory variable, f is the true regression function, and ε_i is a random error, assumed to be independently distributed with mean 0 and variance σ². Throughout the paper we assume that the explanatory variable x_i ∈ [a, b] (i = 1, ..., n) is not random, so that the expectation of Y_i can be expressed as E[Y_i] = f(x_i). The support of the explanatory variable could be relaxed to the real line R; the compact support is assumed only to simplify the placement of knots in what follows. We aim to estimate f via a nonparametric penalized spline method. We consider knots a = κ_0 < κ_1 < ··· < κ_K < κ_{K+1} = b and, for k = -p, ..., K-1, let B_k^{[p]}(x) be the pth degree B-spline basis function defined recursively as

B_k^{[0]}(x) = 1 if κ_k ≤ x < κ_{k+1}, and 0 otherwise,

B_k^{[p]}(x) = ((x − κ_k)/(κ_{k+p} − κ_k)) B_k^{[p-1]}(x) + ((κ_{k+p+1} − x)/(κ_{k+p+1} − κ_{k+1})) B_{k+1}^{[p-1]}(x),  (2)

associated with the above knots and the additional knots κ_{-p} = κ_{-p+1} = ··· = κ_0 and κ_{K+1} = ··· = κ_{K+p+1}. The B-spline function B_k^{[p]}(x) is a piecewise pth degree polynomial on the interval [κ_k, κ_{k+p+1}]. The details of B-spline basis functions are described in de Boor [26]. For simplicity, we write B_k(x) = B_k^{[p]}(x), since we do not need to specify p in what follows.
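As a concrete illustration, this recursion is what standard B-spline routines implement. A minimal sketch in R (the computing environment used in Section 5), assuming equidistant knots on [a, b] with the boundary knots repeated as described above; the function name is ours:

    library(splines)

    # Evaluate the pth degree B-spline basis B_{-p}(x), ..., B_{K-1}(x) on [a, b]
    # with K + 1 equidistant knots and each boundary knot repeated p extra times.
    bspline_basis <- function(x, K, p, a = 0, b = 1) {
      knots <- c(rep(a, p), seq(a, b, length.out = K + 1), rep(b, p))
      splineDesign(knots, x, ord = p + 1)
    }

    xg <- seq(0, 1, length.out = 200)
    B  <- bspline_basis(xg, K = 10, p = 3)   # 200 x (K + p) matrix
    range(rowSums(B))                        # partition of unity: all values equal 1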

We use a linear combination of {B_k(x) : k = -p, ..., K-1} with unknown parameters b_k (k = -p, ..., K-1) to approximate the regression function and consider the B-spline regression problem

Y_i = s(x_i) + ε_i,  i = 1, ..., n,  (3)

s(x) = ∑_{k=-p}^{K-1} B_k(x) b_k.  (4)

The purpose is to estimate the parameter vector b = (b_{-p} ··· b_{K-1})^T included in s(x) instead of f directly.

The penalized spline estimator b̂ = (b̂_{-p} ··· b̂_{K-1})^T of b is defined as the minimizer of

∑_{i=1}^{n} {Y_i − ∑_{k=-p}^{K-1} B_k(x_i) b_k}² + λ ∑_{k=m-p}^{K-1} {Δ^m(b_k)}²,  (5)

where λ is the smoothing parameter and Δ is the backward difference operator, defined by Δb_k = b_k − b_{k-1} and

Δ^m(b_k) = Δ^{m-1}(Δb_k) = ∑_{j=0}^{m} (−1)^{m−j} mC_j b_{k-m+j}.  (6)

Let D_m = (d_{ij}^{(m)})_{ij} be the (K + p − m) × (K + p) matrix with d_{ij}^{(m)} = (−1)^{j−i} mC_{j−i} for i ≤ j ≤ i + m and 0 otherwise. Using the notation Y = (Y_1 ··· Y_n)^T and Z = (B_{-p+j-1}(x_i))_{ij}, (5) can then be expressed as

(Y − Zb)^T (Y − Zb) + λ b^T D_m^T D_m b.  (7)

The minimum of (7) is attained at

b̂ = (Z^T Z + λ D_m^T D_m)^{-1} Z^T Y.  (8)

The penalized spline estimator f̂(x) of f(x) for x ∈ [a, b] is defined as

f̂(x) = ∑_{k=-p}^{K-1} B_k(x) b̂_k = B(x)^T b̂,  (9)

where B(x) = (B_{-p}(x) ··· B_{K-1}(x))^T.

If λ = 0, f̂(x) reduces to the regression spline estimator, that is, the spline estimator obtained by ordinary least squares. The regression spline estimator leads to an oscillatory fit if the number of knots K is large, yet the determination of K and of the knot locations is a very difficult problem. The advantage of penalized spline smoothing is that a good smoothing parameter yields an estimator with fitness and smoothness simultaneously, without having to choose the number and location of knots precisely. In the present paper we use equidistant knots κ_k = a + k/K and focus on the determination of the smoothing parameter. As an alternative knot placement, the quantiles of the data points {x_1, ..., x_n} are often used (see Ruppert [27]). However, it is known that the penalized spline estimator is hardly affected by the location of knots if K is not too small, so we do not discuss knot location further. We propose the direct method for determining the smoothing parameter in the next section.
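As a small illustration of (8)-(9), the following R sketch (our own code and naming, not the author's implementation) builds the B-spline design matrix Z and the mth order difference matrix D_m and computes the penalized fit for a given smoothing parameter λ:

    library(splines)

    # Penalized spline fit (8)-(9) for a given smoothing parameter lambda.
    # p = spline degree, m = order of the difference penalty, K + 1 equidistant knots on [a, b].
    pspline_fit <- function(x, y, lambda, K, p = 3, m = 2, a = 0, b = 1) {
      knots <- c(rep(a, p), seq(a, b, length.out = K + 1), rep(b, p))
      Z <- splineDesign(knots, x, ord = p + 1)           # n x (K + p) design matrix
      D <- diff(diag(ncol(Z)), differences = m)          # (K + p - m) x (K + p) difference matrix
      bhat <- solve(crossprod(Z) + lambda * crossprod(D), crossprod(Z, y))   # (8)
      list(coef = bhat, knots = knots, ord = p + 1)
    }

    # Example with the test function F1 of Section 5.
    set.seed(1)
    n <- 200
    x <- runif(n)
    y <- cos(pi * (x - 0.3)) + rnorm(n, sd = 0.5)
    fit  <- pspline_fit(x, y, lambda = 1, K = ceiling(5 * n^(2/5)))
    fhat <- splineDesign(fit$knots, sort(x), ord = fit$ord) %*% fit$coef     # (9) on sorted x

Here λ is still supplied by the user; the direct method developed in the next section replaces this choice.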

3. Direct Determination of Smoothing Parameter

In this section, we provide the direct method for determining the smoothing parameter without a grid search. The direct method is given a theoretical justification by the asymptotic theory of the penalized spline estimator. To investigate the asymptotic properties of the penalized spline estimator, we assume that f ∈ C^{p+1}, K = o(n^{1/2}), and λ = O(n/K^{1-m}).

For convenience we first give some notation. Let G_n = n^{-1} Z^T Z and Λ_n = G_n + (λ/n) D_m^T D_m. Let b^* be a best L_∞ approximation to the true function f; that is, b^* satisfies

sup_{x∈[a,b]} |f(x) + K^{-(p+1)} b_a(x) − B(x)^T b^*| = o(K^{-(p+1)}),  as K → ∞,  (10)

where

b_a(x) = − (f^{(p+1)}(x)/(p+1)!) ∑_{k=1}^{K+1} I(κ_{k-1} ≤ x < κ_k) Br_{p+1}((x − κ_{k-1})/(κ_k − κ_{k-1})),  (11)

I(a < x < b) is the indicator function of the interval (a, b), and Br_p(x) is the pth Bernoulli polynomial (see Zhou et al. [28]). It can easily be shown that b_a(x) = O(1) as K → ∞. The penalized spline estimator can be written as

f̂(x) = B(x)^T (Z^T Z)^{-1} Z^T Y − (λ/n) B(x)^T Λ_n^{-1} D_m^T D_m G_n^{-1} (Z^T Y / n).  (12)

The first term on the right-hand side of (12) is equal to the regression spline estimator, denoted by f̂_rs(x). The asymptotics for the regression spline estimator have been developed by Zhou et al. [28] and can be expressed as

E[f̂_rs(x)] = f(x) + K^{-(p+1)} b_a(x) + o(K^{-(p+1)}),

V[f̂_rs(x)] = (σ²/n) B(x)^T G_n^{-1} B(x) {1 + o(1)} = O(K/n).  (13)

From Theorem 2(a) of Claeskens et al. [18], we have

E[f̂(x)] − E[f̂_rs(x)] = − (λ/n) B(x)^T Λ_n^{-1} D_m^T D_m b^* + o(λ K^{1-m}/n) = O(λ K^{1-m}/n),

V[f̂(x)] = V[f̂_rs(x)] − (λ/n) C(x | n, K, λ) + O(λ² K³/n³),  (14)

where

C(x | n, K, λ) = (σ²/n) B(x)^T Λ_n^{-1} D_m^T D_m G_n^{-1} B(x)  (15)

is the covariance of f̂_rs(x) and the second term of the right-hand side of (12). The variance of the second term of (12) can be shown to be negligible (see the appendix). The following theorem shows how λ controls the trade-off between the squared bias and the variance of the penalized spline estimator.

Theorem 1. The covariance C(x | n, K, λ)/K in (15) is positive. Furthermore, as n → ∞, C(x | n, K, λ) = C(x | n, K, 0)(1 + o(1)) and C(x | n, K, 0)/K = O(K/n).

From the asymptotic forms of E[f̂(x)] and V[f̂(x)] and Theorem 1, we see that, for small λ, the bias of f̂(x) is small and the variance becomes large. On the other hand, a large λ means that the bias of f̂(x) increases and the variance decreases. From Theorem 1, the MISE of f̂(x) can be expressed as

∫_a^b E[{f̂(x) − f(x)}²] dx
  = ∫_a^b {K^{-(p+1)} b_a(x) − (λ/n) B(x)^T Λ_n^{-1} D_m^T D_m b^*}² dx
  + (σ²/n) ∫_a^b B(x)^T G_n^{-1} B(x) dx − (λ/n) ∫_a^b C(x | n, K, 0) dx
  + r_{1n}(K) + r_{2n}(K, λ),  (16)

where r_{1n}(K) and r_{2n}(K, λ) are of negligible order compared with the regression spline part and the penalized (second) part of the right-hand side of (12), respectively. Actually we have r_{1n}(K) = o(K/n) and r_{2n}(K, λ) = o(λ² K^{2(1-m)}/n²). The MISE of f̂(x) is asymptotically quadratic in λ, so a global minimum exists. Let

MISE(λ) = (λ²/n²) ∫_a^b {B(x)^T Λ_n^{-1} D_m^T D_m b^*}² dx
  − (λ/n) ∫_a^b {(2 b_a(x)/K^{p+1}) B(x)^T Λ_n^{-1} D_m^T D_m b^* + C(x | n, K, 0)} dx  (17)

denote the part of the MISE that depends on λ, and let λ_opt be the minimizer of MISE(λ). We suggest the use of λ_opt as the optimal smoothing parameter,

λ_opt = n ∫_a^b D_{2n}(x | f^{(p+1)}, b^*) dx / (2 ∫_a^b D_{1n}(x | b^*) dx),  (18)

where

D_{1n}(x | b^*) = {B(x)^T G_n^{-1} D_m^T D_m b^*}²,  (19)

D_{2n}(x | f^{(p+1)}, b^*) = (2 b_a(x)/K^{p+1}) B(x)^T G_n^{-1} D_m^T D_m b^* + C(x | n, K, 0).  (20)

However, MISE(λ) and λ_opt contain an unknown function and unknown parameters, and hence these must be estimated. We construct an estimator of λ_opt by using consistent estimators of f^{(p+1)} and b^*. The penalized spline estimator and its derivative could be used as pilot estimators of f^{(p+1)} and b^*, but they would require another smoothing parameter λ_0, which would itself have to be chosen appropriately. Therefore we use the regression spline estimator as the pilot estimator of f^{(p+1)}(x) and b^*. First we set b̂ = (Z^T Z)^{-1} Z^T Y. Next we construct the pilot estimator of f^{(p+1)}(x) by using the (p+2)th degree B-spline basis. Let B^{[p]}(x) = (B_{-p}^{[p]}(x) ··· B_{K-1}^{[p]}(x))^T. Using a fundamental property of B-spline functions, f^{(m)}(x) can be written asymptotically as f^{(m)}(x) = B^{[p-m]}(x)^T D_m b^*. Hence the pilot estimator f̂^{(p+1)} can be constructed as f̂^{(p+1)}(x) = B^{[1]}(x)^T D_{p+1} b̂^{(p+2)}, where b̂^{(p+2)} = ((Z^{[p+2]})^T Z^{[p+2]})^{-1} (Z^{[p+2]})^T Y and Z^{[p+2]} = (B^{[p+2]}_{-(p+2)+j-1}(x_i))_{ij}. Since the regression spline estimator tends to be oscillatory when a high-degree spline is used, fewer knots are used to construct f̂^{(p+1)}(x). The variance σ² included in C(x | n, K, 0) is estimated via

σ̂² = (1/(n − (K + p))) ∑_{i=1}^{n} {Y_i − B(x_i)^T b̂}².

Using the above pilot estimators, we obtain

λ̂_opt = n ∑_{j=1}^{J} D_{2n}(z_j | f̂^{(p+1)}, b̂) / (2 ∑_{j=1}^{J} D_{1n}(z_j | b̂)),  (21)

with some finite set of grid points {z_j}_{j=1}^{J} on [a, b].

Consequently, the final penalized spline estimator is defined as

f̂(x) = B(x)^T b̂_opt,  (22)

b̂_opt = (Z^T Z + λ̂_opt D_m^T D_m)^{-1} Z^T Y.  (23)
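The whole procedure of this section can be sketched in a few lines of R. The code below is our own illustration under the reconstruction above, not the author's implementation: it takes p = 3 and m = 2, obtains the pilot f̂^{(p+1)} by differentiating a (p+2)th degree regression spline fitted with fewer knots, uses the fourth Bernoulli polynomial in b_a(x), and evaluates (21) on an equidistant grid.

    library(splines)

    bern4 <- function(u) u^4 - 2 * u^3 + u^2 - 1/30   # Br_{p+1} for p = 3 (4th Bernoulli polynomial)

    # Direct determination of the smoothing parameter, (18)-(21), for p = 3, m = 2.
    direct_lambda <- function(x, y, K, p = 3, m = 2, K_pilot = NULL, a = 0, b = 1, J = 100) {
      n <- length(x)
      knots <- c(rep(a, p), seq(a, b, length.out = K + 1), rep(b, p))
      Z <- splineDesign(knots, x, ord = p + 1)
      D <- diff(diag(ncol(Z)), differences = m)
      Ginv <- solve(crossprod(Z) / n)                       # G_n^{-1}

      # Pilot estimators: regression spline b-hat and sigma^2-hat,
      bhat <- solve(crossprod(Z), crossprod(Z, y))
      sig2 <- sum((y - Z %*% bhat)^2) / (n - (K + p))
      # and a (p+2)th degree regression spline with fewer knots for f^{(p+1)}.
      if (is.null(K_pilot)) K_pilot <- ceiling(n^(2/5))
      kn2 <- c(rep(a, p + 2), seq(a, b, length.out = K_pilot + 1), rep(b, p + 2))
      Z2 <- splineDesign(kn2, x, ord = p + 3)
      b2 <- solve(crossprod(Z2), crossprod(Z2, y))

      z   <- seq(a, b, length.out = J)                      # grid points z_1, ..., z_J
      Bz  <- splineDesign(knots, z, ord = p + 1)
      f4z <- splineDesign(kn2, z, ord = p + 3, derivs = rep(p + 1, J)) %*% b2

      # b_a(z) as in (11), with equidistant knots of width (b - a)/K.
      kap  <- seq(a, b, length.out = K + 1)
      left <- kap[pmin(findInterval(z, kap, rightmost.closed = TRUE), K)]
      ba   <- -(f4z / factorial(p + 1)) * bern4((z - left) / ((b - a) / K))

      u  <- Bz %*% Ginv %*% crossprod(D) %*% bhat           # B(z)' G_n^{-1} D_m' D_m b-hat
      C0 <- (sig2 / n) * rowSums((Bz %*% Ginv %*% crossprod(D) %*% Ginv) * Bz)  # C(z | n, K, 0)
      D1 <- u^2                                             # (19)
      D2 <- (2 * ba / K^(p + 1)) * u + C0                   # (20)
      n * sum(D2) / (2 * sum(D1))                           # (21)
    }

    # Final fit (22)-(23): plug lambda-hat into the penalized criterion,
    # for example via the pspline_fit() sketch of Section 2.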

It is known that the optimal order of K for the penalized spline estimator is the same as that for the regression spline estimator, K = O(n^{1/(2p+3)}) (see Kauermann et al. [19] and Zhou et al. [28]). Using this, we show the asymptotic property of λ̂_opt in the following theorem.

Theorem 2. Let f ∈ C^{p+1}. Suppose that K = o(n^{1/2}) and λ = o(n/K^{1-m}). Then λ̂_opt given in (21) exists, and λ̂_opt = O(n K^{m-p-2} + K^{2m}) as n → ∞. Furthermore, K = O(n^{1/(2p+3)}) leads to the optimal order

λ̂_opt = O(n^{(p+m+1)/(2p+3)}) + O(n^{2m/(2p+3)}),  (24)

and the rate of convergence of the MISE of f̂(x) becomes O(n^{-2(p+1)/(2p+3)}).

The proof of Theorem 2 is given in the appendix. At the end of this section, we give a few remarks.

Remark 3. The asymptotic orders of the squared bias and variance of the penalized spline estimator are O(K^{-2(p+1)} + λ² K^{2(1-m)}/n²) and O(K/n), respectively. Therefore, under K = O(n^{1/(2p+3)}) and λ = O(n^{(p+m+1)/(2p+3)}), the optimal rate of convergence of the MISE of the penalized spline estimator is O(n^{-2(p+1)/(2p+3)}). From Theorem 2, we see that the asymptotic order of λ̂_opt yields this optimal rate of convergence.

Remark 4. O'Sullivan [1] used γ ∫_a^b {s^{(m)}(x)}² dx as the penalty term, where γ is the smoothing parameter. When equidistant knots are used, the penalty ∫_a^b {s^{(m)}(x)}² dx can be expressed as K^{2m} b^T D_m^T R D_m b, where R = (∫_a^b B_i^{[p-m]}(x) B_j^{[p-m]}(x) dx)_{ij}. The penalty b^T D_m^T D_m b of Eilers and Marx [2] can be seen as a simplified version of K^{2m} b^T D_m^T R D_m b, obtained by replacing R with K^{-1} I and setting λ = γ K^{2m-1}, where I is the identity matrix. Thus the order m of the difference matrix D_m controls the smoothness of the pth degree B-spline function s(x), and hence m should be set such that m ≤ p for the theoretical justification to hold, although the penalized spline estimator can also be computed for m > p. In practice, (p, m) = (3, 2) is often used.
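A quick numerical illustration of this equivalence (our own sketch; the integral defining R is only approximated on a fine grid) checks that the entries of R indeed scale like K^{-1} for equidistant knots, so that replacing R by K^{-1} I only changes bounded factors once λ = γ K^{2m-1}:

    library(splines)

    p <- 3; m <- 2; K <- 20
    # Degree (p - m) B-spline basis on [0, 1] with K + 1 equidistant knots.
    knots <- c(rep(0, p - m), seq(0, 1, length.out = K + 1), rep(1, p - m))
    xg <- seq(0, 1, length.out = 5000)                 # fine grid for numerical integration
    Bg <- splineDesign(knots, xg, ord = p - m + 1)
    R  <- crossprod(Bg) / length(xg)                   # R_{ij} ~ integral of B_i^{[p-m]} B_j^{[p-m]}
    range(K * diag(R))                                 # diagonal entries of K * R stay bounded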

Remark 5. The penalized spline regression is often considered in its mixed model representation (see Ruppert et al. [4]). In this framework, we use the pth degree truncated spline model

s(x) = β_0 + β_1 x + ··· + β_p x^p + ∑_{k=1}^{K} u_k (x − κ_k)_+^p,  (25)

where (x − a)_+ = max{x − a, 0} and the β's are unknown parameters. Each u_k is independently distributed as u_k ~ N(0, σ_u²), where σ_u² is an unknown variance parameter. The penalized spline fit of f(x) is defined as the estimated BLUP (see Robinson [29]). The smoothing parameter in the ordinary spline regression model corresponds to σ²/σ_u² in the spline mixed model. Since σ² and σ_u² are estimated using the ML or REML method, we do not need to choose the smoothing parameter via a grid search. It is known that the estimated BLUP fit is linked to the penalized spline estimator (9) with m = p + 1 (see Kauermann et al. [19]). Hence the estimated BLUP tends to be theoretically underfit (see Remark 4).

Remark 6. From Lyapunov's theorem, the asymptotic normality of the penalized spline estimator f̂(x) with λ̂_opt can be derived under the same assumptions as Theorem 2 and some additional mild conditions. Although the proof is omitted, it is straightforward since λ̂_opt is a consistent estimator of λ_opt and the asymptotic order of λ̂_opt satisfies Lyapunov's condition.

4. Extension to Additive Models

We extend the direct method to regression models with multidimensional explanatory variables. In particular, we consider additive models in this section. For the dataset {(Y_i, x_i) : i = 1, ..., n} with one-dimensional response Y_i and d-dimensional explanatory variable x_i = (x_{i1}, ..., x_{id}), the additive model connects the unknown regression functions f_j (j = 1, ..., d) and the mean parameter μ via

Y_i = μ + f_1(x_{i1}) + ··· + f_d(x_{id}) + ε_i.  (26)

We assume that x_{ij} is located in an interval [a_j, b_j] and that f_1, ..., f_d are normalized as

∫_{a_j}^{b_j} f_j(u) du = 0,  j = 1, ..., d,  (27)

to ensure the identifiability of the f_j. The intercept μ is typically estimated via

μ̂ = (1/n) ∑_{i=1}^{n} Y_i.  (28)

Hence we replace Y_i with Y_i − μ̂ in (26) and set μ = 0, redefining the additive model as

Y_i = f_1(x_{i1}) + ··· + f_d(x_{id}) + ε_i,  i = 1, ..., n,  (29)

where each Y_i is centered. We aim to estimate the f_j via a penalized spline method. Let

s_j(x) = ∑_{k=-p}^{K_j-1} B_{jk}(x) b_{jk},  j = 1, ..., d,  (30)

be the B-spline models, where B_{jk}(x) is the pth degree B-spline basis function with knots {κ_{j,k} : k = -p, ..., K_j + p + 1}, κ_{j,0} = a_j, κ_{j,K_j+1} = b_j, for j = 1, ..., d, and the b_{jk} are unknown parameters.

We consider the B-spline additive model

Y_i = s_1(x_{i1}) + ··· + s_d(x_{id}) + ε_i,  i = 1, ..., n,  (31)

and estimate the s_j and b_{jk}. For j = 1, ..., d, the penalized spline estimator b̂_j = (b̂_{j,-p} ··· b̂_{j,K_j-1})^T of b_j = (b_{j,-p} ··· b_{j,K_j-1})^T is defined via

(b̂_1, ..., b̂_d) = argmin_{b_1, ..., b_d} [ ∑_{i=1}^{n} {Y_i − ∑_{j=1}^{d} ∑_{k=-p}^{K_j-1} B_{jk}(x_{ij}) b_{jk}}² + ∑_{j=1}^{d} λ_j b_j^T D_{j,m}^T D_{j,m} b_j ],  (32)

where the λ_j are the smoothing parameters and D_{j,m} is the mth order difference matrix of size (K_j + p − m) × (K_j + p) for j = 1, ..., d. Using b̂_j, the penalized spline estimator of f_j(x_j) is defined as

f̂_j(x_j) = ∑_{k=-p}^{K_j-1} B_{j,k}(x_j) b̂_{j,k},  j = 1, ..., d.  (33)

The asymptotics for f̂_j(x_j) have been studied by Yoshida and Naito [22], who derived the asymptotic bias and variance to be

E[f̂_j(x_j)] = f_j(x_j) + K_j^{-(p+1)} b_{ja}(x_j)(1 + o(1)) − (λ_j/n) B_j(x_j)^T Λ_{jn}^{-1} D_{j,m}^T D_{j,m} b_j^* + o(λ_j K_j^{1-m}/n),

V[f̂_j(x_j)] = (σ²/n) B_j(x_j)^T G_{jn}^{-1} B_j(x_j) {1 + o(1)} − (λ_j σ²/n) C_j(x_j | n, K_j) + O(λ_j² K_j³/n³),  (34)

where B_j(x) = (B_{j,-p}(x) ··· B_{j,K_j-1}(x))^T, G_{jn} = n^{-1} Z_j^T Z_j, Λ_{jn} = G_{jn} + (λ_j/n) D_{j,m}^T D_{j,m}, Z_j = (B_{j,-p+k-1}(x_{ij}))_{ik}, and b_j^* is the best L_∞ approximation of f_j. Here

b_{ja}(x) = − (f_j^{(p+1)}(x)/(p+1)!) ∑_{k=0}^{K_j} I(κ_{j,k} ≤ x < κ_{j,k+1}) Br_{p+1}((x − κ_{j,k})/(κ_{j,k+1} − κ_{j,k})),

C_j(x_j | n, K_j) = (1/n) B_j(x_j)^T G_{jn}^{-1} D_{j,m}^T D_{j,m} G_{jn}^{-1} B_j(x_j).  (35)

The above asymptotic bias and variance of f̂_j(x_j) are similar to those of the penalized spline estimator in the univariate regression with {(Y_i, x_{ij}) : i = 1, ..., n}. Furthermore, the asymptotic normality of (f̂_1(x_1), ..., f̂_d(x_d)) has been shown by Yoshida and Naito [22]. From their paper, we find that f̂_i(x_i) and f̂_j(x_j) are asymptotically independent for i ≠ j. This gives some theoretical justification for selecting λ_j by minimizing the MISE of f̂_j. Similarly to the discussion in Section 3, the minimizer λ_{j,opt} of the MISE of f̂_j(x_j) can be obtained for j = 1, ..., d:

λ_{j,opt} = n ∫_{a_j}^{b_j} L_{jn}(x | f_j^{(p+1)}, b_j^*, σ²) dx / (2 ∫_{a_j}^{b_j} H_{jn}(x | b_j^*) dx),  (36)

where

H_{jn}(x | b_j^*) = {B_j(x)^T G_{jn}^{-1} D_{j,m}^T D_{j,m} b_j^*}²,

L_{jn}(x | f_j^{(p+1)}, b_j^*, σ²) = (2 b_{ja}(x)/K_j^{p+1}) B_j(x)^T G_{jn}^{-1} D_{j,m}^T D_{j,m} b_j^* + σ² C_j(x | n, K_j).  (37)

Since f_j, b_j^*, and σ² are unknown, they should be estimated. The pilot estimators of f_j, b_j^*, and σ² are constructed by the regression spline method as in Section 3. Using the pilot estimators f̂_j^{(p+1)} and b̂_j of f_j^{(p+1)} and b_j^* and the estimator σ̂² of σ², we construct the estimator of λ_{j,opt}:

λ̂_{j,opt} = n ∑_{r=1}^{R} L_{jn}(z_r | f̂_j^{(p+1)}, b̂_j, σ̂²) / (2 ∑_{r=1}^{R} H_{jn}(z_r | b̂_j)),  (38)

where {z_r}_{r=1}^{R} is some finite set of grid points on [a_j, b_j]. For j = 1, ..., d, we then obtain the penalized spline estimator f̂_j(x_j) of f_j(x_j),

f̂_j(x_j) = B_j(x_j)^T b̂_{j,opt},  (39)

where b̂_{j,opt} is the penalized spline estimator of b_j obtained with λ̂_{j,opt}. From Theorem 2 and the proof of Theorem 3.4 of Yoshida and Naito [22], the asymptotic normality of (f̂_1(x_1), ..., f̂_d(x_d)) based on (λ̂_{1,opt}, ..., λ̂_{d,opt}) can be shown.

Remark 7. Since the true regression functions are normalized, the estimator f̂_j should also be centered as

f̂_j(x_j) − (1/n) ∑_{i=1}^{n} f̂_j(x_{ij}).  (40)

Remark 8. The penalized spline estimators of b_1, ..., b_d can be obtained using a backfitting algorithm (Hastie and Tibshirani [30]). The backfitting algorithm for penalized splines in additive regression is detailed in Marx and Eilers [3] and Yoshida and Naito [21]; a small sketch is given below.
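The following R sketch of the backfitting iteration for (32) is our own simplified illustration (names are ours); the smoothing parameters λ_j are treated as given, for example chosen by the direct method in (38):

    library(splines)

    # Backfitting for the additive penalized spline model (32).
    # X: n x d matrix of covariates, y: centered response,
    # lambda, K: vectors of length d (smoothing parameters, knot numbers).
    backfit_pspline <- function(X, y, lambda, K, p = 3, m = 2, tol = 1e-6, maxit = 50) {
      n <- nrow(X); d <- ncol(X)
      Zs <- lapply(1:d, function(j) {
        knots <- c(rep(min(X[, j]), p),
                   seq(min(X[, j]), max(X[, j]), length.out = K[j] + 1),
                   rep(max(X[, j]), p))
        splineDesign(knots, X[, j], ord = p + 1)
      })
      Ps <- lapply(1:d, function(j)
        lambda[j] * crossprod(diff(diag(ncol(Zs[[j]])), differences = m)))
      f <- matrix(0, n, d)                                  # current component fits f_j(x_ij)
      for (it in 1:maxit) {
        f_old <- f
        for (j in 1:d) {
          r  <- y - rowSums(f[, -j, drop = FALSE])          # partial residual
          bj <- solve(crossprod(Zs[[j]]) + Ps[[j]], crossprod(Zs[[j]], r))
          fj <- Zs[[j]] %*% bj
          f[, j] <- fj - mean(fj)                           # center each component (Remark 7)
        }
        if (max(abs(f - f_old)) < tol) break
      }
      f
    }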

Remark 9. Although we focus on nonparametric additive regression in this paper, the direct method can also be applied to generalized additive models. We omit this discussion because the procedure is similar to the case of the additive models discussed in this section.

Remark 10. The direct method is quite computationally efficient compared to the grid search method in additive models. In a grid search, we prepare candidate values for each λ_j. Let M_j be the set of all candidate grid values of λ_j for j = 1, ..., d. Then we need to run the backfitting algorithm |M_1| × ··· × |M_d| times. On the other hand, with the direct method it is sufficient to run the backfitting algorithm only twice: once for the pilot estimator and once for the final penalized spline estimator. Thus, compared with the conventional grid search method, the direct method can drastically reduce computation time.

5. Numerical Study

In this section, we investigate the finite sample performance of the proposed direct method in a Monte Carlo simulation. Let us first consider the univariate regression model (1) for the data {(Y_i, x_i) : i = 1, ..., n}. Then we use the three

Table 1: Results of sample MISE for n = 50 and n = 200. All entries for MISE are 10² times their actual values.

True F1 F2 F3

Sample size n = 50 n = 200 n = 50 n = 200 n = 50 n = 200

L-Direct 0.526 0.322 3.684 1.070 1.160 0.377

L-GCV 0.726 0.621 5.811 1.082 1.183 0.379

L-REML 2.270 1.544 9.966 3.520 1.283 0.873

Local linear 0.751 0.220 4.274 0.901 1.689 0.503

C-Direct 0.401 0.148 3.027 0.656 1.044 0.326

C-GCV 0.514 0.137 3.526 0.666 1.043 0.290

C-REML 1.326 0.732 8.246 4.213 3.241 0.835

Table 2: Results of sample MSE of the smoothing parameter obtained by the direct method, GCV, and REML for n = 50 and n = 200. All entries for MSE are 10² times their actual values.

True F1 F2 F3

Sample size n = 50 n = 200 n = 50 n = 200 n = 50 n = 200

L-Direct 0.331 0.193 0.113 0.043 0.025 0.037

L-GCV 0.842 0.342 0.445 0.101 0.072 0.043

L-REML 1.070 1.231 0.842 0.437 0.143 0.268

C-Direct 0.262 0.014 0.082 0.045 0.053 0.014

C-GCV 0.452 0.123 0.252 0.092 0.085 0.135

C-REML 0.855 0.224 0.426 0.152 0.093 0.463

types of true regression function: f(x) = cos(π(x − 0.3)), f(x) = φ((x − 0.8)/0.05)/√0.05 − φ((x − 0.2)/0.04)/√0.04, and f(x) = 2x³ + 3 sin(2π(x − 0.8)³) + 3 exp{−(x − 0.5)²/0.1}, which are labeled F1, F2, and F3, respectively. Here φ(x) is the density function of the normal distribution. The explanatory x_i and the error ε_i are independently generated from the uniform distribution on [0, 1] and N(0, 0.5²), respectively. We estimate each true regression function via the penalized spline method. We use the linear and cubic B-spline bases with equidistant knots and the second order difference penalty. In addition, we set K = 5n^{2/5} equidistant knots, and the smoothing parameter is determined by the direct method. The penalized spline estimators with the linear spline and the cubic spline are denoted by L-Direct and C-Direct, respectively. For comparison with L-Direct, the same study is also carried out for the penalized spline estimator with a linear spline and the smoothing parameter selected via GCV or the restricted maximum likelihood method (REML) in the mixed model representation, and for the local polynomial estimator with normal kernel and plug-in bandwidth (see Ruppert et al. [24]). In GCV, we set the candidate values of λ to {i/10 : i = 0, 1, ..., 99}. These three estimators are denoted by L-GCV, L-REML, and local linear, respectively. Furthermore we compare C-Direct with C-GCV and C-REML, which are the penalized spline estimators with the cubic spline and the smoothing parameter determined by GCV and REML. Let

sMISE = (1/R) ∑_{r=1}^{R} [ (1/J) ∑_{j=1}^{J} {f̂_r(z_j) − f(z_j)}² ]

be the sample MISE of any estimator f̂(x) of f(x), where f̂_r(z) is f̂(z) for the rth replication and z_j = j/J. We calculate the sample MISE of the penalized spline estimator with the direct method, GCV, and REML, and of the local linear estimator. In this simulation, we use J = 100 and R = 1000. We simulated n = 50 and 200.
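For reference, the data-generating step of this simulation can be written as below (a sketch with our own naming; the scaling constants in F2 follow the form given above and should be checked against the original article):

    # True regression functions F1-F3 of the univariate simulation.
    f1 <- function(x) cos(pi * (x - 0.3))
    f2 <- function(x) dnorm((x - 0.8) / 0.05) / sqrt(0.05) - dnorm((x - 0.2) / 0.04) / sqrt(0.04)
    f3 <- function(x) 2 * x^3 + 3 * sin(2 * pi * (x - 0.8)^3) + 3 * exp(-(x - 0.5)^2 / 0.1)

    # One replication: x_i ~ U[0, 1] and eps_i ~ N(0, 0.5^2).
    gen_data <- function(n, f) {
      x <- runif(n)
      list(x = x, y = f(x) + rnorm(n, sd = 0.5))
    }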

The sMISE of all estimators for each model and each n is given in Table 1. The penalized spline estimator using the direct method shows good performance in each setting. In comparison with the other smoothing parameter selection methods, the direct method is slightly better than GCV as a whole, although for n = 200 C-GCV is better than C-Direct for F1 and F3, with a very small difference. We see that the sMISE of C-Direct is smaller than that of the local linear estimator, whereas the local linear estimator behaves better than L-Direct in some cases. For F2, C-Direct is the smallest of all estimators for both n = 50 and n = 200. Although the overall performance seems to depend on the situation in which the data sets are generated, we believe that the proposed method is an efficient one.

Next, the difference between λ̂_opt and λ_opt is investigated empirically. Let λ̂_{opt,r} be the value of λ̂_opt in the rth replication, r = 1, ..., 1000. Then we calculate the sample MSE of λ̂_opt,

sMSE = (1/R) ∑_{r=1}^{R} {λ̂_{opt,r} − λ_opt}²,

for F1, F2, and F3 and n = 50 and 200. To construct λ_opt for F1, F2, and F3, we use the true f^{(4)}(x) and σ² and an approximate b^*. Here the approximate b^* means the sample average of b̂ with n = 200 and 1000 replications.

In Table 2, the sMSE of λ̂_opt for each true function is reported. For comparison, the sMSE of the smoothing

Table 3: Results of sample PSE for n = 50 and n = 200. All entries for PSE are 10² times their actual values.

True F1 F2 F3

Sample size n = 50 n = 200 n = 50 n = 200 n = 50 n = 200

L-Direct 0.424 0.052 0.341 0.070 0.360 0.117

L-GCV 0.331 0.084 0.381 0.089 0.333 0.139

L-REML 0.642 0.124 0.831 1.002 0.733 0.173

Local linear 0.751 0.220 0.474 0.901 0.489 0.143

C-Direct 0.341 0.48 0.237 0.63 0.424 0.086

C-GCV 0.314 0.43 0.326 0.76 0.533 0.089

C-REML 0.863 1.672 0.456 1.21 1.043 0.125

parameters obtained via GCV and REML is also calculated. The sMSE of L-Direct and C-Direct is small even for n = 50 for all true functions. Therefore the accuracy of λ̂_opt appears to be guaranteed, which indicates that the pilot estimator constructed via the least squares method is not bad. The sMSE with the direct method is smaller than that with GCV and REML. This result is not surprising since GCV and REML do not target λ_opt. However, together with Table 1, it appears that the sample MSE of the smoothing parameter is reflected in the sample MISE of the estimator.

The proposed direct method was derived by minimizing the MISE. On the other hand, GCV and REML are obtained in the context of minimizing the prediction squared error (PSE) and the prediction error, respectively. Hence we compare the sample PSE of the penalized spline estimator with the direct method, GCV, and REML, and of the local linear estimator. Since the prediction error is almost the same as the MISE (see Section 4 of Ruppert et al. [4]), we omit the comparison of the prediction error. Let

sPSE = (1/R) ∑_{r=1}^{R} [ (1/n) ∑_{i=1}^{n} {y_{ir} − f̂_r(x_i)}² ]

be the sample PSE for any estimator f̂(x), where y_{ir} (r = 1, ..., R) is independently generated from N(f(x_i), 0.5²) for all i.

In Table 3, the modified sPSE, |sPSE − 0.5²|, of all estimators for each model and each n is reported. In Remark 11, we discuss the modified sPSE. From the results, we can confirm that the direct method has good predictive ability. GCV can be regarded as an estimator of the PSE, so in some cases the sPSE with GCV is smaller than that with the direct method; the cause appears to be the accuracy of the variance estimate (see Remark 11). However, the difference is very small.

We also recorded the computational time (in seconds) taken to obtain f̂(x) for F1, p = 3, and n = 200. The fits with the direct method, GCV, and REML took 0.04, 1.22, and 0.34 seconds, respectively. Although the differences are small, the computational time of the direct method was shorter than that of GCV and REML.

Next we confirm the behavior of the penalized spline estimator with the direct method in the additive model. For the data {(Y_i, x_{i1}, x_{i2}, x_{i3}) : i = 1, ..., n}, we assume the additive model with true functions f_1(x_1) = sin(2πx_1), f_2(x_2) = φ((x_2 − 0.5)/0.2), and f_3(x_3) = 0.4 φ((x_3 − 0.1)/0.2) + 0.6 φ((x_3 − 0.8)/0.2). The error is as in the first simulation. The design points (x_{i1}, x_{i2}, x_{i3}) are generated as

(x_{i1})   ((1 + ρ + ρ²)^{-1}        0                  0         )   (1    ρ    ρ²)   (z_{i1})
(x_{i2}) = (       0           (1 + 2ρ)^{-1}            0         ) × (ρ    1    ρ )   (z_{i2})
(x_{i3})   (       0                 0         (1 + ρ + ρ²)^{-1}  )   (ρ²   ρ    1 )   (z_{i3}),

where the z_{ij} (i = 1, ..., n, j = 1, 2, 3) are generated independently from the uniform distribution on [0, 1]. In this simulation, we adopt ρ = 0 and ρ = 0.5. The functions were then corrected to satisfy

∫_0^1 f_j(z) dz = 0,  j = 1, 2, 3,

for each ρ. We construct the penalized spline estimator via the backfitting algorithm with the cubic spline and second order difference penalty. The number of equidistant knots is K_j = 5n^{2/5}, and the smoothing parameters are determined using the direct method. The pilot estimator used to construct λ̂_{j,opt} is the regression spline estimator with a fifth degree spline and K_j = n^{2/5}. We calculate the sample MISE of each f̂_j for n = 200 and 1000 Monte Carlo iterations. In order to compare with the direct method, we also conduct the same simulation with GCV and REML. In GCV, we set the candidate values of λ_j to {i/10 : i = 0, 1, ..., 9} for j = 1, 2, 3. Table 4 summarizes the sample MISE of f̂_j (j = 1, 2, 3), denoted by MISEj, for the direct method, GCV, and REML. The penalized spline estimator with the direct method performs like that with GCV in both the uncorrelated and correlated design cases. For ρ = 0, the behaviors of MISE1, MISE2, and MISE3 with the direct method are similar, whereas for GCV, MISE1 is slightly larger than MISE2 and MISE3. The direct method leads to an efficient estimate for all covariates. On the whole, the direct method is better than REML. From the above, we believe that the direct method is preferable in practice.
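A sketch of the correlated design generation described at the beginning of this experiment (our own code; the normalizing diagonal matrix follows the display above):

    # Design points for the additive-model simulation: x_i = S^{-1} A z_i with z_ij ~ U[0, 1].
    make_design <- function(n, rho) {
      A <- rbind(c(1,     rho, rho^2),
                 c(rho,   1,   rho),
                 c(rho^2, rho, 1))
      S <- diag(c(1 + rho + rho^2, 1 + 2 * rho, 1 + rho + rho^2))   # row sums of A
      Z <- matrix(runif(3 * n), n, 3)                               # z_i1, z_i2, z_i3
      t(solve(S) %*% A %*% t(Z))                                    # n x 3 matrix of (x_i1, x_i2, x_i3)
    }
    X <- make_design(n = 200, rho = 0.5)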

Table 4: Results of sample MISE of the penalized spline estimator. For each of the direct method, GCV, and REML and each ρ, MISE1, MISE2, and MISE3 are the sample MISE of f̂_1, f̂_2, and f̂_3, respectively. All entries for MISE are 10² times their actual values.

Method          ρ = 0                       ρ = 0.5
                MISE1   MISE2   MISE3       MISE1   MISE2   MISE3

Direct 0.281 0.209 0.205 0.916 0.795 0.972

GCV 0.687 0.237 0.390 0.929 0.857 1.112

REML 1.204 0.825 0.621 2.104 1.832 2.263

Table 5: Results of sample MSE of the selected smoothing parameter via the direct method, GCV, and REML for ρ = 0 and ρ = 0.5. All entries for MSE are 10² times their actual values.

                ρ = 0                       ρ = 0.5
True            f1      f2      f3          f1      f2      f3

Direct 0.124 0.042 0.082 0.312 0.105 0.428

GCV 1.315 0.485 0.213 0.547 0.352 0.741

REML 2.531 1.237 2.656 1.043 0.846 1.588

Table 6: Results of sample PSE of the penalized spline estimator. All entries for PSE are 10² times their actual values.

n = 200         ρ = 0          ρ = 0.5

Direct 0.281 0.429

GCV 0.387 0.467

REML 1.204 0.925

To confirm the consistency of the direct method, the sample MSE of λ̂_{j,opt} is calculated in the same manner as in the univariate regression case. For comparison, the sample MSE of GCV and REML are also obtained. Here the true λ_{j,opt} (j = 1, 2, 3) is defined in the same way as in the univariate case. In Table 5, the results for f_1, f_2, and f_3 and for ρ = 0 and 0.5 are shown. We see from the results that the behavior of the direct method is good: the sample MSE of the direct method is smaller than that of GCV and REML for all j. For the correlated design ρ = 0.5, the same tendency can be seen.

Table 6 shows the sPSE of f̂(x_i) = f̂_1(x_{i1}) + f̂_2(x_{i2}) + f̂_3(x_{i3}) with each smoothing parameter selection method. We see from the results that the efficiency of the direct method is also guaranteed in terms of prediction accuracy.

Finally, we report the computational time (in seconds) required to construct the penalized spline estimator with each method. The computational times with the direct method, GCV, and REML were 11.87 s, 126.43 s, and 43.5 s, respectively. We see that the direct method is more efficient than the other methods in terms of computation. All computations in the simulation were done using the software R on a computer with a 3.40 GHz CPU and 24.0 GB of memory. Although these are only a few examples, the direct method can be seen as a good method for selecting the smoothing parameter.

Remark 11. We calculate |sPSE − 0.5²| as the criterion to assess the prediction squared error. The ordinary PSE is defined as

PSE = (1/n) ∑_{i=1}^{n} E[{Y_i^* − f̂(x_i)}²] = σ² + (1/n) ∑_{i=1}^{n} E[{f̂(x_i) − f(x_i)}²],

where the Y_i^* are test data. The second term of the PSE is similar to the MISE, so the sample PSE evaluates the accuracy of both the variance part and the MISE part. To focus on the difference in the sample PSE between the direct method and the other methods, we therefore calculated |sPSE − σ²| in this section.

6. Discussion

In this paper, we proposed a new direct method for determining the smoothing parameter of a penalized spline estimator in a regression problem. The direct method is based on minimizing the MISE of the penalized spline estimator, and we studied its asymptotic properties. The asymptotic normality of the penalized spline estimator using λ̂_opt is theoretically guaranteed when a consistent estimator is used as the pilot estimator to obtain λ̂_opt. In the numerical study, for the additive model, the computational cost of the direct method is dramatically reduced compared to grid search methods such as GCV. Furthermore, we find that the performance of the direct method is better than, or at least similar to, that of other methods.

The direct method can be developed for other regression models, such as varying-coefficient models, Cox proportional hazards models, single-index models, and others, whenever the asymptotic bias and variance of the penalized spline estimator can be derived. It is not limited to mean regression; it can also be applied to quantile regression. Indeed, Yoshida [31] has presented the asymptotic bias and variance of the penalized spline estimator in univariate quantile regression. Furthermore, improving the direct method is important in various situations and for various types of data. In particular, the development of a determination of a locally adaptive λ is an interesting avenue for further research.

Appendix

We describe the technical details. For a matrix A = (a_{ij})_{ij}, let ||A||_∞ = max_{ij} {|a_{ij}|}. First, to prove Theorems 1 and 2, we introduce a fundamental property of penalized splines in the following lemma.

Lemma A.1. Let A = (a_{ij})_{ij} be a (K + p) square matrix. Suppose that K = o(n^{1/2}), λ = o(n/K^{1-m}), and ||A||_∞ = O(1). Then ||A G_n^{-1}||_∞ = O(K) and ||A Λ_n^{-1}||_∞ = O(K).

The proof of Lemma A.1 is given in Zhou et al. [28] and Claeskens et al. [18]. Repeated use of Lemma A.1 yields that the asymptotic order of the variance of the second term of (12) is O(λ²(K/n)³). Actually, the variance of the second term of (12) can be calculated as

(λ/n)² (σ²/n) B(x)^T Λ_n^{-1} D_m^T D_m G_n^{-1} D_m^T D_m Λ_n^{-1} B(x) = O(λ² K³/n³).  (A.1)

When K = o(n^{1/2}) and λ = O(n/K^{1-m}), λ²(K/n)³ = o(λ(K/n)²) holds.

Proof of Theorem 1. We write

C(x | n, K, λ) = (σ²/n) B(x)^T G_n^{-1} D_m^T D_m G_n^{-1} B(x) + (σ²/n) B(x)^T {Λ_n^{-1} − G_n^{-1}} D_m^T D_m G_n^{-1} B(x).  (A.2)

Since

Λ_n^{-1} − G_n^{-1} = Λ_n^{-1} {G_n − Λ_n} G_n^{-1} = − (λ/n) Λ_n^{-1} D_m^T D_m G_n^{-1},  (A.3)

we have ||Λ_n^{-1} − G_n^{-1}||_∞ = O(λK²/n), and hence the second term of the right-hand side of (A.2) can be shown to be O((K/n)²(λK/n)). From Lemma A.1, it is easy to derive that C(x | n, K, 0) = O(K(K/n)). Consequently, we have

C(x | n, K, λ) = C(x | n, K, 0) + O((K/n)²(λK/n)) = C(x | n, K, 0)(1 + o(1)),  (A.4)

and this leads to Theorem 1.

Proof of Theorem 2. It is sufficient to prove that ∫_a^b D_{1n}(x | b^*) dx = O(K^{2(1-m)}) and

∫_a^b D_{2n}(x | f^{(p+1)}, b^*) dx = O(K^{-(p+m)}) + O(K²/n).  (A.5)

Since

∫_a^b {B(x)^T G_n^{-1} D_m^T D_m b^*}² dx ≤ sup_{x∈[a,b]} [{B(x)^T G_n^{-1} D_m^T D_m b^*}²] (b − a),  (A.6)

we have

∫_a^b D_{1n}(x | b^*) dx = O(K^{2(1-m)})  (A.7)

by using Lemma A.1 and the fact that ||D_m b^*||_∞ = O(K^{-m}) (see Yoshida [31]).

From the properties of B-spline functions, for a (K + p) square matrix A, there exists C > 0 such that

∫_a^b B(x)^T A B(x) dx ≤ ||A||_∞ ∫_a^b B(x)^T B(x) dx ≤ ||A||_∞ C.  (A.8)

In other words, the asymptotic order of ∫_a^b B(x)^T A B(x) dx and that of ||A||_∞ are the same. Let A_n = G_n^{-1} D_m^T D_m G_n^{-1}. Then, since ||A_n||_∞ = O(K²), we obtain

∫_a^b C(x | n, K, 0) dx = O(K²/n).  (A.9)

Furthermore, because sup_{x∈[a,b]} {|b_a(x)|} = O(1), we have

K^{-(p+1)} ∫_a^b b_a(x) B(x)^T G_n^{-1} D_m^T D_m b^* dx = O(K^{-(p+m)}),  (A.10)

and hence it can be shown that

∫_a^b D_{2n}(x | f^{(p+1)}, b^*) dx = O(K^{-(p+m)}) + O(K²/n).  (A.11)

Together with (A.7) and (A.11), λ̂_opt = O(nK^{m-p-2} + K^{2m}) can be obtained. The rate of convergence of the penalized spline estimator with λ̂_opt is then O(n^{-2(p+1)/(2p+3)}) when K = O(n^{1/(2p+3)}), which is detailed in Yoshida and Naito [22]. □

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author would like to thank the two anonymous referees for their careful reading and for comments that improved the paper.

References

[1] F. O'Sullivan, "A statistical perspective on ill-posed inverse problems," Statistical Science, vol. 1, no. 4, pp. 502-527, 1986.

[2] P. H. C. Eilers and B. D. Marx, "Flexible smoothing with B-splines and penalties," Statistical Science, vol. 11, no. 2, pp. 89-121, 1996, with comments and a rejoinder by the authors.

[3] B. D. Marx and P. H. C. Eilers, "Direct generalized additive modeling with penalized likelihood," Computational Statistics and Data Analysis, vol. 28, no. 2, pp. 193-209, 1998.

[4] D. Ruppert, M. P. Wand, and R. J. Carroll, Semiparametric Regression, Cambridge University Press, Cambridge, UK, 2003.

[5] T. Krivobokova, "Smoothing parameter selection in two frameworks for penalized splines," Journal of the Royal Statistical Society B, vol. 75, no. 4, pp. 725-741, 2013.

[6] P. T. Reiss and R. T. Ogden, "Smoothing parameter selection for a class of semiparametric linear models," Journal of the Royal Statistical Society B, vol. 71, no. 2, pp. 505-523, 2009.

[7] S. N. Wood, "Modelling and smoothing parameter estimation with multiple quadratic penalties," Journal of the Royal Statistical Society B, vol. 62, no. 2, pp. 413-428, 2000.

[8] S. N. Wood, "Stable and efficient multiple smoothing parameter estimation for generalized additive models," Journal of the American Statistical Association, vol. 99, no. 467, pp. 673-686, 2004.

[9] S. N. Wood, "Fast stable direct fitting and smoothness selection for generalized additive models," Journal of the Royal Statistical Society B, vol. 70, no. 3, pp. 495-518, 2008.

[10] X. Lin and D. Zhang, "Inference in generalized additive mixed models by using smoothing splines," Journal of the Royal Statistical Society B, vol. 61, no. 2, pp. 381-400, 1999.

[11] M. P. Wand, "Smoothing and mixed models," Computational Statistics, vol. 18, no. 2, pp. 223-249, 2003.

[12] L. Fahrmeir, T. Kneib, and S. Lang, "Penalized structured additive regression for space-time data: a Bayesian perspective," Statistica Sinica, vol. 14, no. 3, pp. 731-761, 2004.

[13] L. Fahrmeir and T. Kneib, "Propriety of posteriors in structured additive regression models: theory and empirical evidence," Journal of Statistical Planning and Inference, vol. 139, no. 3, pp. 843-859, 2009.

[14] F. Heinzl, L. Fahrmeir, and T. Kneib, "Additive mixed models with Dirichlet process mixture and P-spline priors," Advances in Statistical Analysis, vol. 96, no. 1, pp. 47-68, 2012.

[15] G. Kauermann, "A note on smoothing parameter selection for penalized spline smoothing," Journal of Statistical Planning and Inference, vol. 127, no. 1-2, pp. 53-69, 2005.

[16] P. Hall and J. D. Opsomer, "Theory for penalised spline regression," Biometrika, vol. 92, no. 1, pp. 105-118, 2005.

[17] Y. Li and D. Ruppert, "On the asymptotics of penalized splines," Biometrika, vol. 95, no. 2, pp. 415-436, 2008.

[18] G. Claeskens, T. Krivobokova, and J. D. Opsomer, "Asymptotic properties of penalized spline estimators," Biometrika, vol. 96, no. 3, pp. 529-544, 2009.

[19] G. Kauermann, T. Krivobokova, and L. Fahrmeir, "Some asymptotic results on generalized penalized spline smoothing,"

Journal of the Royal Statistical Society B, vol. 71, no. 2, pp. 487-503, 2009.

[20] X. Wang, J. Shen, and D. Ruppert, "On the asymptotics of penalized spline smoothing," Electronic Journal of Statistics, vol. 5, pp. 1-17, 2011.

[21] T. Yoshida and K. Naito, "Asymptotics for penalized additive B-spline regression," Journal of the Japan Statistical Society, vol. 42, no. 1, pp. 81-107, 2012.

[22] T. Yoshida and K. Naito, "Asymptotics for penalized splines in generalized additive models," Journal of Nonparametric Statistics, vol. 26, no. 2, pp. 269-289, 2014.

[23] L. Xiao, Y. Li, and D. Ruppert, "Fast bivariate P-splines: the sandwich smoother," Journal of the Royal Statistical Society B, vol. 75, no. 3, pp. 577-599, 2013.

[24] D. Ruppert, S. J. Sheather, and M. P. Wand, "An effective bandwidth selector for local least squares regression," Journal of the American Statistical Association, vol. 90, no. 432, pp. 1257-1270, 1995.

[25] M. P. Wand and M. C. Jones, Kernel Smoothing, Chapman & Hall, London, UK, 1995.

[26] C. de Boor, A Practical Guide to Splines, Springer, New York, NY, USA, 2001.

[27] D. Ruppert, "Selecting the number of knots for penalized splines," Journal of Computational and Graphical Statistics, vol. 11, no. 4, pp. 735-757, 2002.

[28] S. Zhou, X. Shen, and D. A. Wolfe, "Local asymptotics for regression splines and confidence regions," The Annals of Statistics, vol. 26, no. 5, pp. 1760-1782, 1998.

[29] G. K. Robinson, "That BLUP is a good thing: the estimation of random effects," Statistical Science, vol. 6, no. 1, pp. 15-32, 1991.

[30] T. J. Hastie and R. J. Tibshirani, Generalized Additive Models, Chapman & Hall, London, UK, 1990.

[31] T. Yoshida, "Asymptotics for penalized spline estimators in quantile regression," Communications in Statistics-Theory and Methods, 2013.
