
Hindawi Publishing Corporation, Journal of Inequalities and Applications, Volume 2010, Article ID 130915, 10 pages, doi:10.1155/2010/130915

Research Article

Almost Sure Central Limit Theorem for a Nonstationary Gaussian Sequence

Qing-pei Zang

School of Mathematical Science, Huaiyin Normal University, Huaian 223300, China Correspondence should be addressed to Qing-pei Zang, zqphunhu@yahoo.com.cn Received 4 May 2010; Revised 7 July 2010; Accepted 12 August 2010 Academic Editor: Soo Hak Sung

Copyright © 2010 Qing-pei Zang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Let $\{X_n;\ n\ge 1\}$ be a standardized nonstationary Gaussian sequence, and denote $S_n=\sum_{k=1}^{n}X_k$, $\sigma_n=\sqrt{\operatorname{Var}(S_n)}$. Under an additional covariance condition, suppose that the constants $\{u_{ni};\ 1\le i\le n,\ n\ge 1\}$ satisfy $\sum_{i=1}^{n}\bigl(1-\Phi(u_{ni})\bigr)\to\tau$ as $n\to\infty$ for some $\tau>0$ and $\min_{1\le i\le n}u_{ni}\ge c(\log n)^{1/2}$ for some $c>0$. Then we have

$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\,I\Big\{\bigcap_{i=1}^{k}(X_i\le u_{ki}),\ \frac{S_k}{\sigma_k}\le x\Big\}=e^{-\tau}\,\Phi(x)$$

almost surely for any $x\in\mathbb{R}$, where $I(A)$ is the indicator function of the event $A$ and $\Phi(x)$ stands for the standard normal distribution function.

1. Introduction

Let $\{X, X_n;\ n\ge 1\}$ be a sequence of independent and identically distributed (i.i.d.) random variables, and set $S_n=\sum_{k=1}^{n}X_k$ and $M_n=\max_{1\le k\le n}X_k$ for $n\ge 1$. If $E(X)=0$ and $\operatorname{Var}(X)=1$, the so-called almost sure central limit theorem (ASCLT) has the simplest form as follows:

$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\,I\Big\{\frac{S_k}{\sqrt{k}}\le x\Big\}=\Phi(x) \qquad (1.1)$$

almost surely for all $x\in\mathbb{R}$, where $I(A)$ is the indicator function of the event $A$ and $\Phi(x)$ stands for the standard normal distribution function. This result was first proved independently by Brosamler [1] and Schatte [2] under a stronger moment condition; since then, this type of almost sure version has been extended in different directions. For example, Fahrner and Stadtmuller [3] and Cheng et al. [4] extended this almost sure convergence for partial sums to the case of maxima of i.i.d. random variables. Under some natural conditions, they proved the following:

$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\,I\Big\{\frac{M_k-b_k}{a_k}\le x\Big\}=G(x) \quad\text{almost surely} \qquad (1.2)$$

for all $x\in\mathbb{R}$, where $a_k>0$ and $b_k\in\mathbb{R}$ satisfy

$$P\Big(\frac{M_k-b_k}{a_k}\le x\Big)\to G(x), \quad\text{as } k\to\infty, \qquad (1.3)$$

for any continuity point $x$ of $G$.
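The logarithmic averaging in (1.1) can also be checked numerically. The following minimal Python sketch (sample size, random seed, and evaluation points are purely illustrative choices) simulates i.i.d. standard normal variables and compares the log-averaged empirical quantity on the left-hand side of (1.1) with $\Phi(x)$; since the convergence in (1.1) is only logarithmic, the agreement is rough for moderate $n$.

import numpy as np
from scipy.stats import norm

# Minimal numerical check of the ASCLT (1.1): the log-averaged empirical
# measure of S_k / sqrt(k) should be close to Phi(x) for large n.
rng = np.random.default_rng(0)
n = 100_000
X = rng.standard_normal(n)
S = np.cumsum(X)
k = np.arange(1, n + 1)
T = S / np.sqrt(k)                 # S_k / sqrt(k)
weights = 1.0 / k                  # logarithmic averaging weights 1/k
log_n = np.log(n)

for x in (-1.0, 0.0, 1.0):
    lhs = np.sum(weights * (T <= x)) / log_n
    print(f"x = {x:+.1f}:  log-average = {lhs:.4f},  Phi(x) = {norm.cdf(x):.4f}")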

In a related work, Csaki and Gonchigdanzan [5] investigated the validity of (1.2) for maxima of stationary Gaussian sequences under some mild conditions, whereas Chen and Lin [6] extended it to nonstationary Gaussian sequences. Recently, Dudzinski [7] obtained a two-dimensional version for standardized stationary Gaussian sequences. In this paper, inspired by the above results, we further study the ASCLT in the joint version for a nonstationary Gaussian sequence.

2. Main Result

Throughout this paper, let $\{X_n;\ n\ge 1\}$ be a nonstationary standardized Gaussian sequence, and let $S_n=\sum_{k=1}^{n}X_k$ and $\sigma_n=\sqrt{\operatorname{Var}(S_n)}$. Here $a\ll b$ and $a\sim b$ stand for $a=O(b)$ and $a/b\to 1$, respectively; $\Phi(x)$ is the standard normal distribution function, and $\phi(x)$ is its density function. $C$ denotes a positive constant whose value may change from one appearance to the next. Now, we state our main result as follows.

Theorem 2.1. Let $\{X_n;\ n\ge 1\}$ be a sequence of nonstationary standardized Gaussian variables with covariance matrix $(r_{ij})$ such that $0\le r_{ij}\le\rho_{|i-j|}$ for $i\ne j$, where $\rho_n<1$ for all $n\ge 1$ and $\sup_{s>n}\sum_{i=s-n}^{s}\rho_i\ll(\log n)^{1/2}/(\log\log n)^{1+\varepsilon}$ for some $\varepsilon>0$. If the constants $\{u_{ni};\ 1\le i\le n,\ n\ge 1\}$ satisfy $\sum_{i=1}^{n}\bigl(1-\Phi(u_{ni})\bigr)\to\tau$ as $n\to\infty$ for some $\tau>0$ and $\min_{1\le i\le n}u_{ni}\ge c(\log n)^{1/2}$ for some $c>0$, then

$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\,I\Big\{\bigcap_{i=1}^{k}(X_i\le u_{ki}),\ \frac{S_k}{\sigma_k}\le x\Big\}=e^{-\tau}\,\Phi(x)$$

almost surely for any $x\in\mathbb{R}$.

Remark 2.2. The condition $\sup_{s>n}\sum_{i=s-n}^{s}\rho_i\ll(\log n)^{1/2}/(\log\log n)^{1+\varepsilon}$, $\varepsilon>0$, is inspired by condition (a1) in Dudzinski [8], which is much weaker.
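To illustrate the statement of Theorem 2.1 numerically, the following minimal Python sketch treats the simplest admissible case: independent $X_i$ (so that $\rho_n\equiv 0$ and $\sigma_k=\sqrt{k}$) with constant-in-$i$ levels $u_{ki}=\Phi^{-1}(1-\tau/k)$, for which $\sum_{i=1}^{k}(1-\Phi(u_{ki}))=\tau$. All parameter values here are illustrative assumptions, not taken from the paper, and the logarithmic convergence is slow.

import numpy as np
from scipy.stats import norm

# Sketch of Theorem 2.1 in the independent case (rho_n = 0), with levels
# u_{ki} = Phi^{-1}(1 - tau/k); then {X_i <= u_{ki}, i <= k} = {M_k <= u_k}.
rng = np.random.default_rng(1)
n, tau, x = 200_000, 1.0, 0.5

X = rng.standard_normal(n)
k = np.arange(1, n + 1)
S = np.cumsum(X) / np.sqrt(k)      # S_k / sigma_k (sigma_k = sqrt(k) here)
M = np.maximum.accumulate(X)       # running maxima M_k
u = norm.ppf(1.0 - tau / k)        # levels u_k with k(1 - Phi(u_k)) = tau

indicator = (M <= u) & (S <= x)
lhs = np.sum(indicator / k) / np.log(n)
print(f"log-average = {lhs:.4f},  e^(-tau)*Phi(x) = {np.exp(-tau) * norm.cdf(x):.4f}")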

3. Proof

First, we introduce the following lemmas, which will be used to prove our main result.

Lemma 3.1. Under the assumptions of Theorem 2.1, one has

$$\sum_{1\le i<j\le n}r_{ij}\exp\Big(-\frac{u_{ni}^2+u_{nj}^2}{2(1+r_{ij})}\Big)\ll\frac{1}{(\log\log n)^{1+\varepsilon}}. \qquad (3.1)$$

Proof. This lemma comes from Chen and Lin [6].

The following lemma combines Theorem 2.1 and Corollary 2.1 of Li and Shao [9].

Lemma 3.2. (1) Let $\{\xi_n\}$ and $\{\eta_n\}$ be sequences of standard Gaussian variables with covariance matrices $R^1=(r_{ij}^1)$ and $R^0=(r_{ij}^0)$, respectively. Put $\rho_{ij}=\max\bigl(|r_{ij}^1|,|r_{ij}^0|\bigr)$. Then one has

$$P\Big(\bigcap_{j=1}^{n}\{\xi_j\le u_j\}\Big)-P\Big(\bigcap_{j=1}^{n}\{\eta_j\le u_j\}\Big)\le\frac{1}{2\pi}\sum_{1\le i<j\le n}\bigl(\arcsin(r_{ij}^1)-\arcsin(r_{ij}^0)\bigr)^{+}\exp\Big(-\frac{u_i^2+u_j^2}{2(1+\rho_{ij})}\Big) \qquad (3.2)$$

for any real numbers $u_i$, $i=1,2,\ldots,n$.

(2) Let $\{\xi_n;\ n\ge 1\}$ be standard Gaussian variables with $r_{ij}=\operatorname{Cov}(\xi_i,\xi_j)$. Then

$$\Big|P\Big(\bigcap_{j=1}^{n}\{\xi_j\le u_j\}\Big)-\prod_{j=1}^{n}P(\xi_j\le u_j)\Big|\le\frac{1}{4}\sum_{1\le i<j\le n}|r_{ij}|\exp\Big(-\frac{u_i^2+u_j^2}{2(1+|r_{ij}|)}\Big) \qquad (3.3)$$

for any real numbers $u_i$, $i=1,2,\ldots,n$.
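As a quick sanity check of the comparison bound (3.3), the following Python sketch estimates the left-hand side by Monte Carlo and evaluates the right-hand side on a small example; the dimension, covariance structure, and levels are arbitrary illustrative choices.

import numpy as np
from scipy.stats import norm

# Monte Carlo check of (3.3) for a 4-dimensional Gaussian vector with
# r_ij = rho^{|i-j|} and common level u_j = 1.5 (illustrative choices).
rng = np.random.default_rng(2)
n, reps, rho = 4, 200_000, 0.3
R = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
u = np.full(n, 1.5)

Z = rng.multivariate_normal(np.zeros(n), R, size=reps)
joint = np.mean(np.all(Z <= u, axis=1))        # P( cap {xi_j <= u_j} ), Monte Carlo
product = np.prod(norm.cdf(u))                 # prod_j P(xi_j <= u_j)

i, j = np.triu_indices(n, k=1)
bound = 0.25 * np.sum(np.abs(R[i, j]) *
                      np.exp(-(u[i] ** 2 + u[j] ** 2) / (2 * (1 + np.abs(R[i, j])))))
print(f"|joint - product| = {abs(joint - product):.4f},  bound = {bound:.4f}")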

Lemma 3.3. Let $\{X_n\}$ be a sequence of standard Gaussian variables satisfying the conditions of Theorem 2.1. Then, for $1\le k\le n$, one has

$$P\Big(\bigcap_{i=k+1}^{n}\{X_i\le u_{ni}\},\ \frac{S_n}{\sigma_n}\le y\Big)-P\Big(\bigcap_{i=1}^{n}\{X_i\le u_{ni}\},\ \frac{S_n}{\sigma_n}\le y\Big)\ll\frac{k}{n}+\frac{1}{(\log\log n)^{1+\varepsilon}} \qquad (3.4)$$

for any $y\in\mathbb{R}$.

Proof. By the conditions of Theorem 2.1, we have

$$\sigma_n=\Big(n+2\sum_{1\le i<j\le n}r_{ij}\Big)^{1/2}\ge\sqrt{n}; \qquad (3.5)$$

then, for $1\le i\le n$, by $\sup_{s>n}\sum_{i=s-n}^{s}\rho_i\ll(\log n)^{1/2}/(\log\log n)^{1+\varepsilon}$, $\varepsilon>0$, it follows that

$$\operatorname{Cov}\Big(X_i,\frac{S_n}{\sigma_n}\Big)\ll\frac{1}{\sqrt{n}}\Big(1+\sum_{k=1}^{n}\rho_k\Big)\ll\frac{(\log n)^{1/2}}{\sqrt{n}\,(\log\log n)^{1+\varepsilon}}. \qquad (3.6)$$

Then, there exist numbers $\delta\in(0,1)$ and $n_0$ such that, for any $n\ge n_0$, we have

$$\sup_{1\le i\le n}\operatorname{Cov}\Big(X_i,\frac{S_n}{\sigma_n}\Big)\le\delta<1. \qquad (3.7)$$
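For completeness, the elementary computation behind (3.6) can be spelled out; it only uses the covariance domination $r_{ij}\le\rho_{|i-j|}$ and $\sigma_n\ge\sqrt{n}$ from (3.5), the factor $2$ accounting for lags on both sides of $i$ and being absorbed by $\ll$:

$$\operatorname{Cov}\Big(X_i,\frac{S_n}{\sigma_n}\Big)=\frac{1}{\sigma_n}\sum_{j=1}^{n}r_{ij}\le\frac{1}{\sqrt{n}}\Big(1+\sum_{j\ne i}\rho_{|i-j|}\Big)\le\frac{1}{\sqrt{n}}\Big(1+2\sum_{k=1}^{n}\rho_k\Big)\ll\frac{(\log n)^{1/2}}{\sqrt{n}\,(\log\log n)^{1+\varepsilon}}.$$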

We can write

$$\begin{aligned}
L&:=\Big|P\Big(\bigcap_{i=k+1}^{n}\{X_i\le u_{ni}\},\ \frac{S_n}{\sigma_n}\le y\Big)-P\Big(\bigcap_{i=1}^{n}\{X_i\le u_{ni}\},\ \frac{S_n}{\sigma_n}\le y\Big)\Big|\\
&\le\Big|P\Big(\bigcap_{i=k+1}^{n}\{X_i\le u_{ni}\},\ \frac{S_n}{\sigma_n}\le y\Big)-P\Big(\bigcap_{i=k+1}^{n}\{X_i\le u_{ni}\}\Big)P(Y_n\le y)\Big|\\
&\quad+\Big|P\Big(\bigcap_{i=1}^{n}\{X_i\le u_{ni}\},\ \frac{S_n}{\sigma_n}\le y\Big)-P\Big(\bigcap_{i=1}^{n}\{X_i\le u_{ni}\}\Big)P(Y_n\le y)\Big|\\
&\quad+\Big|P\Big(\bigcap_{i=k+1}^{n}\{X_i\le u_{ni}\}\Big)-P\Big(\bigcap_{i=1}^{n}\{X_i\le u_{ni}\}\Big)\Big|\,P(Y_n\le y)\\
&=:L_1+L_2+L_3, \qquad (3.8)
\end{aligned}$$

where $Y_n$ is a random variable which has the same distribution as $S_n/\sigma_n$ but is independent of $(X_1,X_2,\ldots,X_n)$. For $L_1$ and $L_2$, apply Lemma 3.2(1) with $\xi_i=X_i$, $i=1,\ldots,n$, $\xi_{n+1}=S_n/\sigma_n$, and $\eta_j=X_j$, $j=1,\ldots,n$, $\eta_{n+1}=Y_n$. Then $r_{ij}^1=r_{ij}^0=r_{ij}$ for $1\le i<j\le n$, and $r_{ij}^1=\operatorname{Cov}(X_i,S_n/\sigma_n)$, $r_{ij}^0=0$ for $1\le i\le n$, $j=n+1$. Thus we have, for $i=1,2$,

$$L_i\ll\sum_{j=1}^{n}\operatorname{Cov}\Big(X_j,\frac{S_n}{\sigma_n}\Big)\exp\Big(-\frac{u_{nj}^2+y^2}{2\bigl(1+\operatorname{Cov}(X_j,S_n/\sigma_n)\bigr)}\Big). \qquad (3.9)$$

Since (3.5), (3.7) hold, we obtain

$$L_i\ll\frac{(\log n)^{1/2}}{\sqrt{n}\,(\log\log n)^{1+\varepsilon}}\sum_{j=1}^{n}\exp\Big(-\frac{u_{nj}^2+y^2}{2(1+\delta)}\Big). \qquad (3.10)$$

Now define $U_n$ by $1-\Phi(U_n)=1/n$. By the well-known fact

$$1-\Phi(x)\sim\frac{\phi(x)}{x},\qquad x\to\infty, \qquad (3.11)$$

it is easy to see that

$$\exp\Big(-\frac{U_n^2}{2}\Big)\sim\frac{\sqrt{2\pi}\,U_n}{n},\qquad U_n\sim\sqrt{2\log n}. \qquad (3.12)$$
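For the reader's convenience, here is the short computation that yields (3.12) from (3.11):

$$\frac{1}{n}=1-\Phi(U_n)\sim\frac{\phi(U_n)}{U_n}=\frac{1}{\sqrt{2\pi}\,U_n}\exp\Big(-\frac{U_n^2}{2}\Big)\ \Longrightarrow\ \exp\Big(-\frac{U_n^2}{2}\Big)\sim\frac{\sqrt{2\pi}\,U_n}{n},$$

and taking logarithms gives $\tfrac{1}{2}U_n^2=\log n-\log\bigl(\sqrt{2\pi}\,U_n\bigr)+o(1)$; since $\log U_n=o(\log n)$, this yields $U_n\sim\sqrt{2\log n}$.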

Thus, according to the assumption $\min_{1\le i\le n}u_{ni}\ge c(\log n)^{1/2}$, we have $u_{ni}\ge cU_n$ for some $c>0$. Hence

$$\begin{aligned}
L_i&\ll\frac{(\log n)^{1/2}}{\sqrt{n}\,(\log\log n)^{1+\varepsilon}}\sum_{1\le j\le n}\exp\Big(-\frac{u_{nj}^2}{2(1+\delta)}\Big)\\
&\ll\frac{\sqrt{n}\,(\log n)^{1/2}}{(\log\log n)^{1+\varepsilon}}\exp\Big(-\frac{U_n^2}{2(1+\delta)}\Big)\\
&\ll\frac{\sqrt{n}\,(\log n)^{1/2}}{(\log\log n)^{1+\varepsilon}}\cdot\frac{\bigl(\sqrt{2\log n}\,\bigr)^{1/(1+\delta)}}{n^{1/(1+\delta)}}\\
&\ll\frac{\bigl(\sqrt{\log n}\,\bigr)^{(2+\delta)/(1+\delta)}}{n^{1/(1+\delta)-(1/2)}\,(\log\log n)^{1+\varepsilon}}
\ll\frac{1}{n^{\delta'}},\qquad\delta'>0. \qquad (3.13)
\end{aligned}$$

Now, we are in a position to estimate L3. Observe that

$$\begin{aligned}
L_3&\le\Big|P\Big(\bigcap_{i=k+1}^{n}\{X_i\le u_{ni}\}\Big)-P\Big(\bigcap_{i=1}^{n}\{X_i\le u_{ni}\}\Big)\Big|\\
&\le\Big|P\Big(\bigcap_{i=k+1}^{n}\{X_i\le u_{ni}\}\Big)-\prod_{i=k+1}^{n}\Phi(u_{ni})\Big|
+\Big|P\Big(\bigcap_{i=1}^{n}\{X_i\le u_{ni}\}\Big)-\prod_{i=1}^{n}\Phi(u_{ni})\Big|
+\Big|\prod_{i=k+1}^{n}\Phi(u_{ni})-\prod_{i=1}^{n}\Phi(u_{ni})\Big|\\
&=:L_{31}+L_{32}+L_{33}. \qquad (3.14)
\end{aligned}$$

For $L_{33}$, it follows that

$$L_{33}=\prod_{i=k+1}^{n}\Phi(u_{ni})\Big(1-\prod_{i=1}^{k}\Phi(u_{ni})\Big)\ll 1-\Phi^{k}(U_n)=1-\Big(1-\frac{1}{n}\Big)^{k}\le\frac{k}{n}. \qquad (3.15)$$

By Lemma 3.2(2), we have

$$L_{3i}\le\frac{1}{4}\sum_{1\le i<j\le n}r_{ij}\exp\Big(-\frac{u_{ni}^2+u_{nj}^2}{2(1+r_{ij})}\Big),\qquad i=1,2. \qquad (3.16)$$

Thus, by Lemma 3.1, we obtain the desired result. □

Lemma 3.4. Let $\{X_n\}$ be a sequence of standard Gaussian variables satisfying the conditions of Theorem 2.1. Then, for $1\le k<n$ and any $y\in\mathbb{R}$, one has

$$\Big|\operatorname{Cov}\Big(I\Big(\bigcap_{i=1}^{k}\{X_i\le u_{ki}\},\ \frac{S_k}{\sigma_k}\le y\Big),\ I\Big(\bigcap_{i=k+1}^{n}\{X_i\le u_{ni}\},\ \frac{S_n}{\sigma_n}\le y\Big)\Big)\Big|\ll\sqrt{\frac{k}{n}}\,\frac{(\log n)^{1/2}}{(\log\log n)^{1+\varepsilon}}+\frac{1}{(\log\log n)^{1+\varepsilon}}. \qquad (3.17)$$

Proof. Apply Lemma 3.2(1) with $\xi_i=X_i$ for $1\le i\le k$, $\xi_{k+1}=S_k/\sigma_k$, $\xi_{i+1}=X_i$ for $k+1\le i\le n$, $\xi_{n+2}=S_n/\sigma_n$, and $\eta_j=\xi_j$ for $1\le j\le k+1$, $\eta_j=\tilde{\xi}_j$ for $k+2\le j\le n+2$, where $(\tilde{\xi}_{k+2},\ldots,\tilde{\xi}_{n+2})$ has the same distribution as $(\xi_{k+2},\ldots,\xi_{n+2})$ but is independent of $(\xi_1,\ldots,\xi_{k+1})$. Then

$$\begin{aligned}
&r_{ij}^1=r_{ij}^0\quad\text{for } 1\le i<j\le k+1\ \text{or}\ k+2\le i<j\le n+2;\\
&r_{ij}^1=r_{i(j-1)},\quad r_{ij}^0=0\quad\text{for } 1\le i\le k,\ k+2\le j\le n+1;\\
&r_{ij}^1=\operatorname{Cov}\Big(X_i,\frac{S_n}{\sigma_n}\Big),\quad r_{ij}^0=0\quad\text{for } 1\le i\le k,\ j=n+2;\\
&r_{ij}^1=\operatorname{Cov}\Big(X_i,\frac{S_k}{\sigma_k}\Big),\quad r_{ij}^0=0\quad\text{for } k+1\le i\le n,\ j=k+1;\\
&r_{ij}^1=\operatorname{Cov}\Big(\frac{S_k}{\sigma_k},\frac{S_n}{\sigma_n}\Big),\quad r_{ij}^0=0\quad\text{for } i=k+1,\ j=n+2.
\end{aligned} \qquad (3.18)$$

Thus, combined with (3.5) and (3.7), it follows that

$$\begin{aligned}
&\Big|\operatorname{Cov}\Big(I\Big(\bigcap_{i=1}^{k}\{X_i\le u_{ki}\},\ \frac{S_k}{\sigma_k}\le y\Big),\ I\Big(\bigcap_{i=k+1}^{n}\{X_i\le u_{ni}\},\ \frac{S_n}{\sigma_n}\le y\Big)\Big)\Big|\\
&\quad=\Big|P\Big(\bigcap_{i=1}^{k}\{X_i\le u_{ki}\},\ \bigcap_{i=k+1}^{n}\{X_i\le u_{ni}\},\ \frac{S_k}{\sigma_k}\le y,\ \frac{S_n}{\sigma_n}\le y\Big)\\
&\qquad-P\Big(\bigcap_{i=1}^{k}\{X_i\le u_{ki}\},\ \frac{S_k}{\sigma_k}\le y\Big)P\Big(\bigcap_{i=k+1}^{n}\{X_i\le u_{ni}\},\ \frac{S_n}{\sigma_n}\le y\Big)\Big|\\
&\quad\le\frac{1}{4}\sum_{1\le i\le k}\,\sum_{k+1\le j\le n}r_{ij}\exp\Big(-\frac{u_{ki}^2+u_{nj}^2}{2(1+r_{ij})}\Big)
+\frac{1}{4}\sum_{i=1}^{k}\operatorname{Cov}\Big(X_i,\frac{S_n}{\sigma_n}\Big)\exp\Big(-\frac{u_{ki}^2+y^2}{2(1+\delta)}\Big)\\
&\qquad+\frac{1}{4}\sum_{i=k+1}^{n}\operatorname{Cov}\Big(X_i,\frac{S_k}{\sigma_k}\Big)\exp\Big(-\frac{u_{ni}^2+y^2}{2(1+\delta)}\Big)
+\frac{1}{4}\operatorname{Cov}\Big(\frac{S_k}{\sigma_k},\frac{S_n}{\sigma_n}\Big)\exp\Big(-\frac{y^2}{1+\delta}\Big)\\
&\quad=:T_1+T_2+T_3+T_4. \qquad (3.19)
\end{aligned}$$

Using Lemma 3.1, we have

$$T_1\ll\frac{1}{(\log\log n)^{1+\varepsilon}},\qquad\varepsilon>0. \qquad (3.20)$$

By a technique similar to that used to prove (3.10), we obtain

$$T_2\ll\frac{1}{n^{\alpha}},\qquad\alpha>0. \qquad (3.21)$$

For $T_3$, by $\sup_{s>n}\sum_{i=s-n}^{s}\rho_i\ll(\log n)^{1/2}/(\log\log n)^{1+\varepsilon}$, $\varepsilon>0$, and (3.12), we have

$$\begin{aligned}
T_3&\ll\exp\Big(-\frac{U_n^2}{2(1+\delta)}\Big)\sum_{i=k+1}^{n}\operatorname{Cov}\Big(X_i,\frac{S_k}{\sigma_k}\Big)
\ll\frac{1}{n^{1/(1+\delta)}}\sum_{i=k+1}^{n}\operatorname{Cov}\Big(X_i,\frac{S_k}{\sigma_k}\Big)\\
&\ll\frac{1}{n^{1/(1+\delta)}}\cdot\frac{1}{\sqrt{k}}\sum_{i=k+1}^{n}\sum_{j=1}^{k}\operatorname{Cov}(X_i,X_j)
\ll\frac{1}{n^{1/(1+\delta)}}\cdot\frac{1}{\sqrt{k}}\cdot k\sum_{i=1}^{n}\rho_i\\
&\ll\frac{\sqrt{k}\,(\log n)^{1/2}}{n^{1/(1+\delta)}\,(\log\log n)^{1+\varepsilon}}
\ll\frac{1}{n^{\zeta}},\qquad\zeta>0. \qquad (3.22)
\end{aligned}$$

As to $T_4$, by (3.5) and (3.6), we have

$$T_4\ll\frac{1}{\sigma_k}\sum_{i=1}^{k}\operatorname{Cov}\Big(X_i,\frac{S_n}{\sigma_n}\Big)\ll\sqrt{\frac{k}{n}}\,\frac{(\log n)^{1/2}}{(\log\log n)^{1+\varepsilon}}. \qquad (3.23)$$

Thus the proof of this lemma is completed. □

Proof of Theorem 2.1. First, by assumptions and Theorem 6.1.3 in Leadbetter et al. [10], we have

$$P\Big(\bigcap_{i=1}^{n}\{X_i\le u_{ni}\}\Big)\to e^{-\tau},\quad\text{as } n\to\infty. \qquad (3.24)$$

Let $Y_n$ denote a random variable which has the same distribution as $S_n/\sigma_n$ but is independent of $(X_1,X_2,\ldots,X_n)$. Then, by (3.10), we derive

$$P\Big(\bigcap_{i=1}^{n}\{X_i\le u_{ni}\},\ \frac{S_n}{\sigma_n}\le y\Big)-P\Big(\bigcap_{i=1}^{n}\{X_i\le u_{ni}\}\Big)P\{Y_n\le y\}\to 0,\quad\text{as } n\to\infty. \qquad (3.25)$$

Thus, by the standard normal property of $Y_n$, we have

$$\lim_{n\to\infty}P\Big(\bigcap_{i=1}^{n}\{X_i\le u_{ni}\},\ \frac{S_n}{\sigma_n}\le y\Big)=e^{-\tau}\,\Phi(y),\qquad y\in\mathbb{R}. \qquad (3.26)$$

Hence, to complete the proof, it is sufficient to show

$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\Big[I\Big\{\bigcap_{i=1}^{k}(X_i\le u_{ki}),\ \frac{S_k}{\sigma_k}\le x\Big\}-P\Big\{\bigcap_{i=1}^{k}(X_i\le u_{ki}),\ \frac{S_k}{\sigma_k}\le x\Big\}\Big]=0\quad\text{almost surely}. \qquad (3.27)$$

In order to show this, by Lemma 3.1 in Csaki and Gonchigdanzan [5], we only need to prove

$$\operatorname{Var}\Big(\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\,I\Big\{\bigcap_{i=1}^{k}(X_i\le u_{ki}),\ \frac{S_k}{\sigma_k}\le x\Big\}\Big)\ll\frac{1}{(\log\log n)^{1+\varepsilon}} \qquad (3.28)$$

for $\varepsilon>0$ and any $x\in\mathbb{R}$. Let $\eta_k=I\{\bigcap_{i=1}^{k}(X_i\le u_{ki}),\ \frac{S_k}{\sigma_k}\le x\}-P\{\bigcap_{i=1}^{k}(X_i\le u_{ki}),\ \frac{S_k}{\sigma_k}\le x\}$. Then

$$\operatorname{Var}\Big(\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\,I\Big\{\bigcap_{i=1}^{k}(X_i\le u_{ki}),\ \frac{S_k}{\sigma_k}\le x\Big\}\Big)=E\Big(\frac{1}{\log n}\sum_{k=1}^{n}\frac{\eta_k}{k}\Big)^{2}\le\frac{1}{\log^{2}n}\sum_{k=1}^{n}\frac{E|\eta_k|^{2}}{k^{2}}+\frac{2}{\log^{2}n}\sum_{1\le k<l\le n}\frac{|E(\eta_k\eta_l)|}{kl}=:S_1+S_2. \qquad (3.29)$$

Since $|\eta_k|\le 2$, it follows that

$$S_1\ll\frac{1}{\log^{2}n}. \qquad (3.30)$$

Now, we turn to estimate S2. Observe that for l > k

$$\begin{aligned}
|E(\eta_k\eta_l)|&=\Big|\operatorname{Cov}\Big(I\Big(\bigcap_{i=1}^{k}\{X_i\le u_{ki}\},\ \frac{S_k}{\sigma_k}\le x\Big),\ I\Big(\bigcap_{i=1}^{l}\{X_i\le u_{li}\},\ \frac{S_l}{\sigma_l}\le x\Big)\Big)\Big|\\
&\le\Big|\operatorname{Cov}\Big(I\Big(\bigcap_{i=1}^{k}\{X_i\le u_{ki}\},\ \frac{S_k}{\sigma_k}\le x\Big),\ I\Big(\bigcap_{i=1}^{l}\{X_i\le u_{li}\},\ \frac{S_l}{\sigma_l}\le x\Big)-I\Big(\bigcap_{i=k+1}^{l}\{X_i\le u_{li}\},\ \frac{S_l}{\sigma_l}\le x\Big)\Big)\Big|\\
&\quad+\Big|\operatorname{Cov}\Big(I\Big(\bigcap_{i=1}^{k}\{X_i\le u_{ki}\},\ \frac{S_k}{\sigma_k}\le x\Big),\ I\Big(\bigcap_{i=k+1}^{l}\{X_i\le u_{li}\},\ \frac{S_l}{\sigma_l}\le x\Big)\Big)\Big|\\
&=:S_{21}+S_{22}. \qquad (3.31)
\end{aligned}$$

By Lemma 3.3, we have

$$S_{21}\ll\frac{k}{l}+\frac{1}{(\log\log l)^{1+\varepsilon}}. \qquad (3.32)$$

Using Lemma 3.4, it follows that

$$S_{22}\ll\sqrt{\frac{k}{l}}\,\frac{(\log l)^{1/2}}{(\log\log l)^{1+\varepsilon}}+\frac{1}{(\log\log l)^{1+\varepsilon}}. \qquad (3.33)$$

Hence, for $l>k$, we have

$$|E(\eta_k\eta_l)|\ll\frac{k}{l}+\sqrt{\frac{k}{l}}\,\frac{(\log l)^{1/2}}{(\log\log l)^{1+\varepsilon}}+\frac{1}{(\log\log l)^{1+\varepsilon}}. \qquad (3.34)$$

Consequently

$$\begin{aligned}
S_2&\ll\frac{1}{\log^{2}n}\sum_{1\le k<l\le n}\frac{1}{kl}\Big(\frac{k}{l}+\sqrt{\frac{k}{l}}\,\frac{(\log l)^{1/2}}{(\log\log l)^{1+\varepsilon}}\Big)+\frac{1}{\log^{2}n}\sum_{1\le k<l\le n}\frac{1}{kl}\cdot\frac{1}{(\log\log l)^{1+\varepsilon}}\\
&\ll\frac{1}{\log^{2}n}\sum_{1\le k<l\le n}\frac{1}{l^{2}}+\frac{(\log n)^{1/2}}{\log^{2}n}\sum_{1\le k<l\le n}\frac{1}{\sqrt{k}\,l^{3/2}\,(\log\log l)^{1+\varepsilon}}+\frac{1}{\log^{2}n}\sum_{l=3}^{n}\frac{1}{l\,(\log\log l)^{1+\varepsilon}}\sum_{k=1}^{l-1}\frac{1}{k}\\
&\ll\frac{1}{\log n}+\frac{(\log n)^{1/2}}{\log^{2}n}\sum_{l=3}^{n}\frac{1}{l\,(\log\log l)^{1+\varepsilon}}+\frac{1}{\log^{2}n}\sum_{l=3}^{n}\frac{\log l}{l\,(\log\log l)^{1+\varepsilon}}\\
&\ll\frac{1}{\log n}+\frac{1}{\sqrt{\log n}\,(\log\log n)^{1+\varepsilon}}+\frac{1}{(\log\log n)^{1+\varepsilon}}
\ll\frac{1}{\log n}+\frac{1}{(\log\log n)^{1+\varepsilon}}. \qquad (3.35)
\end{aligned}$$

Thus, (3.28) follows from (3.30) and (3.35), and hence our main result is proved. □

Acknowledgments

The author thanks the referees for pointing out some errors in a previous version, as well as for several comments that have led to improvements in this paper. The author would also like to thank Professor Zuoxiang Peng of Southwest University in China for his help. This work was supported by the Young Excellent Talent Foundation of Huaiyin Normal University.

References

[1] G. A. Brosamler, "An almost everywhere central limit theorem," Mathematical Proceedings of the Cambridge Philosophical Society, vol. 104, no. 3, pp. 561-574, 1988.

[2] P. Schatte, "On strong versions of the central limit theorem," Mathematische Nachrichten, vol. 137, pp. 249-256, 1988.

[3] I. Fahrner and U. Stadtmuller, "On almost sure max-limit theorems," Statistics & Probability Letters, vol. 37, no. 3, pp. 229-236, 1998.

[4] S. Cheng, L. Peng, and Y. Qi, "Almost sure convergence in extreme value theory," Mathematische Nachrichten, vol. 190, pp. 43-50, 1998.

[5] E. Csaki and K. Gonchigdanzan, "Almost sure limit theorems for the maximum of stationary Gaussian sequences," Statistics & Probability Letters, vol. 58, no. 2, pp. 195-203, 2002.

[6] S. Chen and Z. Lin, "Almost sure max-limits for nonstationary Gaussian sequence," Statistics & Probability Letters, vol. 76, no. 11, pp. 1175-1184, 2006.

[7] M. Dudzinski, "The almost sure central limit theorems in the joint version for the maxima and sums of certain stationary Gaussian sequences," Statistics & Probability Letters, vol. 78, no. 4, pp. 347-357, 2008.

[8] M. Dudzinski, "An almost sure limit theorem for the maxima and sums of stationary Gaussian sequences," Probability and Mathematical Statistics, vol. 23, no. 1, pp. 139-152, 2003.

[9] W. V. Li and Q. Shao, "A normal comparison inequality and its applications," Probability Theory and Related Fields, vol. 122, no. 4, pp. 494-508, 2002.

[10] M. R. Leadbetter, G. Lindgren, and H. Rootzen, Extremes and Related Properties of Random Sequences and Processes, Springer Series in Statistics, Springer, New York, NY, USA, 1983.
