Hindawi Publishing Corporation, Journal of Inequalities and Applications, Volume 2010, Article ID 234964, 7 pages, doi:10.1155/2010/234964

Research Article

A Note on Almost Sure Central Limit Theorem in the Joint Version for the Maxima and Sums

Qing-pei Zang,1 Zhi-xiang Wang,1 and Ke-ang Fu2

1 School of Mathematical Science, Huaiyin Normal University, Huaian 223300, China

2 School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou 310018, China

Correspondence should be addressed to Qing-pei Zang, zqphunhu@yahoo.com.cn

Received 30 March 2010; Accepted 25 May 2010

Academic Editor: Jewgeni Dshalalow

Copyright © 2010 Qing-pei Zang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Let $\{X_n; n \geq 1\}$ be a sequence of independent and identically distributed (i.i.d.) random variables and denote $S_n = \sum_{k=1}^{n} X_k$, $M_n = \max_{1 \leq k \leq n} X_k$. In this paper, we investigate the almost sure central limit theorem in the joint version for the maxima and sums. If for some numerical sequences $(a_n > 0)$, $(b_n)$ we have $(M_n - b_n)/a_n \xrightarrow{d} G$ for a nondegenerate distribution $G$, and $f(x, y)$ is a bounded Lipschitz 1 function, then

\[
\lim_{n \to \infty} \frac{1}{D_n} \sum_{k=1}^{n} d_k f\left(\frac{S_k}{\sqrt{k}}, \frac{M_k - b_k}{a_k}\right) = \iint f(x, y)\,\Phi(dx)\,G(dy)
\]

almost surely, where $\Phi(x)$ stands for the standard normal distribution function, $D_n = \sum_{k=1}^{n} d_k$, and $d_k = \exp((\log k)^{\alpha})/k$, $0 \leq \alpha < 1/2$.

1. Introduction and Main Results

Let $\{X, X_n; n \geq 1\}$ be a sequence of independent and identically distributed (i.i.d.) random variables, and set $S_n = \sum_{k=1}^{n} X_k$ and $M_n = \max_{1 \leq k \leq n} X_k$ for $n \geq 1$. If $E(X) = 0$ and $E(X^2) = 1$, the classical almost sure central limit theorem (ASCLT) has the simplest form as follows:

\[
\lim_{n \to \infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, I\left(\frac{S_k}{\sqrt{k}} \leq x\right) = \Phi(x) \tag{1.1}
\]

almost surely for all $x \in \mathbb{R}$; here and in the sequel, $I(A)$ is the indicator function of the event $A$, and $\Phi(x)$ stands for the standard normal distribution function. This result was first proved independently by Brosamler [1] and Schatte [2] under a stronger moment condition. Since then, this type of almost sure version, which mainly deals with logarithmic average limit theorems, has been extended in various directions. Fahrner and Stadtmüller [3] and

Cheng et al. [4] extended this almost sure convergence for partial sums to the case of maxima of i.i.d. random variables. Under some natural conditions, they proved that

\[
\lim_{n \to \infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, I\left(\frac{M_k - b_k}{a_k} \leq x\right) = G(x) \tag{1.2}
\]

almost surely for all $x \in \mathbb{R}$, where $a_k > 0$ and $b_k \in \mathbb{R}$ satisfy

\[
P\left(\frac{M_n - b_n}{a_n} \leq x\right) \longrightarrow G(x) \tag{1.3}
\]

for any continuity point x of G.
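Although the results above are purely theoretical, the limit relations (1.1) and (1.2) are easy to probe numerically. The following Python sketch is illustrative only (the sample size, the evaluation points, and the index shift in the norming constants are ad hoc choices, not part of the original argument): it approximates both logarithmic averages along a single path of i.i.d. standard normal variables, using the Gaussian norming of Example 1.4 below. Since the averaging is logarithmic, only rough agreement with $\Phi(x)$ and the Gumbel limit can be expected at feasible sample sizes.

```python
# Illustration of the log-average ASCLTs (1.1) and (1.2) for i.i.d. N(0,1) data.
# Numerical sketch only; a_k, b_k are the usual Gaussian (Gumbel) norming,
# shifted by one index so that log k > 0 from the very first term.
import numpy as np
from math import erf, sqrt, exp

Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))   # standard normal cdf
Lam = lambda t: exp(-exp(-t))                      # Gumbel cdf

rng = np.random.default_rng(2010)
n = 200_000
x = y = 0.0                                        # evaluation points

X = rng.standard_normal(n)
S = np.cumsum(X)                                   # partial sums S_k
M = np.maximum.accumulate(X)                       # running maxima M_k
k = np.arange(1, n + 1)

a = (2 * np.log(k + 1)) ** -0.5
b = (2 * np.log(k + 1)) ** 0.5 - 0.5 * a * (np.log(np.log(k + 1)) + np.log(4 * np.pi))

w = 1.0 / k                                        # log-average weights 1/k
W = w.sum()                                        # ~ log n

print("(1.1):", np.sum(w * (S / np.sqrt(k) <= x)) / W, "vs", Phi(x))
print("(1.2):", np.sum(w * ((M - b) / a <= y)) / W, "vs", Lam(y))
```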

For Gaussian sequences, Csáki and Gonchigdanzan [5] investigated the validity of (1.2) for the maxima of stationary Gaussian sequences under some mild conditions. Furthermore, Chen and Lin [6] extended it to nonstationary Gaussian sequences. As for some other dependent random variables, Peligrad and Shao [7] and Dudziński [8] derived some corresponding results about the ASCLT. The almost sure central limit theorem in the joint version for the log average in the case of independent and identically distributed random variables was obtained by Peng et al. [9]; a joint version of the almost sure limit theorem for the log average of maxima and partial sums in the case of stationary Gaussian random variables was derived by Dudziński [10].

All the above results concern the almost sure logarithmic version. In this paper, inspired by the results of Berkes and Csáki [11], we further study the ASCLT in the joint version for the maxima and partial sums with a different weight sequence $(d_n)$. We now state our main result as follows.

Theorem 1.1. Let $\{X, X_n; n \geq 1\}$ be a sequence of independent and identically distributed (i.i.d.) random variables with nondegenerate common distribution function $F$, satisfying $E(X) = 0$ and $E(X^2) = 1$. If for some numerical sequences $(a_n > 0)$ and $(b_n)$ one has

\[
\frac{M_n - b_n}{a_n} \xrightarrow{d} G \tag{1.4}
\]

for a nondegenerate distribution $G$, and $f(x, y)$ is a bounded Lipschitz 1 function, then

\[
\lim_{n \to \infty} \frac{1}{D_n} \sum_{k=1}^{n} d_k f\left(\frac{S_k}{\sqrt{k}}, \frac{M_k - b_k}{a_k}\right) = \iint f(x, y)\,\Phi(dx)\,G(dy) \tag{1.5}
\]

almost surely, where $\Phi(x)$ stands for the standard normal distribution function, $D_n = \sum_{k=1}^{n} d_k$, and

$d_k = \exp((\log k)^{\alpha})/k$, $0 \leq \alpha < 1/2$.

Remark 1.2. Since the bounded Lipschitz 1 functions form a convergence-determining class among the bounded continuous functions, Theorem 1.1 remains true for all bounded continuous functions $f(x, y)$.

Remark 1.3. Under the conditions of Theorem 1.1, the corresponding result for indicator functions can be obtained by routine approximation arguments, similar to those in Lacey and Philipp [12]; that is,

\[
\lim_{n \to \infty} \frac{1}{D_n} \sum_{k=1}^{n} d_k\, I\left(\frac{S_k}{\sqrt{k}} \leq x,\ \frac{M_k - b_k}{a_k} \leq y\right) = \Phi(x) G(y) \tag{1.6}
\]

almost surely.

Example 1.4. The ASCLT has already found applications in many fields, including condensed matter physics, statistical mechanics, ergodic theory and dynamical systems, control and information, and quantile estimation. As an example, we assume that $\{X, X_n; n \geq 1\}$ is a sequence of independent and identically distributed (i.i.d.) random variables with standard normal distribution function $\Phi(x)$, and in (1.4) we choose

\[
a_n = (2 \log n)^{-1/2}, \tag{1.7}
\]

\[
b_n = (2 \log n)^{1/2} - \frac{1}{2} (2 \log n)^{-1/2} (\log\log n + \log 4\pi), \tag{1.8}
\]

which imply that (see Leadbetter et al. [13, Theorem 4.3.3])

\[
\frac{M_n - b_n}{a_n} \xrightarrow{d} \Lambda, \tag{1.9}
\]

where $\Lambda$ is one of the extreme value distributions, that is,

\[
\Lambda(y) = \exp\{-\exp(-y)\}. \tag{1.10}
\]

Then, we can derive the corresponding result from Theorem 1.1.
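The convergence in (1.5) can likewise be probed numerically in this Gaussian setting. The following Python sketch is again illustrative only: the exponent $\alpha = 0.4$, the sample sizes, and the bounded Lipschitz test function $f(u, v) = \tanh(u + v)$ are arbitrary choices made for the example. It evaluates the weighted average on the left-hand side of (1.5) with weights $d_k = \exp((\log k)^{\alpha})/k$ and compares it with a Monte Carlo approximation of $\iint f(x, y)\,\Phi(dx)\,\Lambda(dy)$; agreement is only approximate at moderate $n$ because the weights grow very slowly.

```python
# Numerical sketch of (1.5) in the setting of Example 1.4: i.i.d. N(0,1) data,
# Gumbel limit for the normalized maxima, weights d_k = exp((log k)^alpha)/k.
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 500_000, 0.4
f = lambda u, v: np.tanh(u + v)        # a bounded Lipschitz test function

X = rng.standard_normal(n)
S = np.cumsum(X)[1:]                   # S_k for k = 2, ..., n
M = np.maximum.accumulate(X)[1:]       # M_k for k = 2, ..., n
k = np.arange(2, n + 1)                # start at k = 2 so that log k > 0

a = (2 * np.log(k)) ** -0.5            # norming (1.7)
b = (2 * np.log(k)) ** 0.5 - 0.5 * a * (np.log(np.log(k)) + np.log(4 * np.pi))  # (1.8)
d = np.exp(np.log(k) ** alpha) / k     # weights d_k
D = d.sum()                            # D_n

lhs = np.sum(d * f(S / np.sqrt(k), (M - b) / a)) / D

# Monte Carlo value of the limit: int int f(x, y) Phi(dx) Lambda(dy)
Z = rng.standard_normal(1_000_000)     # standard normal
G = rng.gumbel(size=1_000_000)         # standard Gumbel, cdf exp(-exp(-y))
rhs = f(Z, G).mean()

print(f"weighted average = {lhs:.3f},  limit (Monte Carlo) = {rhs:.3f}")
```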

2. Proof of Our Main Result

In this section, we denote $S_n = \sum_{k=1}^{n} X_k$, $S_{k,n} = \sum_{i=k+1}^{n} X_i$, $M_n = \max_{1 \leq i \leq n} X_i$, and $M_{k,n} = \max_{k+1 \leq i \leq n} X_i$ for $n \geq 1$, unless otherwise specified. Here $a \ll b$ and $a \sim b$ stand for $a = O(b)$ and $a/b \to 1$, respectively, and $\Phi(x)$ is the standard normal distribution function.

Proof of Theorem 1.1. First, by Theorem 1.1 in [14] and our assumptions, we have

\[
\lim_{n \to \infty} P\left(\frac{S_n}{\sqrt{n}} \leq x,\ \frac{M_n - b_n}{a_n} \leq y\right) = \Phi(x) G(y) \tag{2.1}
\]

for $x, y \in \mathbb{R}$. Then, in view of the dominated convergence theorem, we have

\[
E f\left(\frac{S_n}{\sqrt{n}}, \frac{M_n - b_n}{a_n}\right) \longrightarrow \iint f(x, y)\,\Phi(dx)\,G(dy). \tag{2.2}
\]

Hence, to complete the proof, it is sufficient to show that

\[
\lim_{n \to \infty} \frac{1}{D_n} \sum_{k=1}^{n} d_k \left( f\left(\frac{S_k}{\sqrt{k}}, \frac{M_k - b_k}{a_k}\right) - E f\left(\frac{S_k}{\sqrt{k}}, \frac{M_k - b_k}{a_k}\right) \right) = 0 \tag{2.3}
\]

almost surely. Let

\[
\xi_k := f\left(\frac{S_k}{\sqrt{k}}, \frac{M_k - b_k}{a_k}\right) - E f\left(\frac{S_k}{\sqrt{k}}, \frac{M_k - b_k}{a_k}\right). \tag{2.4}
\]

For l > k, it follows that

\[
\begin{aligned}
E(\xi_k \xi_l) &= \operatorname{Cov}\left( f\left(\frac{S_k}{\sqrt{k}}, \frac{M_k - b_k}{a_k}\right),\ f\left(\frac{S_l}{\sqrt{l}}, \frac{M_l - b_l}{a_l}\right) \right) \\
&= \operatorname{Cov}\left( f\left(\frac{S_k}{\sqrt{k}}, \frac{M_k - b_k}{a_k}\right),\ f\left(\frac{S_l}{\sqrt{l}}, \frac{M_l - b_l}{a_l}\right) - f\left(\frac{S_l}{\sqrt{l}}, \frac{M_{k,l} - b_l}{a_l}\right) \right) \\
&\quad + \operatorname{Cov}\left( f\left(\frac{S_k}{\sqrt{k}}, \frac{M_k - b_k}{a_k}\right),\ f\left(\frac{S_l}{\sqrt{l}}, \frac{M_{k,l} - b_l}{a_l}\right) - f\left(\frac{S_{k,l}}{\sqrt{l}}, \frac{M_{k,l} - b_l}{a_l}\right) \right) \\
&\quad + \operatorname{Cov}\left( f\left(\frac{S_k}{\sqrt{k}}, \frac{M_k - b_k}{a_k}\right),\ f\left(\frac{S_{k,l}}{\sqrt{l}}, \frac{M_{k,l} - b_l}{a_l}\right) \right) \\
&:= L_1 + L_2 + L_3.
\end{aligned} \tag{2.5}
\]

For $L_3$, by the independence of $\{X_n; n \geq 1\}$, we have

\[
L_3 = 0. \tag{2.6}
\]

Now we are in a position to estimate $L_1$. Using the fact that $f$ is bounded and Lipschitz, it follows that

\[
\begin{aligned}
L_1 &\ll E\left| f\left(\frac{S_l}{\sqrt{l}}, \frac{M_l - b_l}{a_l}\right) - f\left(\frac{S_l}{\sqrt{l}}, \frac{M_{k,l} - b_l}{a_l}\right) \right| \\
&\ll E \min\left(1,\ \frac{M_l - M_{k,l}}{a_l}\right) \\
&\leq P(M_l \neq M_{k,l}) = P(M_k > M_{k,l}) \\
&= k \int_{-\infty}^{\infty} (F(x))^{l-1}\,dF(x) \leq \frac{k}{l}.
\end{aligned} \tag{2.7}
\]

By the Cauchy-Schwarz inequality, we have

\[
L_2 \ll E\left| f\left(\frac{S_l}{\sqrt{l}}, \frac{M_{k,l} - b_l}{a_l}\right) - f\left(\frac{S_{k,l}}{\sqrt{l}}, \frac{M_{k,l} - b_l}{a_l}\right) \right| \ll \frac{E|S_l - S_{k,l}|}{\sqrt{l}} = \frac{E|S_k|}{\sqrt{l}} \leq \frac{1}{\sqrt{l}} \left(E S_k^2\right)^{1/2} \ll \left(\frac{k}{l}\right)^{1/2}. \tag{2.8}
\]

Thus, using (2.6), (2.7), and (2.8), it follows that

\[
|E(\xi_k \xi_l)| \ll \left(\frac{k}{l}\right)^{1/2} \tag{2.9}
\]

for $l > k$. Then, we have

\[
\begin{aligned}
E\left(\sum_{k=1}^{n} d_k \xi_k\right)^2 &\leq \sum_{k=1}^{n} \sum_{l=1}^{n} d_k d_l\, |E(\xi_k \xi_l)| \ll \sum_{k=1}^{n} \sum_{l=1}^{n} d_k d_l \left(\frac{\min(k,l)}{\max(k,l)}\right)^{1/2} \\
&\ll \sum_{\substack{1 \leq k \leq l \leq n \\ l/k \leq (\log n)^{4\alpha}}} d_k d_l \left(\frac{k}{l}\right)^{1/2} + \sum_{\substack{1 \leq k \leq l \leq n \\ l/k > (\log n)^{4\alpha}}} d_k d_l \left(\frac{k}{l}\right)^{1/2} \\
&:= T_1 + T_2.
\end{aligned} \tag{2.10}
\]

It is obvious that

\[
T_2 \leq \sum_{1 \leq k \leq l \leq n} d_k d_l\, (\log n)^{-2\alpha} \leq D_n^2 (\log n)^{-2\alpha}. \tag{2.11}
\]

In view of the definition of the numerical sequence $(d_n)$ and by L'Hôpital's rule, for fixed $0 < \alpha < 1/2$ we have

\[
D_n \sim \frac{1}{\alpha} (\log n)^{1-\alpha} \exp((\log n)^{\alpha}). \tag{2.12}
\]
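For completeness, (2.12) can be seen as follows: comparing the sum with an integral and substituting $u = \log x$ gives

\[
D_n = \sum_{k=1}^{n} \frac{\exp((\log k)^{\alpha})}{k} \sim \int_{1}^{n} \frac{\exp((\log x)^{\alpha})}{x}\,dx = \int_{0}^{\log n} e^{u^{\alpha}}\,du,
\]

and L'Hôpital's rule yields

\[
\lim_{T \to \infty} \frac{\int_{0}^{T} e^{u^{\alpha}}\,du}{\alpha^{-1} T^{1-\alpha} e^{T^{\alpha}}}
= \lim_{T \to \infty} \frac{e^{T^{\alpha}}}{e^{T^{\alpha}} + \alpha^{-1}(1-\alpha) T^{-\alpha} e^{T^{\alpha}}} = 1,
\]

which is (2.12) with $T = \log n$.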

Consequently,

\[
\begin{aligned}
T_1 &\leq \sum_{1 \leq k \leq n} d_k \sum_{k \leq l \leq \min(n,\, k (\log n)^{4\alpha})} \frac{\exp((\log l)^{\alpha})}{l} \\
&\leq \exp((\log n)^{\alpha}) \sum_{1 \leq k \leq n} d_k \sum_{k \leq l \leq \min(n,\, k (\log n)^{4\alpha})} \frac{1}{l} \\
&\ll D_n \exp((\log n)^{\alpha}) \log\log n \ll D_n^2\, (\log\log n)(\log n)^{\alpha - 1} \ll D_n^2 (\log n)^{-(1+\varepsilon)\alpha}
\end{aligned} \tag{2.13}
\]

for $\varepsilon > 0$ such that $\varepsilon < \min(1, (1/\alpha) - 2)$. From (2.11), (2.13), and the Markov inequality, we derive

\[
P\left( \left| \sum_{k=1}^{n} d_k \xi_k \right| \geq \varepsilon D_n \right) \ll (\log n)^{-(1+\varepsilon)\alpha} \tag{2.14}
\]

for the above $\varepsilon$ and $0 < \alpha < 1/2$. We can choose the subsequence $n_k = \exp(k^{(1-\beta)/\alpha})$, where $\beta > 0$ is such that $(1+\varepsilon)(1-\beta) > 1$; then $\sum_{k} (\log n_k)^{-(1+\varepsilon)\alpha} = \sum_{k} k^{-(1+\varepsilon)(1-\beta)} < \infty$. Then, by the Borel-Cantelli lemma, we derive

\[
\lim_{k \to \infty} \frac{1}{D_{n_k}} \sum_{j=1}^{n_k} d_j \xi_j = 0 \tag{2.15}
\]

almost surely. For $n_k \leq n < n_{k+1}$, since the $\xi_j$ are uniformly bounded, say $|\xi_j| \leq C$ (recall that $f$ is bounded), we have

\[
\left| \frac{1}{D_n} \sum_{j=1}^{n} d_j \xi_j \right| \leq \frac{1}{D_{n_k}} \left| \sum_{j=1}^{n_k} d_j \xi_j \right| + C\, \frac{D_{n_{k+1}} - D_{n_k}}{D_{n_k}}. \tag{2.16}
\]

Since $D_{n_{k+1}}/D_{n_k} \to 1$, the second term on the right-hand side tends to zero, and by (2.15) the convergence along the subsequence implies that the whole sequence converges almost surely. Hence the proof of (1.5) is completed for $0 < \alpha < 1/2$. By the same arguments, we can obtain (1.5) for $\alpha = 0$. □

Acknowledgments

The authors thank the two anonymous referees for their useful comments and Professor Zhengyan Lin of Zhejiang University, China, for his help. This work was supported by the Young Excellent Talent Foundation of Huaiyin Normal University.

References

[1] G. A. Brosamler, "An almost everywhere central limit theorem," Mathematical Proceedings of the Cambridge Philosophical Society, vol. 104, no. 3, pp. 561-574, 1988.

[2] P. Schatte, "On strong versions of the central limit theorem," Mathematische Nachrichten, vol. 137, pp. 249-256, 1988.

[3] I. Fahrner and U. Stadtmüller, "On almost sure max-limit theorems," Statistics & Probability Letters, vol. 37, no. 3, pp. 229-236, 1998.

[4] S. Cheng, L. Peng, and Y. Qi, "Almost sure convergence in extreme value theory," Mathematische Nachrichten, vol. 190, pp. 43-50, 1998.

[5] E. Csáki and K. Gonchigdanzan, "Almost sure limit theorems for the maximum of stationary Gaussian sequences," Statistics & Probability Letters, vol. 58, no. 2, pp. 195-203, 2002.

[6] S. Chen and Z. Lin, "Almost sure max-limits for nonstationary Gaussian sequence," Statistics & Probability Letters, vol. 76, no. 11, pp. 1175-1184, 2006.

[7] M. Peligrad and Q. M. Shao, "A note on the almost sure central limit theorem for weakly dependent random variables," Statistics & Probability Letters, vol. 22, no. 2, pp. 131-136, 1995.

[8] M. Dudziński, "A note on the almost sure central limit theorem for some dependent random variables," Statistics & Probability Letters, vol. 61, no. 1, pp. 31-40, 2003.

[9] Z. Peng, L. Wang, and S. Nadarajah, "Almost sure central limit theorem for partial sums and maxima," Mathematische Nachrichten, vol. 282, no. 4, pp. 632-636, 2009.

[10] M. Dudziński, "The almost sure central limit theorems in the joint version for the maxima and sums of certain stationary Gaussian sequences," Statistics & Probability Letters, vol. 78, no. 4, pp. 347-357, 2008.

[11] I. Berkes and E. Csáki, "A universal result in almost sure central limit theory," Stochastic Processes and Their Applications, vol. 94, no. 1, pp. 105-134, 2001.

[12] M. T. Lacey and W. Philipp, "A note on the almost sure central limit theorem," Statistics & Probability Letters, vol. 9, no. 3, pp. 201-205, 1990.

[13] M. R. Leadbetter, G. Lindgren, and H. Rootzén, Extremes and Related Properties of Random Sequences and Processes, Springer Series in Statistics, Springer, New York, NY, USA, 1983.

[14] T. Hsing, "A note on the asymptotic independence of the sum and maximum of strongly mixing stationary random variables," The Annals of Probability, vol. 23, no. 2, pp. 938-947, 1995.
