
Feng and Wang, Journal of Inequalities and Applications (2016) 2016:49. DOI 10.1186/s13660-016-0995-2


RESEARCH Open Access


Almost sure central limit theorem for products of sums of partial sums

Fengxiang Feng1,2* and Dingcheng Wang1

*Correspondence: fengfengxiang2013@163.com. 1School of Mathematical Science, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, P.R. China. 2College of Science, Guilin University of Technology, Guilin, Guangxi 541004, P.R. China

Abstract

For a sequence of i.i.d. positive random variables, we establish an almost sure central limit theorem for products of sums of partial sums; the theorem holds for a class of unbounded measurable functions.

MSC: 60F15

Keywords: almost sure central limit theorem; products of sums of partial sums; unbounded measurable functions


1 Introduction and main results

Let $\{X_n; n\ge1\}$ be a sequence of random variables and define $S_n=\sum_{i=1}^n X_i$. Several limit theorems for the products $\prod_{j=1}^n S_j$ have been obtained in recent years. Rempala and Wesolowski [1] obtained the following asymptotics for products of sums of a sequence of i.i.d. random variables.

Theorem A Let $\{X_n; n\ge1\}$ be a sequence of i.i.d. positive square integrable random variables with $EX_1=\mu>0$ and coefficient of variation $\gamma=\sigma/\mu$, where $\sigma^2=\mathrm{Var}(X_1)$. Then

$$\left(\frac{\prod_{k=1}^{n}S_k}{n!\,\mu^n}\right)^{1/(\gamma\sqrt{n})}\xrightarrow{d}e^{\sqrt{2}N}\quad\text{as }n\to\infty.\qquad(1.1)$$

Here and in the sequel, $N$ is a standard normal random variable and $\xrightarrow{d}$ denotes convergence in distribution.
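Taking logarithms in (1.1), the statistic $\frac{1}{\gamma\sqrt{n}}\sum_{k=1}^{n}\log\frac{S_k}{k\mu}$ should be approximately $N(0,2)$ for large $n$. The following minimal Monte Carlo sketch (an editorial illustration, not part of the original paper) checks this for Exp(1) summands, where $\mu=\sigma=\gamma=1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 2000, 300
mu = gamma = 1.0  # Exp(1): mu = sigma = 1, so gamma = sigma/mu = 1

vals = np.empty(reps)
for r in range(reps):
    x = rng.exponential(1.0, size=n)
    s = np.cumsum(x)                 # partial sums S_1, ..., S_n
    k = np.arange(1, n + 1)
    # log of (prod_k S_k / (n! mu^n))^{1/(gamma sqrt(n))}
    vals[r] = np.sum(np.log(s / (k * mu))) / (gamma * np.sqrt(n))

print(vals.mean(), vals.var())  # near 0 and 2, respectively
```

The empirical mean and variance should land near 0 and 2; the slow $O(\log n/\sqrt{n})$ bias of the log transform explains a small negative shift of the mean at finite $n$.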

Gonchigdanzan and Rempala [2] discussed the almost sure central limit theorem (ASCLT) for the products of partial sums and obtained the following result.

Theorem B Let $\{X_n; n\ge1\}$ be a sequence of i.i.d. positive random variables with $EX_1=\mu>0$, $\mathrm{Var}(X_1)=\sigma^2$, and coefficient of variation $\gamma=\sigma/\mu$. Then

$$\lim_{N\to\infty}\frac{1}{\log N}\sum_{n=1}^{N}\frac{1}{n}I\left\{\left(\frac{\prod_{k=1}^{n}S_k}{n!\,\mu^n}\right)^{1/(\gamma\sqrt{n})}\le x\right\}=F(x)\quad\text{a.s. for any }x\in\mathbb{R},\qquad(1.2)$$

where $F$ is the distribution function of the random variable $e^{\sqrt{2}N}$. Here and in the sequel, $I\{\cdot\}$ denotes the indicator function.

Tan and Peng [3] proved that the result of Theorem B still holds for a class of unbounded measurable functions and obtained the following result.

Theorem C Let $\{X_n; n\ge1\}$ be a sequence of i.i.d. positive random variables with $EX_1=\mu>0$, $\mathrm{Var}(X_1)=\sigma^2$, $E|X_1|^3<\infty$, and coefficient of variation $\gamma=\sigma/\mu$. Let $g(x)$ be a real-valued almost everywhere continuous function on $\mathbb{R}$ such that $|g(e^{\sqrt{2}x})\phi(x)|\le c(1+|x|)^{-\alpha}$ for some $c>0$ and $\alpha>5$. Then

$$\lim_{N\to\infty}\frac{1}{\log N}\sum_{n=1}^{N}\frac{1}{n}\,g\left(\left(\frac{\prod_{k=1}^{n}S_k}{n!\,\mu^n}\right)^{1/(\gamma\sqrt{n})}\right)=\int_{0}^{\infty}g(x)\,dF(x)\quad\text{a.s.},\qquad(1.3)$$

where $F(\cdot)$ is the distribution function of the random variable $e^{\sqrt{2}N}$ and $\phi(x)$ is the density function of the standard normal distribution.

Zhang et al. [4] discussed the almost sure central limit theorem for products of sums of partial sums and obtained the following result.

Theorem D Let $\{X, X_n; n\ge1\}$ be a sequence of i.i.d. positive square integrable random variables with $EX=\mu>0$, $\mathrm{Var}(X)=\sigma^2<\infty$, and coefficient of variation $\gamma=\sigma/\mu$. Denote $S_n=\sum_{i=1}^{n}X_i$, $T_k=\sum_{i=1}^{k}S_i$. Then

$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}I\left\{\left(\frac{2^k\prod_{j=1}^{k}T_j}{k!\,(k+1)!\,\mu^k}\right)^{1/(\gamma\sqrt{10k/3})}\le x\right\}=F(x)\quad\text{a.s. for any }x\in\mathbb{R},\qquad(1.4)$$

where $F(\cdot)$ is the distribution function of the random variable $e^{\sqrt{10/3}\,N}$.

In this paper we extend Theorem D to a class of unbounded measurable functions. Our main result is the following.

Theorem 1.1 Let $\{X_n; n\ge1\}$ be a sequence of i.i.d. positive random variables with $EX_1=\mu>0$, $\mathrm{Var}(X_1)=\sigma^2$, $E|X_1|^3<\infty$, and coefficient of variation $\gamma=\sigma/\mu$. Let $g(x)$ be a real-valued almost everywhere continuous function on $\mathbb{R}$ such that $|g(e^{\sqrt{10/3}\,x})\phi(x)|\le c(1+|x|)^{-\alpha}$ for some $c>0$ and $\alpha>5$. Denote $S_n=\sum_{i=1}^{n}X_i$, $T_k=\sum_{i=1}^{k}S_i$. Then

$$\lim_{N\to\infty}\frac{1}{\log N}\sum_{n=1}^{N}\frac{1}{n}\,g\left(\left(\frac{2^n\prod_{k=1}^{n}T_k}{n!\,(n+1)!\,\mu^n}\right)^{1/(\gamma\sqrt{10n/3})}\right)=\int_{0}^{\infty}g(x)\,dF(x)\quad\text{a.s.},\qquad(1.5)$$

where $F(\cdot)$ is the distribution function of the random variable $e^{\sqrt{10/3}\,N}$. Here and in the sequel, $\phi(x)$ is the density function of the standard normal distribution.

Remark 1 Let $f(x)=g(e^{\sqrt{10/3}\,x})$ and $t=e^{\sqrt{10/3}\,x}$. Then $x=\sqrt{3/10}\log t$, $g(t)=f(\sqrt{3/10}\log t)$, and, since $\prod_{k=1}^{n}k(k+1)=n!\,(n+1)!$,

$$g\left(\left(\frac{2^n\prod_{k=1}^{n}T_k}{n!\,(n+1)!\,\mu^n}\right)^{1/(\gamma\sqrt{10n/3})}\right)=g\left(\prod_{k=1}^{n}\left(\frac{T_k}{k(k+1)\mu/2}\right)^{1/(\gamma\sqrt{10n/3})}\right)=f\left(\frac{1}{\gamma\sqrt{10n/3}}\sum_{k=1}^{n}\log\frac{T_k}{k(k+1)\mu/2}\right).$$

Since $F(x)$ is the distribution function of the random variable $e^{\sqrt{10/3}\,N}$, we get $F(x)=\Phi(\sqrt{3/10}\log x)$, where $\Phi(x)$ is the distribution function of the standard normal random variable. Hence we have the following: let $f(x)=g(e^{\sqrt{10/3}\,x})$ be a real-valued almost everywhere continuous function on $\mathbb{R}$ such that $|f(x)\phi(x)|\le c(1+|x|)^{-\alpha}$ for some $c>0$ and $\alpha>5$; then (1.5) is equivalent to

$$\lim_{N\to\infty}\frac{1}{\log N}\sum_{n=1}^{N}\frac{1}{n}\,f\left(\frac{1}{\gamma\sqrt{10n/3}}\sum_{k=1}^{n}\log\frac{T_k}{k(k+1)\mu/2}\right)=\int_{-\infty}^{\infty}f(x)\phi(x)\,dx\quad\text{a.s.}\qquad(1.6)$$
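The identity behind Remark 1 is pure algebra: since $\prod_{k=1}^{n}k(k+1)=n!\,(n+1)!$, the normalization in (1.5) factors as $\frac{2^n\prod_{k=1}^{n}T_k}{n!(n+1)!\mu^n}=\prod_{k=1}^{n}\frac{T_k}{k(k+1)\mu/2}$, and $F(x)=\Phi(\sqrt{3/10}\log x)$ is immediate from $F$ being the distribution function of $e^{\sqrt{10/3}N}$. A quick numerical spot-check of both claims (an editorial illustration, not from the paper):

```python
import math
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
n, mu = 12, 2.0
x = rng.exponential(mu, size=n)  # any positive sample works: the identity is algebraic
s = np.cumsum(x)                 # partial sums S_k
t = np.cumsum(s)                 # sums of partial sums T_k
k = np.arange(1, n + 1)

lhs = 2.0**n * np.prod(t) / (math.factorial(n) * math.factorial(n + 1) * mu**n)
rhs = np.prod(t / (k * (k + 1) * mu / 2.0))

# F(x) = P(e^{sqrt(10/3) N} <= x) versus Phi(sqrt(3/10) log x)
z = rng.standard_normal(200_000)
xq = 1.7
emp = np.mean(np.exp(math.sqrt(10 / 3) * z) <= xq)
theo = NormalDist().cdf(math.sqrt(3 / 10) * math.log(xq))
print(lhs, rhs, emp, theo)  # lhs == rhs up to rounding; emp close to theo
```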

Remark 2 By the proof of Theorem 2 of Berkes et al. [5], in order to prove (1.5) it suffices to show that (1.6) holds for $f(x)$ with $f(x)\phi(x)=(1+|x|)^{-\alpha}$, $\alpha>5$. Here and in the sequel, $f(x)$ satisfies $f(x)\phi(x)=(1+|x|)^{-\alpha}$ with $\alpha>5$.

2 Preliminaries

In the following, the notation $a_n\sim b_n$ means that $\lim_{n\to\infty}a_n/b_n=1$, and $a_n\ll b_n$ means that $\limsup_{n\to\infty}|a_n/b_n|<+\infty$. We denote

$$b_{k,n}=\sum_{j=k}^{n}\frac{1}{j},\qquad d_{k,n}=\sum_{j=k}^{n}\frac{k}{j(j+1)},\qquad c_{k,n}=\sum_{j=k}^{n}\frac{2(j+1-k)}{j(j+1)},$$

$$\hat{X}_i=\frac{X_i-\mu}{\sigma},\qquad \hat{S}_k=\sum_{i=1}^{k}\hat{X}_i,\qquad \hat{S}_{k,n}=\sum_{i=1}^{k}c_{i,n}\hat{X}_i.$$

By Lemma 2.1 of Wu [6], we can get

$$c_{i,n}=2(b_{i,n}-d_{i,n}),\qquad \sum_{i=1}^{n}c_{i,n}^2\sim\frac{10n}{3}.$$
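With $b_{k,n}=\sum_{j=k}^{n}\frac{1}{j}$ and $d_{k,n}=\sum_{j=k}^{n}\frac{k}{j(j+1)}$ (the definitions as reconstructed from context), the identity $c_{k,n}=2(b_{k,n}-d_{k,n})$ is just $\frac{j+1-k}{j(j+1)}=\frac{1}{j}-\frac{k}{j(j+1)}$ summed over $j$; the telescoping sum $\sum_{j=k}^{n}\frac{1}{j(j+1)}=\frac{1}{k}-\frac{1}{n+1}$ gives $d_{k,n}=1-\frac{k}{n+1}$; and the constant in $\sum_{i=1}^{n}c_{i,n}^2\sim\frac{10n}{3}$ comes from $4\int_0^1(\log\frac{1}{u}-1+u)^2\,du=\frac{10}{3}$. A numerical confirmation (an editorial illustration, not from the paper):

```python
import numpy as np

n = 20000
j = np.arange(1, n + 1)

def tail_sums(a):
    # tail_sums(a)[k-1] = a[k-1] + ... + a[n-1], i.e. the sum over j = k..n
    return np.cumsum(a[::-1])[::-1]

b = tail_sums(1.0 / j)                     # b_{k,n}
d = j * tail_sums(1.0 / (j * (j + 1)))     # d_{k,n}
c = 2.0 * (b - d)                          # c_{k,n} = 2(b_{k,n} - d_{k,n})

# direct definition c_{k,n} = sum_{j=k}^n 2(j+1-k)/(j(j+1)) for a few k
for k in (1, 7, 1234):
    jj = np.arange(k, n + 1)
    assert abs(np.sum(2.0 * (jj + 1 - k) / (jj * (jj + 1.0))) - c[k - 1]) < 1e-10

print(np.sum(c**2) / (10 * n / 3))  # close to 1, as in Lemma 2.1 of Wu [6]
```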

Set

$$Y_i=\frac{1}{\gamma\sqrt{10i/3}}\sum_{k=1}^{i}\log\frac{T_k}{k(k+1)\mu/2}.$$

Note that

$$\frac{1}{\gamma}\sum_{k=1}^{i}\left(\frac{T_k}{k(k+1)\mu/2}-1\right)=\frac{1}{\gamma}\sum_{k=1}^{i}\frac{2\sum_{j=1}^{k}S_j-k(k+1)\mu}{k(k+1)\mu}=\frac{1}{\gamma}\sum_{k=1}^{i}\frac{2}{k(k+1)\mu}\sum_{l=1}^{k}(k+1-l)(X_l-\mu)$$

$$=\sum_{k=1}^{i}\sum_{l=1}^{k}\frac{2(k+1-l)}{k(k+1)}\cdot\frac{X_l-\mu}{\sigma}=\sum_{l=1}^{i}\sum_{k=l}^{i}\frac{2(k+1-l)}{k(k+1)}\,\hat{X}_l=\sum_{l=1}^{i}c_{l,i}\hat{X}_l=\hat{S}_{i,i},$$

where we used $T_k=\sum_{j=1}^{k}S_j=\sum_{l=1}^{k}(k+1-l)X_l$ and $\sum_{l=1}^{k}(k+1-l)=\frac{k(k+1)}{2}$.
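The rearrangement above is a finite interchange of the order of summation and can be verified mechanically on data. The following check (an editorial illustration, not from the paper) computes both sides of $\frac{1}{\gamma}\sum_{k=1}^{i}(\frac{T_k}{k(k+1)\mu/2}-1)=\sum_{l=1}^{i}c_{l,i}\hat{X}_l$ for an Exp(3) sample, using the true $\mu=\sigma=3$:

```python
import numpy as np

rng = np.random.default_rng(2)
i, mu, sigma = 40, 3.0, 3.0      # Exp(3): mean 3, standard deviation 3
gamma = sigma / mu
x = rng.exponential(mu, size=i)

s = np.cumsum(x)                  # S_k
t = np.cumsum(s)                  # T_k = S_1 + ... + S_k
k = np.arange(1, i + 1)

lhs = np.sum(t / (k * (k + 1) * mu / 2.0) - 1.0) / gamma

xhat = (x - mu) / sigma           # standardized summands
c = np.array([np.sum(2.0 * (np.arange(l, i + 1) + 1 - l)
                     / (np.arange(l, i + 1) * (np.arange(l, i + 1) + 1.0)))
              for l in k])        # weights c_{l,i}
rhs = np.sum(c * xhat)            # \hat S_{i,i}
print(lhs, rhs)                   # identical up to rounding
```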

By the expansion $\log(1+x)=x+\delta x^2$ with $\delta=\delta(x)\in(-1,0)$ for $|x|<\frac{1}{2}$ (and $|\frac{T_k}{k(k+1)\mu/2}-1|<\frac{1}{2}$ eventually a.s.), we have

$$Y_i=\frac{1}{\gamma\sqrt{10i/3}}\sum_{k=1}^{i}\log\frac{T_k}{k(k+1)\mu/2}=\frac{1}{\gamma\sqrt{10i/3}}\sum_{k=1}^{i}\left(\frac{T_k}{k(k+1)\mu/2}-1\right)+\frac{1}{\gamma\sqrt{10i/3}}\sum_{k=1}^{i}\delta_k\left(\frac{T_k}{k(k+1)\mu/2}-1\right)^2$$

$$=:\frac{\hat{S}_{i,i}}{\sqrt{10i/3}}+R_i,$$

where $\delta_k=\delta\big(\frac{T_k}{k(k+1)\mu/2}-1\big)$.

By the fact that $E|X_1|^2<\infty$, using the Marcinkiewicz-Zygmund strong law of large numbers, we have $S_k-k\mu=o(k^{1/2})$ a.s., and hence

$$\left|\frac{T_k}{k(k+1)\mu/2}-1\right|=\frac{2\,|\sum_{j=1}^{k}(S_j-j\mu)|}{k(k+1)\mu}\ll\frac{\sum_{j=1}^{k}j^{1/2}}{k^2}\ll\frac{k^{3/2}}{k^2}=\frac{1}{k^{1/2}}\quad\text{a.s.}$$

Therefore

$$|R_i|\ll\frac{1}{\sqrt{i}}\sum_{k=1}^{i}\left(\frac{T_k}{k(k+1)\mu/2}-1\right)^2\ll\frac{1}{\sqrt{i}}\sum_{k=1}^{i}\frac{1}{k}\ll\frac{\log i}{\sqrt{i}}\quad\text{a.s.}\qquad(2.1)$$

In order to prove Theorem 1.1, we introduce the following lemmas.

Lemma 2.1 Let $X$ and $Y$ be random variables. Set $F(x)=P(X\le x)$, $G(x)=P(X+Y\le x)$. Then for any $\varepsilon>0$ and $x\in\mathbb{R}$,

$$F(x-\varepsilon)-P(|Y|>\varepsilon)\le G(x)\le F(x+\varepsilon)+P(|Y|>\varepsilon).$$

Proof See Lemma 3 on p.16 of Petrov [7]. □
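Lemma 2.1 is a deterministic sandwich between the distribution functions of $X$ and $X+Y$. A quick empirical sanity check (an editorial illustration with arbitrary choices of $X$, $Y$, $\varepsilon$, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x = rng.standard_normal(n)          # X
y = 0.2 * rng.standard_normal(n)    # a small perturbation Y
eps, q = 0.5, 0.3                   # epsilon and the evaluation point x

F = lambda t: np.mean(x <= t)       # empirical distribution function of X
G = np.mean(x + y <= q)             # empirical distribution function of X + Y at q
p_tail = np.mean(np.abs(y) > eps)   # P(|Y| > eps)

print(bool(F(q - eps) - p_tail <= G <= F(q + eps) + p_tail))
```

The margins here are large (of order 0.2) compared to the Monte Carlo noise, so the empirical version of the inequality holds comfortably.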

Lemma 2.2 Let $\{X_n; n\ge1\}$ be a sequence of i.i.d. random variables. Denote $S_n=\sum_{i=1}^{n}X_i$, let $F^s$ denote the distribution function obtained from $F$ by symmetrization, and choose $L>0$ so large that $\int_{|x|<L}x^2\,dF^s(x)>1$. Then there exists a $c>0$ such that

$$\sup_a P\left(a<\frac{S_n}{\sqrt{n}}\le a+\lambda\right)\le c\lambda$$

holds for all $n\ge1$ and $\lambda>0$ with $\lambda\sqrt{n}\ge L$.

Proof See (20) on p.73 of Berkes et al. [5]. □

Set

$$Z_k=\sum_{i=2^k+1}^{2^{k+1}}\frac{1}{i}f(Y_i),\qquad Z_k^*=\sum_{i=2^k+1}^{2^{k+1}}\frac{1}{i}f(Y_i)I\left\{f(Y_i)\le\frac{k}{(\log k)^\beta}\right\},$$

where $1\le\beta<(\alpha-3)/2$.
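The truncation level $\frac{k}{(\log k)^\beta}$ corresponds, through $f(x)\phi(x)=(1+|x|)^{-\alpha}$, to a threshold $|Y_i|>f^{-1}(k/(\log k)^\beta)$ of order $\sqrt{2\log k}$; the proof of Lemma 2.3 below uses the lower bound $f^{-1}(k/(\log k)^\beta)^2\ge 2\log k+(\alpha-2\beta)\log\log k-O(1)$. A numerical check of this bound (an editorial illustration for the admissible pair $\alpha=6$, $\beta=1$, not from the paper):

```python
import math

alpha, beta = 6.0, 1.0   # admissible: alpha > 5 and 1 <= beta < (alpha - 3)/2

def log_f(x):
    # log of f(x), where f is defined by f(x) * phi(x) = (1 + |x|)^(-alpha)
    return 0.5 * math.log(2 * math.pi) - alpha * math.log1p(abs(x)) + 0.5 * x * x

def f_inv(t):
    # bisection on [3, 60]; f is increasing there for alpha = 6
    target, lo, hi = math.log(t), 3.0, 60.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if log_f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

gaps = []
for k in (10**3, 10**5, 10**7):
    x = f_inv(k / math.log(k) ** beta)
    gaps.append(x * x - 2 * math.log(k) - (alpha - 2 * beta) * math.log(math.log(k)))
print([round(g, 2) for g in gaps])  # stays bounded: the O(1) term in the text
```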

Lemma 2.3 Under the conditions of Theorem 1.1, we get

$$P(Z_k\ne Z_k^*\ \text{i.o.})=0.$$

Proof It is easy to get

$$\{Z_k\ne Z_k^*\}\subset\left\{f(Y_i)>\frac{k}{(\log k)^\beta}\ \text{for some }2^k<i\le2^{k+1}\right\}=\left\{|Y_i|>f^{-1}\left(\frac{k}{(\log k)^\beta}\right)\ \text{for some }2^k<i\le2^{k+1}\right\}.$$

Recall that $Y_i=\frac{\hat{S}_{i,i}}{\sqrt{10i/3}}+R_i$, and note that for $2^k<i\le2^{k+1}$,

$$f^{-1}\left(\frac{k}{(\log k)^\beta}\right)\ge\big(2\log k+(\alpha-2\beta)\log\log k-O(1)\big)^{1/2}\ge\big(2\log\log i+(\alpha-2\beta)\log\log\log i-O(1)\big)^{1/2}.$$

Since $|R_i|\ll\frac{\log i}{\sqrt{i}}$ a.s. (see (2.1)), by the law of the iterated logarithm (Feller [8], Theorem 2), we get

$$P(Z_k\ne Z_k^*\ \text{i.o.})\le P\left(\frac{|\hat{S}_{i,i}|}{\sqrt{10i/3}}>\big(2\log\log i+(\alpha-2\beta)\log\log\log i-O(1)\big)^{1/2}\ \text{i.o.}\right)=0.$$

We complete the proof of Lemma 2.3. □

Let $G_i$, $F_i$, $F$ denote the distribution functions of $Y_i$, $\frac{\hat{S}_{i,i}}{\sqrt{10i/3}}$, $\hat{X}_1$, respectively, and let $\Phi$ denote the distribution function of the standard normal distribution. Set

$$\sigma_i^2=\int_{-\sqrt{i}}^{\sqrt{i}}x^2\,dF(x)-\left(\int_{-\sqrt{i}}^{\sqrt{i}}x\,dF(x)\right)^2,$$

$$\delta_i=\sup_x\left|F_i(x)-\Phi\left(\frac{x}{\sigma_i}\right)\right|,\qquad \theta_i=\sup_x\left|G_i(x)-\Phi\left(\frac{x}{\sigma_i}\right)\right|.$$

Obviously $\sigma_i\le1$ and $\lim_{i\to\infty}\sigma_i=1$.

Lemma 2.4 Under the conditions of Theorem 1.1, we have

$$\sum_{k=1}^{N}E(Z_k^*)^2\ll\frac{N^2}{(\log N)^{2\beta}}.$$

Proof Note that the estimate

$$\left|\int\psi(x)\,d\big(H_1(x)-H_2(x)\big)\right|\ll\sup_x|\psi(x)|\cdot\sup_x|H_1(x)-H_2(x)|\qquad(2.2)$$

holds for any bounded measurable function $\psi(x)$ and distribution functions $H_1(x)$, $H_2(x)$. Thus for $2^k<i\le2^{k+1}$, we get

$$Ef^2(Y_i)I\left\{f(Y_i)\le\frac{k}{(\log k)^\beta}\right\}=\int_{|x|\le a_k}f^2(x)\,dG_i(x)\ll\int_{|x|\le a_k}f^2(x)\,d\Phi\left(\frac{x}{\sigma_i}\right)+\theta_i\frac{k^2}{(\log k)^{2\beta}}$$

$$\ll\int_{|x|\le a_k}f^2(x)\,d\Phi(x)+\theta_i\frac{k^2}{(\log k)^{2\beta}};$$

here and in the sequel $a_k=f^{-1}\big(\frac{k}{(\log k)^\beta}\big)$. Hence, by the Cauchy-Schwarz inequality and the fact that $f(x)\phi(x)=(1+|x|)^{-\alpha}$, we obtain

$$E(Z_k^*)^2\ll\left(\sum_{i=2^k+1}^{2^{k+1}}\frac{1}{i}\right)\sum_{i=2^k+1}^{2^{k+1}}\frac{1}{i}Ef^2(Y_i)I\left\{f(Y_i)\le\frac{k}{(\log k)^\beta}\right\}$$

$$\ll\sum_{i=2^k+1}^{2^{k+1}}\frac{1}{i}\left(\int_{|x|\le a_k}f^2(x)\,d\Phi(x)+\theta_i\frac{k^2}{(\log k)^{2\beta}}\right)\ll\int_{|x|\le a_k}\frac{dx}{(1+|x|)^{2\alpha}\phi(x)}+\frac{k^2}{(\log k)^{2\beta}}\sum_{i=2^k+1}^{2^{k+1}}\frac{\theta_i}{i}.$$

By the same method as that on p.72 of Berkes et al. [5], we get

$$\int_{|x|\le a_k}\frac{dx}{(1+|x|)^{2\alpha}\phi(x)}\ll\frac{k}{(\log k)^{\beta+(\alpha+1)/2}}.$$

Now we estimate $\theta_i$. We have

$$\theta_i=\sup_x\left|G_i(x)-\Phi\left(\frac{x}{\sigma_i}\right)\right|\le\sup_x|G_i(x)-F_i(x)|+\sup_x\left|F_i(x)-\Phi\left(\frac{x}{\sigma_i}\right)\right|=\sup_x|G_i(x)-F_i(x)|+\delta_i.$$

Since $Y_i=\frac{\hat{S}_{i,i}}{\sqrt{10i/3}}+R_i$, Lemma 2.1 gives, for any $\varepsilon>0$,

$$\sup_x|G_i(x)-F_i(x)|=\sup_x\left|P(Y_i\le x)-P\left(\frac{\hat{S}_{i,i}}{\sqrt{10i/3}}\le x\right)\right|\le P\big(|R_i|>\varepsilon\big)+\sup_xP\left(x<\frac{\hat{S}_{i,i}}{\sqrt{10i/3}}\le x+\varepsilon\right).$$

By the Markov inequality and (2.1), we have

$$P\big(|R_i|>\varepsilon\big)\le\frac{E|R_i|}{\varepsilon}\ll\frac{\log i}{\varepsilon\, i^{1/2}}.$$

By Lemma 2.2, we have

$$\sup_xP\left(x<\frac{\hat{S}_{i,i}}{\sqrt{10i/3}}\le x+\varepsilon\right)\ll\varepsilon,$$

and by the Berry-Esseen inequality (using $E|X_1|^3<\infty$),

$$\sup_x\left|P\left(\frac{\hat{S}_{i,i}}{\sqrt{10i/3}}\le x\right)-\Phi(x)\right|\ll\frac{1}{i^{1/2}},\qquad\sup_x\left|P\left(\frac{\hat{S}_i}{\sqrt{i}}\le x\right)-\Phi(x)\right|\ll\frac{1}{i^{1/2}}.$$

Let $\varepsilon=i^{-1/3}$; then

$$\theta_i\ll\frac{\log i}{i^{1/6}}+\frac{1}{i^{1/3}}+\frac{1}{i^{1/2}}+\delta_i.$$

Therefore there exists an $\varepsilon_0>0$ such that

$$\theta_i\ll\frac{1}{i^{\varepsilon_0}}+\delta_i.$$

By Theorem 1 of Friedman et al. [9], we have $\sum_i\frac{\delta_i}{i}<\infty$, and hence

$$\sum_i\frac{\theta_i}{i}\ll\sum_i\frac{i^{-\varepsilon_0}+\delta_i}{i}<\infty.$$

By the fact that $(\alpha+1)/2>\beta$ and $\sum_i\frac{\theta_i}{i}<\infty$, we have

$$\sum_{k=1}^{N}E(Z_k^*)^2\ll\sum_{k=1}^{N}\frac{k}{(\log k)^{\beta+(\alpha+1)/2}}+\sum_{k=1}^{N}\frac{k^2}{(\log k)^{2\beta}}\sum_{i=2^k+1}^{2^{k+1}}\frac{\theta_i}{i}\ll\frac{N^2}{(\log N)^{2\beta}}.$$

We complete the proof of Lemma 2.4. □

Lemma 2.5 Under the conditions of Theorem 1.1, for $l>l_0$ we have

$$\big|\mathrm{Cov}(Z_k^*,Z_l^*)\big|\ll\frac{kl}{(\log k)^\beta(\log l)^\beta}\,2^{-(l-k)\tau},$$

where $\tau$ is a constant with $0<\tau<1/8$.

Proof For $1\le i\le j/2$, $j\ge j_0$, and any $x,y$, we first prove that, with $\rho:=i/j$,

$$\big|P(Y_i\le x,Y_j\le y)-P(Y_i\le x)P(Y_j\le y)\big|\ll\rho^\tau.\qquad(2.3)$$

By the Chebyshev inequality, we have

$$P\left(\frac{|\hat{S}_{i,i}|}{\sqrt{10j/3}}>\rho^{1/8}\right)=P\left(\frac{|\hat{S}_{i,i}|}{\sqrt{10i/3}}>\sqrt{\frac{j}{i}}\,\rho^{1/8}\right)\ll\frac{i}{j}\,\rho^{-1/4}=\rho^{3/4}\ll\rho^{\tau_1},$$

where $\tau_1$ is a constant with $0<\tau_1<1/8$. By the Markov inequality and (2.1), for $j\ge j_0$ we have

$$P\big(|R_j|>\rho^{1/8}\big)\le\rho^{-1/8}E|R_j|\ll\rho^{-1/8}\,\frac{\log j}{j^{1/2}}\ll\rho^{\tau_2},$$

where $\tau_2$ is a constant with $0<\tau_2<1/8$. Noting that, for $l\le i$, $c_{l,j}-c_{l,i}=c_{i+1,j}+2(i+1-l)\big(\frac{1}{i+1}-\frac{1}{j+1}\big)$, so that $\hat{S}_{i,j}-\hat{S}_{i,i}=c_{i+1,j}\hat{S}_i+2\big(\frac{1}{i+1}-\frac{1}{j+1}\big)\sum_{l=1}^{i}(i+1-l)\hat{X}_l$ and $E(\hat{S}_{i,j}-\hat{S}_{i,i})^2\ll i\big(\log^2\frac{j}{i}+1\big)$, the Chebyshev inequality also gives

$$P\left(\frac{|\hat{S}_{i,j}-\hat{S}_{i,i}|}{\sqrt{10(j-i)/3}}>\rho^{1/8}\right)\ll\rho^{-1/4}\,\frac{i\big(\log^2\frac{1}{\rho}+1\big)}{j-i}\ll\rho^{3/4}\log^2\frac{1}{\rho}\ll\rho^{\tau_3},$$

where $\tau_3$ is a constant with $0<\tau_3<1/8$. By Lemma 2.2 and the fact that $\rho=i/j$ with $1\le i\le j/2$, we have

$$\sup_yP\left(y-3\rho^{1/8}<\sqrt{1-\rho}\,\frac{\hat{S}_{j,j}-\hat{S}_{i,j}}{\sqrt{10(j-i)/3}}\le y\right)\ll\frac{\rho^{1/8}}{\sqrt{1-\rho}}\ll\rho^{1/8}.$$

Set $\tau=\min\{\tau_1,\tau_2,\tau_3,1/8\}$. Write

$$Y_j=\frac{\hat{S}_{i,j}}{\sqrt{10j/3}}+\sqrt{1-\rho}\,\frac{\hat{S}_{j,j}-\hat{S}_{i,j}}{\sqrt{10(j-i)/3}}+R_j,$$

and note that $\hat{S}_{j,j}-\hat{S}_{i,j}=\sum_{l=i+1}^{j}c_{l,j}\hat{X}_l$ is independent of $X_1,\dots,X_i$, while, by the three estimates above, each of the remaining terms exceeds $\rho^{1/8}$ in absolute value only with probability $\ll\rho^\tau$. Hence

$$P(Y_i\le x,Y_j\le y)\ge P\left(Y_i\le x,\ \sqrt{1-\rho}\,\frac{\hat{S}_{j,j}-\hat{S}_{i,j}}{\sqrt{10(j-i)/3}}\le y-3\rho^{1/8}\right)-C\rho^\tau$$

$$\ge P(Y_i\le x)\,P\left(\sqrt{1-\rho}\,\frac{\hat{S}_{j,j}-\hat{S}_{i,j}}{\sqrt{10(j-i)/3}}\le y\right)-P\left(y-3\rho^{1/8}<\sqrt{1-\rho}\,\frac{\hat{S}_{j,j}-\hat{S}_{i,j}}{\sqrt{10(j-i)/3}}\le y\right)-C\rho^\tau,$$

and a matching upper estimate holds in the same way. Thus there exists a constant $M$ with $|M|\ll1$ such that

$$P(Y_i\le x,Y_j\le y)=P(Y_i\le x)\,P\left(\sqrt{1-\rho}\,\frac{\hat{S}_{j,j}-\hat{S}_{i,j}}{\sqrt{10(j-i)/3}}\le y\right)+M\rho^\tau.$$

By a similar argument,

$$P(Y_i\le x)P(Y_j\le y)=P(Y_i\le x)\,P\left(\sqrt{1-\rho}\,\frac{\hat{S}_{j,j}-\hat{S}_{i,j}}{\sqrt{10(j-i)/3}}\le y\right)+M'\rho^\tau$$

holds for some constant $M'$ with $|M'|\ll1$. Thus we prove that (2.3) holds.

Let $G_{i,j}(x,y)$ be the joint distribution function of $Y_i$ and $Y_j$. By (2.2) and (2.3), for $2^k<i\le2^{k+1}$, $2^l<j\le2^{l+1}$, $l-k\ge3$, $l>l_0$, we can get

$$\left|\mathrm{Cov}\left(f(Y_i)I\left\{f(Y_i)\le\frac{k}{(\log k)^\beta}\right\},\,f(Y_j)I\left\{f(Y_j)\le\frac{l}{(\log l)^\beta}\right\}\right)\right|$$

$$=\left|\int_{|x|\le a_k}\int_{|y|\le a_l}f(x)f(y)\,d\big(G_{i,j}(x,y)-G_i(x)G_j(y)\big)\right|\ll\frac{kl}{(\log k)^\beta(\log l)^\beta}\left(\frac{i}{j}\right)^\tau\ll\frac{kl}{(\log k)^\beta(\log l)^\beta}\,2^{-(l-k-1)\tau}.$$

Summing over the two blocks with the weights $\frac{1}{ij}$, we thus have

$$\big|\mathrm{Cov}(Z_k^*,Z_l^*)\big|\ll\frac{kl}{(\log k)^\beta(\log l)^\beta}\,2^{-(l-k)\tau}.$$

We complete the proof of Lemma 2.5. □

Lemma 2.6 Under the conditions of Theorem 1.1, denoting $\eta_k=Z_k^*-EZ_k^*$, we have

$$E\left(\sum_{k=1}^{N}\eta_k\right)^2=O\left(\frac{N^2}{(\log N)^{2\beta-1}}\right).$$

Proof Lemma 2.6 follows from Lemma 2.4 and Lemma 2.5. The proof is similar to that of Lemma 4 of Berkes et al. [5], so we omit it here. □

3 Proof of theorem

By Lemma 2.6, we have

$$E\left(\frac{1}{N}\sum_{k=1}^{N}\eta_k\right)^2=O\big((\log N)^{1-2\beta}\big).$$

Letting $N_k=[e^{k^\lambda}]$ with $(2\beta-1)^{-1}<\lambda<1$, we get

$$\sum_{k=1}^{\infty}E\left(\frac{1}{N_k}\sum_{j=1}^{N_k}\eta_j\right)^2\ll\sum_{k=1}^{\infty}k^{\lambda(1-2\beta)}<\infty,$$

which implies

$$\lim_{k\to\infty}\frac{1}{N_k}\sum_{j=1}^{N_k}\eta_j=0\quad\text{a.s.}\qquad(3.1)$$

Note that for $2^k<i\le2^{k+1}$,

$$Ef(Y_i)I\left\{f(Y_i)\le\frac{k}{(\log k)^\beta}\right\}=\int_{|x|\le a_k}f(x)\,dG_i(x)=\int_{|x|\le a_k}f(x)\,d\Phi\left(\frac{x}{\sigma_i}\right)+\int_{|x|\le a_k}f(x)\,d\left(G_i(x)-\Phi\left(\frac{x}{\sigma_i}\right)\right).\qquad(3.2)$$

Set $a=\int_{-\infty}^{\infty}f(x)\,d\Phi(x)$. Noting that $\sigma_i\le1$ and $\lim_{i\to\infty}\sigma_i=1$, we have

$$\lim_{k\to\infty}\sup_{2^k<i\le2^{k+1}}\left|\int_{|x|\le a_k}f(x)\,d\Phi\left(\frac{x}{\sigma_i}\right)-a\right|=0.\qquad(3.3)$$

Then by (3.2), (3.3), and (2.2) we get

$$\left|Ef(Y_i)I\left\{f(Y_i)\le\frac{k}{(\log k)^\beta}\right\}-a\right|\le o_k(1)+\frac{k}{(\log k)^\beta}\theta_i,$$

and hence

$$EZ_k^*=a\sum_{i=2^k+1}^{2^{k+1}}\frac{1}{i}+\zeta_k\frac{k}{(\log k)^\beta}\sum_{i=2^k+1}^{2^{k+1}}\frac{\theta_i}{i}+o_k(1),\qquad|\zeta_k|\le1.$$

Using $\sum_{i=1}^{L}\frac{1}{i}=\log L+O(1)$ and $\sum_i\frac{\theta_i}{i}<\infty$, we get

$$\left|\sum_{k=1}^{N}EZ_k^*-a\log2^{N+1}\right|\ll\sum_{k=1}^{N}\frac{k}{(\log k)^\beta}\sum_{i=2^k+1}^{2^{k+1}}\frac{\theta_i}{i}+o(N)=O\left(\frac{N}{(\log N)^\beta}\right)+o(N)=o(N).$$

Thus by (3.1), we get

$$\lim_{k\to\infty}\frac{\sum_{j=1}^{N_k}Z_j^*}{\log2^{N_k+1}}=a\quad\text{a.s.}\qquad(3.4)$$

Then by Lemma 2.3, we have

$$\lim_{k\to\infty}\frac{\sum_{j=1}^{N_k}Z_j}{\log2^{N_k+1}}=a\quad\text{a.s.}$$

The relation $\lambda<1$ implies $\lim_{k\to\infty}N_{k+1}/N_k=1$; thus (3.4) and the positivity of the $Z_k$ yield

$$\lim_{N\to\infty}\frac{\sum_{k=1}^{N}Z_k}{\log2^{N+1}}=a\quad\text{a.s.},$$

i.e. (1.6) holds for the subsequence $N=2^k$. Using again the positivity of the terms, we get (1.6) for the full sequence. We complete the proof of Theorem 1.1. □

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

FF conceived of the study and drafted and completed the manuscript. DW participated in the discussion of the manuscript. FF and DW read and approved the final manuscript.

Acknowledgements

The authors would like to thank the editor and the referees for their valuable comments, which have improved the quality of the paper. This research is supported by the National Natural Science Foundation of China (71271042) and the Guangxi China Science Foundation (2013GXNSFAA278003). It is also supported by the Research Project of Guangxi High Institution (YB2014150).

Received: 8 April 2015. Accepted: 1 February 2016. Published online: 9 February 2016.

References

1. Rempala, G, Wesolowski, J: Asymptotics for products of sums and U-statistics. Electron. Commun. Probab. 7, 47-54 (2002)

2. Gonchigdanzan, K, Rempala, G: A note on the almost sure limit theorem for the product of partial sums. Appl. Math. Lett. 19, 191-196 (2006)

3. Tan, ZQ, Peng, ZX: Almost sure central limit theorem for the product of partial sums. Acta Math. Sci. 29, 1689-1698 (2009)

4. Zhang, Y, Yang, XY, Dong, ZS: An almost sure central limit theorem for products of sums of partial sums under association. J. Math. Anal. Appl. 355, 708-716 (2009)

5. Berkes, I, Csaki, E, Horvath, L: Almost sure central limit theorems under minimal conditions. Stat. Probab. Lett. 37, 67-76 (1998)

6. Wu, QY: Almost sure central limit theory for products of sums of partial sums. Appl. Math. J. Chin. Univ. Ser. B 27, 169-180 (2012)

7. Petrov, V: Sums of Independent Random Variables. Springer, New York (1975)

8. Feller, W: The law of the iterated logarithm for identically distributed random variables. Ann. Math. 47, 631-638 (1946)

9. Friedman, N, Katz, M, Koopmans, LH: Convergence rates for the central limit theorem. Proc. Natl. Acad. Sci. USA 56, 1062-1065 (1966)
