Hindawi Publishing Corporation, Journal of Inequalities and Applications, Volume 2010, Article ID 372390, 13 pages, doi:10.1155/2010/372390

Research Article

On Complete Convergence for Weighted Sums of $\varphi$-Mixing Random Variables

Wang Xuejun, Hu Shuhe, Yang Wenzhi, and Shen Yan

School of Mathematical Science, Anhui University, Hefei 230039, China

Correspondence should be addressed to Hu Shuhe, hushuhe@263.net

Received 4 February 2010; Revised 8 May 2010; Accepted 11 June 2010

Academic Editor: Andrei Volodin

Copyright © 2010 Wang Xuejun et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Some results on complete convergence for weighted sums are presented, where $\{X_n, n\ge 1\}$ is a sequence of $\varphi$-mixing random variables and $\{a_{ni}, n\ge 1, i\ge 1\}$ is an array of constants. They generalize the corresponding results for i.i.d. sequences to the case of $\varphi$-mixing sequences.

1. Introduction

Let $\{X_n, n\ge 1\}$ be a sequence of random variables defined on a fixed probability space $(\Omega, \mathcal{F}, P)$. Let $n$ and $m$ be positive integers. Write $\mathcal{F}_n^m=\sigma(X_i,\, n\le i\le m)$. Given $\sigma$-algebras $\mathcal{B}, \mathcal{R}$ in $\mathcal{F}$, let

\[
\varphi(\mathcal{B},\mathcal{R})=\sup_{A\in\mathcal{B},\,B\in\mathcal{R},\,P(A)>0}\bigl|P(B\mid A)-P(B)\bigr|. \tag{1.1}
\]

Define the $\varphi$-mixing coefficients by
\[
\varphi(n)=\sup_{k\ge 1}\varphi\bigl(\mathcal{F}_1^{k},\mathcal{F}_{k+n}^{\infty}\bigr),\qquad n\ge 0. \tag{1.2}
\]

Definition 1.1. A random variable sequence $\{X_n, n\ge 1\}$ is said to be a $\varphi$-mixing random variable sequence if $\varphi(n)\downarrow 0$ as $n\to\infty$.
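
For example, if $\{X_n, n\ge 1\}$ is an $m$-dependent sequence (in particular, an independent sequence), then $\mathcal{F}_1^{k}$ and $\mathcal{F}_{k+n}^{\infty}$ are independent whenever $n>m$, so that
\[
\varphi(n)=0\quad\text{for every } n>m,
\]
and hence $\{X_n, n\ge 1\}$ is $\varphi$-mixing with $\sum_{n=1}^{\infty}\varphi^{1/2}(n)\le m<\infty$.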

$\varphi$-mixing random variables were introduced by Dobrushin [1] and many applications have been found. See, for example, Dobrushin [1], Utev [2], and Chen [3] for the central limit theorem, Herrndorf [4] and Peligrad [5] for the weak invariance principle, Sen [6, 7] for weak convergence of empirical processes, Shao [8] for almost sure invariance principles, and Hu and Wang [9] for large deviations. Compared with the corresponding results for sequences of independent random variables, however, much still remains to be done for $\varphi$-mixing sequences.

Throughout the paper, let $I(A)$ be the indicator function of the set $A$. We assume that $\phi(x)$ is a positive increasing function on $(0,\infty)$ satisfying $\phi(x)\uparrow\infty$ as $x\to\infty$, and that $\psi(x)$ is the inverse function of $\phi(x)$. Since $\phi(x)\uparrow\infty$, it follows that $\psi(x)\uparrow\infty$. For ease of notation, we let $\phi(0)=0$ and $\psi(0)=0$. The relation $a_n=O(b_n)$ means that there exists a positive constant $C$ such that $|a_n/b_n|\le C$; $C$ denotes a positive constant which may be different in various places.

Let $\{X, X_n, n\ge 1\}$ be a sequence of i.i.d. random variables and let $\{a_{ni}, n\ge 1, i\ge 1\}$ be an array of constants. The almost sure limiting behavior of the weighted sums $\sum_{i=1}^{n}a_{ni}X_i$ was studied by many authors; see, for example, Choi and Sung [10], Cuzick [11], Wu [12], and Sung [13, 14].

The main purpose of this paper is to extend the complete convergence for weighted sums $\sum_{i=1}^{n}a_{ni}X_i$ of i.i.d. random variables to the case of $\varphi$-mixing random variables.

Definition 1.2. A sequence $\{X_n, n\ge 1\}$ of random variables is said to be stochastically dominated by a random variable $X$ if there exists a positive constant $C$ such that
\[
P(|X_n|>x)\le CP(|X|>x) \tag{1.3}
\]
for all $x>0$ and $n\ge 1$.
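
In particular, a sequence of identically distributed random variables is stochastically dominated by $X=X_1$ with $C=1$, since in that case
\[
P(|X_n|>x)=P(|X_1|>x)\quad\text{for all } x>0,\ n\ge 1;
\]
this covers the setting of Theorem 2.1 below.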

Definition 1.3. A double array $\{a_{ni}, n\ge 1, i\ge 1\}$ of real numbers is said to be a Toeplitz array if $\lim_{n\to\infty}a_{ni}=0$ for each $i\ge 1$ and
\[
\sum_{i=1}^{\infty}|a_{ni}|\le C \tag{1.4}
\]
for all $n\ge 1$, where $C$ is a positive constant.
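
A simple example is given by the Cesàro weights: with the convention $a_{ni}=0$ for $i>n$, take
\[
a_{ni}=\frac{1}{n},\qquad 1\le i\le n .
\]
Then $\lim_{n\to\infty}a_{ni}=0$ for each fixed $i\ge 1$ and $\sum_{i=1}^{\infty}|a_{ni}|=1$, so (1.4) holds with $C=1$.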

Lemma 1.4. Let $\{X_n, n\ge 1\}$ be a sequence of random variables which is stochastically dominated by a random variable $X$. For any $a>0$ and $b>0$, the following statement holds:
\[
E|X_n|^{a}I(|X_n|\le b)\le C\bigl\{E|X|^{a}I(|X|\le b)+b^{a}P(|X|>b)\bigr\}, \tag{1.5}
\]
where $C$ is a positive constant.
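
Lemma 1.4 is standard; a sketch of the argument, using only (1.3), is as follows. For $0<t<b^{a}$ the event $\{|X_n|^{a}I(|X_n|\le b)>t\}$ is contained in $\{|X_n|>t^{1/a}\}$, so
\[
E|X_n|^{a}I(|X_n|\le b)=\int_{0}^{b^{a}}P\bigl(|X_n|^{a}I(|X_n|\le b)>t\bigr)\,dt
\le C\int_{0}^{b^{a}}P\bigl(|X|>t^{1/a}\bigr)\,dt
= C\bigl\{E|X|^{a}I(|X|\le b)+b^{a}P(|X|>b)\bigr\}.
\]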

Lemma 1.5 (cf. [15, Lemma 1.2.8]). Let $\{X_n, n\ge 1\}$ be a sequence of $\varphi$-mixing random variables. Let $X\in L_p(\mathcal{F}_1^{k})$ and $Y\in L_q(\mathcal{F}_{k+n}^{\infty})$ for some $k\ge 1$, $n\ge 1$, where $p>1$, $q>1$, and $1/p+1/q=1$. Then
\[
|EXY-EX\,EY|\le 2\bigl(\varphi(n)\bigr)^{1/p}\bigl(E|X|^{p}\bigr)^{1/p}\bigl(E|Y|^{q}\bigr)^{1/q}. \tag{1.6}
\]

Lemma 1.6 (cf. [8, Lemma 2.2]). Let $\{X_n, n\ge 1\}$ be a $\varphi$-mixing sequence. Put $T_a(n)=\sum_{i=a+1}^{a+n}X_i$. Suppose that there exists an array $\{C_{a,n}\}$ of positive numbers such that
\[
ET_a^{2}(n)\le C_{a,n}\quad\text{for every } a\ge 0,\ n\ge 1. \tag{1.7}
\]

Then for every $q\ge 2$, there exists a constant $C$ depending only on $q$ and $\varphi(\cdot)$ such that
\[
E\Bigl(\max_{1\le i\le n}|T_a(i)|^{q}\Bigr)\le C\Bigl(C_{a,n}^{q/2}+E\max_{a+1\le i\le a+n}|X_i|^{q}\Bigr) \tag{1.8}
\]
for every $a\ge 0$ and $n\ge 1$.

Lemma 1.7. Let $\{X_n, n\ge 1\}$ be a sequence of $\varphi$-mixing random variables satisfying $\sum_{n=1}^{\infty}\varphi^{1/2}(n)<\infty$, and let $q\ge 2$. Assume that $EX_n=0$ and $E|X_n|^{q}<\infty$ for each $n\ge 1$. Then there exists a constant $C$ depending only on $q$ and $\varphi(\cdot)$ such that
\[
E\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=a+1}^{a+j}X_i\Bigr|^{q}\Bigr)\le C\Bigl\{\sum_{i=a+1}^{a+n}E|X_i|^{q}+\Bigl(\sum_{i=a+1}^{a+n}EX_i^{2}\Bigr)^{q/2}\Bigr\} \tag{1.9}
\]
for every $a\ge 0$ and $n\ge 1$. In particular, one has
\[
E\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}X_i\Bigr|^{q}\Bigr)\le C\Bigl\{\sum_{i=1}^{n}E|X_i|^{q}+\Bigl(\sum_{i=1}^{n}EX_i^{2}\Bigr)^{q/2}\Bigr\} \tag{1.10}
\]
for every $n\ge 1$.

Proof. By Lemma 1.5, we can see that
\[
\begin{aligned}
E\Bigl(\sum_{i=a+1}^{a+n}X_i\Bigr)^{2}
&\le \sum_{i=a+1}^{a+n}EX_i^{2}+4\sum_{a+1\le i<j\le a+n}\varphi^{1/2}(j-i)\bigl(EX_i^{2}\bigr)^{1/2}\bigl(EX_j^{2}\bigr)^{1/2}\\
&\le \sum_{i=a+1}^{a+n}EX_i^{2}+2\sum_{k=1}^{n-1}\sum_{i=a+1}^{a+n-k}\varphi^{1/2}(k)\bigl(EX_i^{2}+EX_{i+k}^{2}\bigr)\\
&\le \Bigl(1+4\sum_{k=1}^{\infty}\varphi^{1/2}(k)\Bigr)\sum_{i=a+1}^{a+n}EX_i^{2}=:C_{a,n},
\end{aligned} \tag{1.11}
\]
which implies (1.7). By Lemma 1.6, we can get the desired result (1.9) immediately. The proof is complete. □
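
In the proof of Theorem 2.1 below, Lemma 1.7 is applied with $q=2$; in that case (1.10) reduces to the maximal second-moment bound
\[
E\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}X_i\Bigr|^{2}\Bigr)\le C\sum_{i=1}^{n}EX_i^{2},
\]
which is exactly the estimate applied to $T_j^{(n)}$ in (2.10).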

Lemma 1.8. Assume that the inverse function $\psi(x)$ of $\phi(x)$ satisfies
\[
\psi(n)\sum_{i=1}^{n}\frac{1}{\psi(i)}=O(n). \tag{1.12}
\]

If $E[\phi(|X|)]<\infty$, then
\[
\sum_{n=1}^{\infty}\frac{1}{\psi(n)}E|X|I\bigl(|X|>\psi(n)\bigr)<\infty. \tag{1.13}
\]

Proof. The proof is similar to that of Lemma 1 of Sung [14], so we omit it. □
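
As an illustration of condition (1.12), take $\phi(x)=x^{p}$ for some $p>1$, so that $\psi(x)=x^{1/p}$ and $E[\phi(|X|)]<\infty$ is simply the moment condition $E|X|^{p}<\infty$. Then
\[
\psi(n)\sum_{i=1}^{n}\frac{1}{\psi(i)}=n^{1/p}\sum_{i=1}^{n}i^{-1/p}\le n^{1/p}\cdot\frac{n^{1-1/p}}{1-1/p}=\frac{p}{p-1}\,n=O(n),
\]
so (1.12) is satisfied.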

2. Main Results and Their Proofs

Theorem 2.1. Let $\{X, X_n, n\ge 1\}$ be a sequence of identically distributed $\varphi$-mixing random variables satisfying $\sum_{n=1}^{\infty}\varphi^{1/2}(n)<\infty$, $EX=0$, $EX^{2}<\infty$, and $E[\phi(|X|)]<\infty$. Assume that the inverse function $\psi(x)$ of $\phi(x)$ satisfies (1.12). Let $\{a_{ni}, n\ge 1, i\ge 1\}$ be an array of constants such that

(i) $\max_{1\le i\le n}|a_{ni}|=O\bigl(1/\psi(n)\bigr)$;

(ii) $\sum_{i=1}^{n}a_{ni}^{2}=O\bigl(\log^{-1-\alpha}n\bigr)$ for some $\alpha>0$.

Then for any $\varepsilon>0$,
\[
\sum_{n=1}^{\infty}n^{-1}P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}a_{ni}X_i\Bigr|>\varepsilon\Bigr)<\infty. \tag{2.1}
\]

Proof. For each $n\ge 1$, denote
\[
\begin{gathered}
X_i^{(n)}=X_iI\bigl(|X_i|\le\psi(n)\bigr),\qquad T_j^{(n)}=\sum_{i=1}^{j}\bigl(a_{ni}X_i^{(n)}-Ea_{ni}X_i^{(n)}\bigr),\quad 1\le j\le n,\\
A=\bigcap_{i=1}^{n}\bigl\{X_i=X_i^{(n)}\bigr\}=\bigcap_{i=1}^{n}\bigl\{|X_i|\le\psi(n)\bigr\},\qquad
B=A^{c}=\bigcup_{i=1}^{n}\bigl\{X_i\ne X_i^{(n)}\bigr\}=\bigcup_{i=1}^{n}\bigl\{|X_i|>\psi(n)\bigr\},\\
E_n=\Bigl\{\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}a_{ni}X_i\Bigr|>\varepsilon\Bigr\}.
\end{gathered} \tag{2.2}
\]

It is easy to check that
\[
\begin{gathered}
\begin{aligned}
\sum_{i=1}^{j}a_{ni}X_i&=\sum_{i=1}^{j}a_{ni}X_iI\bigl(|X_i|\le\psi(n)\bigr)+\sum_{i=1}^{j}a_{ni}X_iI\bigl(|X_i|>\psi(n)\bigr)\\
&=T_j^{(n)}+\sum_{i=1}^{j}Ea_{ni}X_i^{(n)}+\sum_{i=1}^{j}a_{ni}X_iI\bigl(|X_i|>\psi(n)\bigr),
\end{aligned}\\
E_n=E_nA+E_nB\subset\Bigl\{\max_{1\le j\le n}\bigl|T_j^{(n)}\bigr|>\varepsilon-\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}Ea_{ni}X_i^{(n)}\Bigr|\Bigr\}\cup E_nB.
\end{gathered} \tag{2.3}
\]

Therefore
\[
\begin{aligned}
P(E_n)&\le P\Bigl(\max_{1\le j\le n}\bigl|T_j^{(n)}\bigr|>\varepsilon-\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}Ea_{ni}X_i^{(n)}\Bigr|\Bigr)+P(B)\\
&\le P\Bigl(\max_{1\le j\le n}\bigl|T_j^{(n)}\bigr|>\varepsilon-\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}Ea_{ni}X_i^{(n)}\Bigr|\Bigr)+\sum_{i=1}^{n}P\bigl(|X_i|>\psi(n)\bigr).
\end{aligned} \tag{2.4}
\]

Firstly, we will show that
\[
\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}Ea_{ni}X_i^{(n)}\Bigr|\longrightarrow 0,\quad\text{as } n\to\infty. \tag{2.5}
\]

It follows from Lemma 1.8 and Kronecker's lemma that
\[
\frac{1}{\psi(n)}\sum_{i=1}^{n}E|X|I\bigl(|X|>\psi(i)\bigr)\longrightarrow 0,\quad\text{as } n\to\infty. \tag{2.6}
\]

By $EX=0$, condition (i), (2.6), and $\psi(n)\uparrow\infty$, we can see that
\[
\begin{aligned}
\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}Ea_{ni}X_i^{(n)}\Bigr|
&=\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}Ea_{ni}X_iI\bigl(|X_i|>\psi(n)\bigr)\Bigr|\\
&\le\sum_{i=1}^{n}E|a_{ni}X_i|I\bigl(|X_i|>\psi(n)\bigr)\\
&\le\max_{1\le i\le n}|a_{ni}|\sum_{i=1}^{n}E|X|I\bigl(|X|>\psi(n)\bigr)\\
&\le\frac{C}{\psi(n)}\sum_{i=1}^{n}E|X|I\bigl(|X|>\psi(i)\bigr)\longrightarrow 0,\quad\text{as } n\to\infty,
\end{aligned} \tag{2.7}
\]
which implies (2.5). By (2.4) and (2.5), we can see that, for sufficiently large $n$,
\[
P(E_n)\le P\Bigl(\max_{1\le j\le n}\bigl|T_j^{(n)}\bigr|>\frac{\varepsilon}{2}\Bigr)+\sum_{i=1}^{n}P\bigl(|X_i|>\psi(n)\bigr). \tag{2.8}
\]

To prove (2.1), it suffices to show that
\[
\sum_{n=1}^{\infty}n^{-1}P\Bigl(\max_{1\le j\le n}\bigl|T_j^{(n)}\bigr|>\frac{\varepsilon}{2}\Bigr)<\infty,\qquad
\sum_{n=1}^{\infty}n^{-1}\sum_{i=1}^{n}P\bigl(|X_i|>\psi(n)\bigr)<\infty. \tag{2.9}
\]

By Markov's inequality, Lemma 1.7, $EX^{2}<\infty$, and condition (ii), we have
\[
\begin{aligned}
\sum_{n=1}^{\infty}n^{-1}P\Bigl(\max_{1\le j\le n}\bigl|T_j^{(n)}\bigr|>\frac{\varepsilon}{2}\Bigr)
&\le C\sum_{n=1}^{\infty}n^{-1}E\Bigl(\max_{1\le j\le n}\bigl|T_j^{(n)}\bigr|^{2}\Bigr)\\
&\le C\sum_{n=1}^{\infty}n^{-1}\sum_{i=1}^{n}E\bigl|a_{ni}X_i^{(n)}\bigr|^{2}\\
&= C\sum_{n=1}^{\infty}n^{-1}\sum_{i=1}^{n}a_{ni}^{2}EX^{2}I\bigl(|X|\le\psi(n)\bigr)\\
&\le C\sum_{n=1}^{\infty}n^{-1}\sum_{i=1}^{n}a_{ni}^{2}\\
&\le C\sum_{n=1}^{\infty}n^{-1}\log^{-1-\alpha}n<\infty.
\end{aligned} \tag{2.10}
\]

It follows from $E[\phi(|X|)]<\infty$ that
\[
\sum_{n=1}^{\infty}n^{-1}\sum_{i=1}^{n}P\bigl(|X_i|>\psi(n)\bigr)=\sum_{n=1}^{\infty}P\bigl(|X|>\psi(n)\bigr)=\sum_{n=1}^{\infty}P\bigl(\phi(|X|)>n\bigr)\le CE[\phi(|X|)]<\infty. \tag{2.11}
\]

This completes the proof of the theorem. □
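
As a concrete, purely illustrative choice of the quantities appearing in Theorem 2.1, take $\phi(x)=x^{2}$, so that $\psi(x)=x^{1/2}$ and $E[\phi(|X|)]<\infty$ coincides with $EX^{2}<\infty$, together with the weights
\[
a_{ni}=\frac{1}{\sqrt{n}\,\bigl(\log(n+1)\bigr)^{(1+\alpha)/2}},\qquad 1\le i\le n .
\]
Then $\psi(n)\sum_{i=1}^{n}\psi(i)^{-1}=\sqrt{n}\sum_{i=1}^{n}i^{-1/2}\le 2n$, $\max_{1\le i\le n}|a_{ni}|\le n^{-1/2}=1/\psi(n)$, and $\sum_{i=1}^{n}a_{ni}^{2}=\bigl(\log(n+1)\bigr)^{-1-\alpha}=O\bigl(\log^{-1-\alpha}n\bigr)$, so (1.12) and conditions (i) and (ii) are all fulfilled.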

Theorem 2.2. Let $\{X_n, n\ge 1\}$ be a sequence of $\varphi$-mixing random variables satisfying $\sum_{n=1}^{\infty}\varphi^{1/2}(n)<\infty$ and let $\{a_{ni}, n\ge 1, i\ge 1\}$ be an array of real numbers. Let $\{b_n, n\ge 1\}$ be an increasing sequence of positive integers and let $\{c_n, n\ge 1\}$ be a sequence of positive real numbers. If for some $q\ge 2$, $0<t<2$, and for any $\varepsilon>0$ the following conditions are satisfied:
\[
\sum_{n=1}^{\infty}c_n\sum_{i=1}^{b_n}P\bigl(|a_{ni}X_i|>\varepsilon b_n^{1/t}\bigr)<\infty, \tag{2.12}
\]
\[
\sum_{n=1}^{\infty}c_nb_n^{-q/t}\sum_{i=1}^{b_n}|a_{ni}|^{q}E|X_i|^{q}I\bigl(|a_{ni}X_i|\le\varepsilon b_n^{1/t}\bigr)<\infty, \tag{2.13}
\]
\[
\sum_{n=1}^{\infty}c_nb_n^{-q/t}\Bigl(\sum_{i=1}^{b_n}a_{ni}^{2}EX_i^{2}I\bigl(|a_{ni}X_i|\le\varepsilon b_n^{1/t}\bigr)\Bigr)^{q/2}<\infty, \tag{2.14}
\]
then
\[
\sum_{n=1}^{\infty}c_nP\Bigl(\max_{1\le i\le b_n}\Bigl|\sum_{j=1}^{i}\bigl[a_{nj}X_j-a_{nj}EX_jI\bigl(|a_{nj}X_j|\le\varepsilon b_n^{1/t}\bigr)\bigr]\Bigr|>\varepsilon b_n^{1/t}\Bigr)<\infty. \tag{2.15}
\]

Proof. Note that if the series $\sum_{n=1}^{\infty}c_n$ is convergent, then (2.15) holds. Therefore, we will consider only such sequences $\{c_n, n\ge 1\}$ for which the series $\sum_{n=1}^{\infty}c_n$ is divergent.

Let
\[
\begin{gathered}
Y_i^{(n)}=a_{ni}X_iI\bigl(|a_{ni}X_i|\le\varepsilon b_n^{1/t}\bigr),\qquad S_j^{(n)}=\sum_{i=1}^{j}Y_i^{(n)},\quad n\ge 1,\ i\ge 1,\\
A=\bigcap_{i=1}^{b_n}\bigl\{Y_i^{(n)}=a_{ni}X_i\bigr\},\qquad
B=A^{c}=\bigcup_{i=1}^{b_n}\bigl\{Y_i^{(n)}\ne a_{ni}X_i\bigr\}=\bigcup_{i=1}^{b_n}\bigl\{|a_{ni}X_i|>\varepsilon b_n^{1/t}\bigr\},\\
E_n=\Bigl\{\max_{1\le i\le b_n}\Bigl|\sum_{j=1}^{i}\bigl[a_{nj}X_j-a_{nj}EX_jI\bigl(|a_{nj}X_j|\le\varepsilon b_n^{1/t}\bigr)\bigr]\Bigr|>\varepsilon b_n^{1/t}\Bigr\}.
\end{gathered} \tag{2.16}
\]

Therefore
\[
\begin{aligned}
P\Bigl(\max_{1\le i\le b_n}\Bigl|\sum_{j=1}^{i}\bigl[a_{nj}X_j-a_{nj}EX_jI\bigl(|a_{nj}X_j|\le\varepsilon b_n^{1/t}\bigr)\bigr]\Bigr|>\varepsilon b_n^{1/t}\Bigr)
&=P(E_n)=P(E_nA)+P(E_nB)\le P(E_nA)+P(B)\\
&\le\sum_{i=1}^{b_n}P\bigl(|a_{ni}X_i|>\varepsilon b_n^{1/t}\bigr)+\varepsilon^{-q}b_n^{-q/t}E\Bigl(\max_{1\le i\le b_n}\bigl|S_i^{(n)}-ES_i^{(n)}\bigr|^{q}\Bigr).
\end{aligned} \tag{2.17}
\]

Using the $C_r$ inequality and Jensen's inequality, we can estimate $E\bigl|Y_i^{(n)}-EY_i^{(n)}\bigr|^{q}$ in the following way:
\[
\begin{gathered}
E\bigl|Y_i^{(n)}-EY_i^{(n)}\bigr|^{q}\le C|a_{ni}|^{q}E|X_i|^{q}I\bigl(|a_{ni}X_i|\le\varepsilon b_n^{1/t}\bigr),\\
E\bigl|Y_i^{(n)}-EY_i^{(n)}\bigr|^{2}\le Ca_{ni}^{2}EX_i^{2}I\bigl(|a_{ni}X_i|\le\varepsilon b_n^{1/t}\bigr).
\end{gathered} \tag{2.18}
\]

By (2.17), (2.18), and Lemma 1.7, we can get
\[
\begin{aligned}
&P\Bigl(\max_{1\le i\le b_n}\Bigl|\sum_{j=1}^{i}\bigl[a_{nj}X_j-a_{nj}EX_jI\bigl(|a_{nj}X_j|\le\varepsilon b_n^{1/t}\bigr)\bigr]\Bigr|>\varepsilon b_n^{1/t}\Bigr)\\
&\quad\le C\sum_{i=1}^{b_n}P\bigl(|a_{ni}X_i|>\varepsilon b_n^{1/t}\bigr)+Cb_n^{-q/t}\sum_{i=1}^{b_n}|a_{ni}|^{q}E|X_i|^{q}I\bigl(|a_{ni}X_i|\le\varepsilon b_n^{1/t}\bigr)\\
&\qquad+Cb_n^{-q/t}\Bigl(\sum_{i=1}^{b_n}a_{ni}^{2}EX_i^{2}I\bigl(|a_{ni}X_i|\le\varepsilon b_n^{1/t}\bigr)\Bigr)^{q/2}.
\end{aligned} \tag{2.19}
\]

Therefore, we can conclude that (2.15) holds by (2.12), (2.13), (2.14), and (2.19). □

Theorem 2.3. Let $1\le p\le 2$ and let $\{X_n, n\ge 1\}$ be a sequence of $\varphi$-mixing random variables satisfying $\sum_{n=1}^{\infty}\varphi^{1/2}(n)<\infty$, $EX_n=0$, and $E|X_n|^{p}<\infty$ for $n\ge 1$. Let $\{a_{ni}, n\ge 1, i\ge 1\}$ be an array of real numbers satisfying the following condition:
\[
\sum_{i=1}^{n}|a_{ni}|^{p}E|X_i|^{p}=o\bigl(n^{\delta}\bigr)\quad\text{as } n\to\infty \tag{2.20}
\]
for some $0<\delta<2/q$ and $q>2$. Then for any $\varepsilon>0$ and $\alpha p\ge 1$,
\[
\sum_{n=1}^{\infty}n^{\alpha p-2}P\Bigl(\max_{1\le i\le n}\Bigl|\sum_{j=1}^{i}a_{nj}X_j\Bigr|>\varepsilon n^{\alpha}\Bigr)<\infty. \tag{2.21}
\]

Proof. Take $c_n=n^{\alpha p-2}$, $b_n=n$, and $1/t=\alpha$ in Theorem 2.2. By (2.20) we have
\[
\begin{gathered}
\sum_{n=1}^{\infty}n^{\alpha p-2}\sum_{i=1}^{n}P\bigl(|a_{ni}X_i|>\varepsilon n^{\alpha}\bigr)\le C\sum_{n=1}^{\infty}n^{\alpha p-2}n^{-\alpha p}\sum_{i=1}^{n}|a_{ni}|^{p}E|X_i|^{p}\le C\sum_{n=1}^{\infty}n^{-2+\delta}<\infty,\\
\sum_{n=1}^{\infty}c_nb_n^{-q/t}\sum_{i=1}^{n}|a_{ni}|^{q}E|X_i|^{q}I\bigl(|a_{ni}X_i|\le\varepsilon n^{\alpha}\bigr)\le C\sum_{n=1}^{\infty}n^{-2}\sum_{i=1}^{n}|a_{ni}|^{p}E|X_i|^{p}\le C\sum_{n=1}^{\infty}n^{-2+\delta}<\infty,\\
\begin{aligned}
\sum_{n=1}^{\infty}c_nb_n^{-q/t}\Bigl(\sum_{i=1}^{n}a_{ni}^{2}EX_i^{2}I\bigl(|a_{ni}X_i|\le\varepsilon n^{\alpha}\bigr)\Bigr)^{q/2}
&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha pq/2}\Bigl(\sum_{i=1}^{n}|a_{ni}|^{p}E|X_i|^{p}\Bigr)^{q/2}\\
&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha pq/2+\delta q/2}\le C\sum_{n=1}^{\infty}n^{\alpha p(1-q/2)-1}<\infty,
\end{aligned}
\end{gathered} \tag{2.22}
\]

following from $\delta q/2<1$. By the assumption $EX_n=0$ for $n\ge 1$ and (2.20) we get
\[
\begin{aligned}
n^{-\alpha}\max_{1\le i\le n}\Bigl|\sum_{j=1}^{i}a_{nj}EX_jI\bigl(|a_{nj}X_j|\le\varepsilon n^{\alpha}\bigr)\Bigr|
&\le n^{-\alpha}\sum_{j=1}^{n}\bigl|a_{nj}EX_jI\bigl(|a_{nj}X_j|\le\varepsilon n^{\alpha}\bigr)\bigr|\\
&= n^{-\alpha}\sum_{j=1}^{n}\bigl|a_{nj}EX_jI\bigl(|a_{nj}X_j|>\varepsilon n^{\alpha}\bigr)\bigr|\\
&\le Cn^{-\alpha p}\sum_{j=1}^{n}|a_{nj}|^{p}E|X_j|^{p}\le Cn^{\delta-\alpha p}\longrightarrow 0,
\end{aligned} \tag{2.23}
\]
following from $\delta<1$ and $\alpha p\ge 1$. We get the desired result by Theorem 2.2 immediately. The proof is completed. □

Theorem 2.4. Let $1\le p\le 2$ and let $\{X_n, n\ge 1\}$ be a sequence of $\varphi$-mixing random variables satisfying $\sum_{n=1}^{\infty}\varphi^{1/2}(n)<\infty$, $EX_n=0$, and $E|X_n|^{p}<\infty$ for $n\ge 1$. Assume that the random variables are stochastically dominated by a random variable $X$ such that $E|X|^{p}<\infty$ and let $\{a_{ni}, n\ge 1, i\ge 1\}$ be an array of real numbers satisfying the following condition:
\[
\sum_{i=1}^{n}|a_{ni}|^{p}=o\bigl(n^{\delta}\bigr)\quad\text{as } n\to\infty \tag{2.24}
\]
for some $0<\delta<2/q$ and $q>2$. Then for any $\varepsilon>0$ and $\alpha p\ge 1$, (2.21) holds.

Proof. The proof is similar to that of Theorem 2.3. We only need to note that

\[
\begin{aligned}
E|X_n|^{p}&=\int_{0}^{\infty}t^{p}\,dP\bigl(|X_n|\le t\bigr)\\
&=-\int_{0}^{\infty}t^{p}\,dP\bigl(|X_n|>t\bigr)\\
&=-\lim_{t\to\infty}t^{p}P\bigl(|X_n|>t\bigr)+\int_{0}^{\infty}P\bigl(|X_n|>t\bigr)\,dt^{p}\\
&=0+p\int_{0}^{\infty}t^{p-1}P\bigl(|X_n|>t\bigr)\,dt\\
&\le Cp\int_{0}^{\infty}t^{p-1}P\bigl(|X|>t\bigr)\,dt\\
&=CE|X|^{p}<\infty\quad\text{for each } n\ge 1.
\end{aligned} \tag{2.25}
\]

Theorem 2.5. Let $\{X_n, n\ge 1\}$ be a sequence of $\varphi$-mixing random variables satisfying $\sum_{n=1}^{\infty}\varphi^{1/2}(n)<\infty$ and let $\{a_{ni}, n\ge 1, i\ge 1\}$ be a Toeplitz array. Assume that the random variables are stochastically dominated by a random variable $X$. If for some $0<t<2$ and $\delta>1/t$,
\[
\sup_{i\ge 1}|a_{ni}|=O\bigl(n^{1/t-\delta}\bigr),\qquad E|X|^{\beta}<\infty, \tag{2.26}
\]
where $\beta=\max\bigl(2/\delta,\,1+1/\delta\bigr)$, then for any $\varepsilon>0$,
\[
\sum_{n=1}^{\infty}P\Bigl(\max_{1\le i\le n}\Bigl|\sum_{j=1}^{i}a_{nj}X_j\Bigr|>\varepsilon n^{1/t}\Bigr)<\infty. \tag{2.27}
\]

Proof. Take $c_n=1$, $b_n=n$ for $n\ge 1$ and $q>\max\bigl(2,\,1+1/\delta\bigr)$ in Theorem 2.2. Then we can see that (2.12) and (2.13) are satisfied. In fact, by (1.4) and (2.26) we have
\[
\begin{aligned}
\sum_{n=1}^{\infty}c_n\sum_{i=1}^{b_n}P\bigl(|a_{ni}X_i|>\varepsilon b_n^{1/t}\bigr)
&=\sum_{n=1}^{\infty}\sum_{i=1}^{n}P\bigl(|a_{ni}X_i|>\varepsilon n^{1/t}\bigr)\\
&\le C\sum_{n=1}^{\infty}\sum_{i=1}^{n}P\bigl(|a_{ni}X|>\varepsilon n^{1/t}\bigr)\\
&\le C\sum_{n=1}^{\infty}\sum_{i=1}^{n}P\bigl(|X|>Cn^{\delta}\bigr)\\
&= C\sum_{n=1}^{\infty}n\sum_{k=n}^{\infty}P\bigl(Ck^{\delta}<|X|\le C(k+1)^{\delta}\bigr)\\
&\le C\sum_{k=1}^{\infty}k^{2}P\bigl(Ck^{\delta}<|X|\le C(k+1)^{\delta}\bigr)\\
&\le CE|X|^{2/\delta}<\infty
\end{aligned} \tag{2.28}
\]

and by Lemma 1.4, (1.5), and (2.26) we have
\[
\begin{aligned}
\sum_{n=1}^{\infty}c_nb_n^{-q/t}\sum_{i=1}^{b_n}|a_{ni}|^{q}E|X_i|^{q}I\bigl(|a_{ni}X_i|\le\varepsilon b_n^{1/t}\bigr)
&=\sum_{n=1}^{\infty}n^{-q/t}\sum_{i=1}^{n}|a_{ni}|^{q}E|X_i|^{q}I\bigl(|a_{ni}X_i|\le\varepsilon n^{1/t}\bigr)\\
&\le C\sum_{n=1}^{\infty}n^{-q/t}\sum_{i=1}^{n}|a_{ni}|^{q}\Bigl[E|X|^{q}I\bigl(|a_{ni}X|\le\varepsilon n^{1/t}\bigr)+\Bigl(\frac{\varepsilon n^{1/t}}{|a_{ni}|}\Bigr)^{q}P\bigl(|a_{ni}X|>\varepsilon n^{1/t}\bigr)\Bigr]\\
&\le C\sum_{n=1}^{\infty}n^{-(1+1/\delta)/t}\sum_{i=1}^{n}|a_{ni}|^{1+1/\delta}E|X|^{1+1/\delta}+C\sum_{n=1}^{\infty}\sum_{i=1}^{n}P\bigl(|a_{ni}X|>\varepsilon n^{1/t}\bigr)\\
&\le C\sum_{n=1}^{\infty}n^{-1/t-1}E|X|^{1+1/\delta}\sum_{i=1}^{n}|a_{ni}|+CE|X|^{2/\delta}\\
&\le C\sum_{n=1}^{\infty}n^{-1/t-1}+CE|X|^{2/\delta}<\infty.
\end{aligned} \tag{2.29}
\]

In order to prove that (2.14) holds, we should consider the following two cases.

In the case $\delta>1$, by Lemma 1.4, (1.5), (2.26), and the $C_r$ inequality, we have
\[
\begin{aligned}
\sum_{n=1}^{\infty}c_nb_n^{-q/t}\Bigl(\sum_{i=1}^{b_n}a_{ni}^{2}EX_i^{2}I\bigl(|a_{ni}X_i|\le\varepsilon b_n^{1/t}\bigr)\Bigr)^{q/2}
&=\sum_{n=1}^{\infty}n^{-q/t}\Bigl(\sum_{i=1}^{n}a_{ni}^{2}EX_i^{2}I\bigl(|a_{ni}X_i|\le\varepsilon n^{1/t}\bigr)\Bigr)^{q/2}\\
&\le C\sum_{n=1}^{\infty}n^{-q/(2t)-q/(2\delta t)}\Bigl(\sum_{i=1}^{n}|a_{ni}|^{1+1/\delta}E|X|^{1+1/\delta}\Bigr)^{q/2}+C\sum_{n=1}^{\infty}\sum_{i=1}^{n}P\bigl(|a_{ni}X|>\varepsilon n^{1/t}\bigr)\\
&\le C\sum_{n=1}^{\infty}n^{-q/(2t)-q/(2\delta t)}\,n^{(1/\delta)(1/t-\delta)(q/2)}\bigl(E|X|^{1+1/\delta}\bigr)^{q/2}\Bigl(\sum_{i=1}^{n}|a_{ni}|\Bigr)^{q/2}+CE|X|^{2/\delta}\\
&\le C\sum_{n=1}^{\infty}n^{-(q/2)(1+1/t)}+CE|X|^{2/\delta}<\infty.
\end{aligned} \tag{2.30}
\]

In the case $0<\delta\le 1$, we can get
\[
\begin{aligned}
\sum_{n=1}^{\infty}c_nb_n^{-q/t}\Bigl(\sum_{i=1}^{b_n}a_{ni}^{2}EX_i^{2}I\bigl(|a_{ni}X_i|\le\varepsilon b_n^{1/t}\bigr)\Bigr)^{q/2}
&=\sum_{n=1}^{\infty}n^{-q/t}\Bigl(\sum_{i=1}^{n}a_{ni}^{2}EX_i^{2}I\bigl(|a_{ni}X_i|\le\varepsilon n^{1/t}\bigr)\Bigr)^{q/2}\\
&\le C\sum_{n=1}^{\infty}n^{-q/t}\,n^{(1/t-\delta)(q/2)}\Bigl(\sum_{i=1}^{n}|a_{ni}|EX^{2}\Bigr)^{q/2}+C\sum_{n=1}^{\infty}\sum_{i=1}^{n}P\bigl(|a_{ni}X|>\varepsilon n^{1/t}\bigr)\\
&\le C\sum_{n=1}^{\infty}n^{-q/(2t)-q\delta/2}\bigl(EX^{2}\bigr)^{q/2}\Bigl(\sum_{i=1}^{n}|a_{ni}|\Bigr)^{q/2}+CE|X|^{2/\delta}\\
&\le C\sum_{n=1}^{\infty}n^{-(q/2)(\delta+1/t)}+CE|X|^{2/\delta}<\infty.
\end{aligned} \tag{2.31}
\]

To complete the proof of the theorem, we only need to prove
\[
n^{-1/t}\max_{1\le i\le n}\Bigl|\sum_{j=1}^{i}a_{nj}EX_jI\bigl(|a_{nj}X_j|\le\varepsilon n^{1/t}\bigr)\Bigr|\longrightarrow 0,\quad\text{as } n\to\infty. \tag{2.32}
\]

Indeed, by Lemma 1.4, it follows that
\[
\begin{aligned}
n^{-1/t}\max_{1\le i\le n}\Bigl|\sum_{j=1}^{i}a_{nj}EX_jI\bigl(|a_{nj}X_j|\le\varepsilon n^{1/t}\bigr)\Bigr|
&\le Cn^{-1/t}\sum_{j=1}^{n}\Bigl[|a_{nj}|E|X|I\bigl(|a_{nj}X|\le\varepsilon n^{1/t}\bigr)+\varepsilon n^{1/t}P\bigl(|a_{nj}X|>\varepsilon n^{1/t}\bigr)\Bigr]\\
&\le Cn^{-1/t}E|X|\sum_{j=1}^{n}|a_{nj}|+C\sum_{j=1}^{n}P\bigl(|a_{nj}X|>\varepsilon n^{1/t}\bigr)\\
&\le Cn^{-1/t}+C\sum_{j=1}^{n}P\bigl(|a_{nj}X|>\varepsilon n^{1/t}\bigr)\longrightarrow 0,\quad\text{as } n\to\infty.
\end{aligned} \tag{2.33}
\]

Thus we get the desired result. □

Acknowledgments

The authors are most grateful to the Editor Andrei I. Volodin and the anonymous referee for their careful reading of the manuscript and valuable suggestions, which helped in significantly improving an earlier version of this paper. This work was supported by the National Natural Science Foundation of China (10871001, 60803059), the Provincial Natural Science Research Project of Anhui Colleges (KJ2010A005), the Talents Youth Fund of Anhui Province Universities (2010SQRL016ZD), and the Youth Science Research Fund of Anhui University (2009QN011A).

References

[1] R. L. Dobrushin, "The central limit theorem for non-stationary Markov chains," Theory of Probability and Its Applications, vol. 1, pp. 72-88, 1956.

[2] S. A. Utev, "The central limit theorem for $\varphi$-mixing arrays of random variables," Theory of Probability and Its Applications, vol. 35, no. 1, pp. 131-139, 1990.

[3] D. C. Chen, "A uniform central limit theorem for nonuniform $\varphi$-mixing random fields," The Annals of Probability, vol. 19, no. 2, pp. 636-649, 1991.

[4] N. Herrndorf, "The invariance principle for $\varphi$-mixing sequences," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 63, no. 1, pp. 97-108, 1983.

[5] M. Peligrad, "An invariance principle for $\varphi$-mixing sequences," The Annals of Probability, vol. 13, no. 4, pp. 1304-1313, 1985.

[6] P. K. Sen, "A note on weak convergence of empirical processes for sequences of $\varphi$-mixing random variables," Annals of Mathematical Statistics, vol. 42, pp. 2131-2133, 1971.

[7] P. K. Sen, "Weak convergence of multidimensional empirical processes for stationary $\varphi$-mixing processes," The Annals of Probability, vol. 2, no. 1, pp. 147-154, 1974.

[8] Q. M. Shao, "Almost sure invariance principles for mixing sequences of random variables," Stochastic Processes and Their Applications, vol. 48, no. 2, pp. 319-334, 1993.

[9] S. Hu and X. Wang, "Large deviations for some dependent sequences," Acta Mathematica Scientia. Series B, vol. 28, no. 2, pp. 295-300, 2008.

[10] B. D. Choi and S. H. Sung, "Almost sure convergence theorems of weighted sums of random variables," Stochastic Analysis and Applications, vol. 5, no. 4, pp. 365-377, 1987.

[11] J. Cuzick, "A strong law for weighted sums of i.i.d. random variables," Journal of Theoretical Probability, vol. 8, no. 3, pp. 625-641, 1995.

[12] W. B. Wu, "On the strong convergence of a weighted sum," Statistics & Probability Letters, vol. 44, no. 1, pp. 19-22, 1999.

[13] S. H. Sung, "Strong laws for weighted sums of i.i.d. random variables," Statistics & Probability Letters, vol. 52, no. 4, pp. 413-419, 2001.

[14] S. H. Sung, "Strong laws for weighted sums of i.i.d. random variables. II," Bulletin of the Korean Mathematical Society, vol. 39, no. 4, pp. 607-615, 2002.

[15] C. R. Lu and Z. Y. Lin, Limit Theory for Mixing Dependent Sequences, Science Press, Beijing, China, 1997.
