Wu et al. Journal of Inequalities and Applications (2015) 2015:200, DOI 10.1186/s13660-015-0717-1


RESEARCH Open Access


Complete convergence and complete moment convergence for weighted sums of m-NA random variables

Yongfeng Wu1*, Tien-Chung Hu2 and Andrei Volodin3

*Correspondence: wyfwyf@126.com. 1College of Mathematics and Computer Science, Tongling University, Tongling, 244000, China. Full list of author information is available at the end of the article.

Abstract

The authors study the complete convergence and the complete moment convergence for weighted sums of m-negatively associated (m-NA) random variables and obtain some new results. These results extend and improve the corresponding theorems of Sung (Stat. Pap. 52:447-454, 2011). In addition, we point out that an open problem presented in Sung (Stat. Pap. 54:773-781, 2013) can be solved by means of the method used in this paper.

MSC: 60F15; 62G05

Keywords: complete convergence; complete moment convergence; weighted sums; m-negatively associated random variable

1 Introduction

Let $\{X, X_n, n \ge 1\}$ be a sequence of random variables and $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of constants. Because the weighted sums $\sum_{i=1}^{n} a_{ni}X_i$ play an important role in many useful linear statistics, many authors have studied the strong convergence of weighted sums. We refer the reader to Cuzick [1], Wu [2], Bai and Cheng [3], Sung [4], Chen and Gan [5], Cai [6], Wu [7], Zarei and Jabbari [8], Sung [9], Sung [10], Shen [11], and Chen and Sung [12].

The concept of complete convergence was introduced by Hsu and Robbins [13]. A sequence of random variables $\{U_n, n \ge 1\}$ is said to converge completely to a constant $\theta$ if

$$\sum_{n=1}^{\infty} P\big(|U_n - \theta| > \varepsilon\big) < \infty \quad \text{for all } \varepsilon > 0.$$
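As a concrete illustration (ours, not part of the paper): take $U_n$ to be the sample mean of $n$ i.i.d. $N(0,1)$ variables. The Gaussian tail bound $P(|U_n| > \varepsilon) \le \exp(-n\varepsilon^2/2)$ makes the Hsu–Robbins series summable for every $\varepsilon > 0$, and the bound can be summed in closed form; the function names below are our own:

```python
import math

def tail_bound(n: int, eps: float) -> float:
    # Gaussian tail: P(|U_n| > eps) <= exp(-n * eps^2 / 2) for U_n the mean of n iid N(0,1)
    return math.exp(-n * eps * eps / 2.0)

def complete_convergence_series(eps: float, terms: int = 10_000) -> float:
    # Partial sum of sum_n P(|U_n| > eps); finiteness for every eps > 0
    # is exactly complete convergence of U_n to 0.
    return sum(tail_bound(n, eps) for n in range(1, terms + 1))

def closed_form(eps: float) -> float:
    # Geometric series: sum_n exp(-n eps^2 / 2) = r / (1 - r) with r = exp(-eps^2 / 2)
    r = math.exp(-eps * eps / 2.0)
    return r / (1.0 - r)
```

Since the summands are geometric, the partial sums agree with the closed form to high accuracy, confirming summability for each fixed $\varepsilon$.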

Chow [14] presented the following more general concept of complete moment convergence. Let $\{Z_n, n \ge 1\}$ be a sequence of random variables and $a_n > 0$, $b_n > 0$, $q > 0$. If

$$\sum_{n=1}^{\infty} a_n E\big\{b_n^{-1}|Z_n| - \varepsilon\big\}_{+}^{q} < \infty \quad \text{for some or all } \varepsilon > 0,$$


then $\{Z_n, n \ge 1\}$ is said to satisfy complete moment convergence. The following concept was introduced by Joag-Dev and Proschan [15].

© 2015 Wu et al. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Definition 1.1 A finite family of random variables $\{X_k, 1 \le k \le n\}$ is said to be negatively associated (abbreviated to NA) if for any disjoint subsets $A$ and $B$ of $\{1, 2, \ldots, n\}$ and any real coordinate-wise nondecreasing functions $f$ on $\mathbb{R}^{A}$ and $g$ on $\mathbb{R}^{B}$,

$$\mathrm{Cov}\big(f(X_i, i \in A),\, g(X_j, j \in B)\big) \le 0$$

whenever the covariance exists. An infinite family of random variables is NA if every finite subfamily is NA.

Definition 1.2 Let $m \ge 1$ be a fixed integer. A sequence of random variables $\{X_n, n \ge 1\}$ is said to be m-negatively associated (abbreviated to m-NA) if for any $n \ge 2$ and any $i_1, \ldots, i_n$ such that $|i_k - i_j| \ge m$ for all $1 \le k \ne j \le n$, the random variables $X_{i_1}, \ldots, X_{i_n}$ are NA.

The concept of m-NA random variables was introduced by Hu et al. [16]. It is easy to see that this concept is a natural extension of NA random variables (which correspond to $m = 1$).

It is well known that the properties of NA random variables have been applied to reliability theory, multivariate statistical analysis, and percolation theory. Sequences of NA random variables have been an attractive research topic in the recent literature; see, for example, Matula [17], Su et al. [18], Shao [19], Gan and Chen [20], Fu and Zhang [21], Baek et al. [22], Chen et al. [23], Cai [6], Xing [24], Sung [10], Qin and Li [25], and Wu [26]. Since NA implies m-NA, it is significant to study the convergence properties of this wider m-NA class. However, to the best of our knowledge, besides Hu et al. [16] and Hu et al. [27], few authors have discussed the convergence properties of sequences of m-NA random variables.

Cai [6] studied the complete convergence for weighted sums of identically distributed NA random variables. He obtained the following theorem.

Theorem A Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed NA random variables, and let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of constants satisfying

$$A_{\alpha} = \limsup_{n\to\infty} A_{\alpha,n} < \infty, \qquad A_{\alpha,n}^{\alpha} = \frac{1}{n}\sum_{i=1}^{n}|a_{ni}|^{\alpha} \qquad (1.1)$$

for some $0 < \alpha \le 2$. Let $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$ for some $\gamma > 0$. Furthermore, suppose that $EX = 0$ when $1 < \alpha \le 2$. If $E\exp(h|X|^{\gamma}) < \infty$ for some $h > 0$, then

$$\sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1\le m\le n}\Big|\sum_{i=1}^{m} a_{ni}X_i\Big| > b_n\varepsilon\Big) < \infty \quad \text{for all } \varepsilon > 0. \qquad (1.2)$$

Sung [10] improved Theorem A by replacing the exponential moment condition with much weaker moment conditions.

Theorem B Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed NA random variables, and let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of constants satisfying (1.1) for some $0 < \alpha \le 2$. Let $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$ for some $\gamma > 0$. Furthermore, suppose that $EX = 0$ when $1 < \alpha \le 2$. Then the following statements hold:

(i) If $\alpha > \gamma$, then $E|X|^{\alpha} < \infty$ implies (1.2).

(ii) If $\alpha = \gamma$, then $E|X|^{\alpha}\log|X| < \infty$ implies (1.2).

(iii) If $\alpha < \gamma$, then $E|X|^{\gamma} < \infty$ implies (1.2).

The main purpose of this article is to discuss the complete convergence and the complete moment convergence for weighted sums of m-NA random variables. We shall extend Theorem B to m-NA random variables. In addition, we shall extend and improve Theorem B by obtaining a much stronger conclusion under the same conditions (see Remark 3.2).

It is worth pointing out that the open problem presented in Sung [9] (see Remark 2.2) can be solved by means of the method used in this article (see Remark 3.4).

Throughout this paper, the symbol $C$ represents a positive constant whose value may change from one place to another. For a finite set $A$, the symbol $\sharp(A)$ denotes the number of elements of $A$.

2 Preliminaries

We first recall the following concept of stochastic domination, which is a slight generalization of identical distribution. A sequence of random variables $\{X_n, n \ge 1\}$ is said to be stochastically dominated by a random variable $X$ (written $\{X_n\} \prec X$) if there exists a constant $C > 0$ such that

$$\sup_{n\ge1} P\big(|X_n| > x\big) \le CP\big(|X| > x\big), \quad \forall x > 0.$$
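For a hedged numerical sketch (ours, not the paper's): take $X_n$ exponential with rate $n$ and $X$ exponential with rate 1; then $P(|X_n| > x) = e^{-nx} \le e^{-x} = P(|X| > x)$ for all $x > 0$, so $\{X_n\} \prec X$ holds with $C = 1$. A grid check:

```python
import math

def tail_Xn(n: int, x: float) -> float:
    # P(|X_n| > x) for X_n exponential with rate n
    return math.exp(-n * x)

def tail_X(x: float) -> float:
    # P(|X| > x) for X exponential with rate 1
    return math.exp(-x)

def dominated(xs, n_max: int = 50, C: float = 1.0) -> bool:
    # check sup_n P(|X_n| > x) <= C * P(|X| > x) on a grid of x values
    return all(max(tail_Xn(n, x) for n in range(1, n_max + 1)) <= C * tail_X(x)
               for x in xs)
```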

The following exponential inequality for m-NA random variables can be proved by means of Theorem 3 in Shao [19] and the proof of Lemma 2 in Hu et al. [27]. Here we omit the details.

Lemma 2.1 Let $\{X_n, n \ge 1\}$ be a sequence of m-NA random variables with zero means and finite second moments. Let $S_j = \sum_{k=1}^{j}X_k$ and $B_n = \sum_{k=1}^{n}EX_k^{2}$. Then for all $n \ge m$, $x > 0$ and $a > 0$,

$$P\Big(\max_{1\le j\le n}|S_j| \ge x\Big) \le 2mP\Big(\max_{1\le k\le n}|X_k| > a\Big) + 4m\exp\Big\{-\frac{x^{2}}{8m^{2}B_n}\Big\} + 4m\Big\{\frac{mB_n}{4(xa + mB_n)}\Big\}^{x/(12ma)}. \qquad (2.1)$$

Remark 2.1 Since $e^{-x} \le (1 + x)^{-1}$ for $x \ge 0$, we get, for $x > 0$ and $a > 0$,

$$\exp\Big\{-\frac{x^{2}}{8m^{2}B_n}\Big\} = \Big[\exp\Big\{-\frac{3xa}{2mB_n}\Big\}\Big]^{x/(12ma)} \le \Big(1 + \frac{3xa}{2mB_n}\Big)^{-x/(12ma)}.$$

Noting that

$$\Big\{\frac{mB_n}{4(xa + mB_n)}\Big\}^{x/(12ma)} = \Big[4\Big(1 + \frac{xa}{mB_n}\Big)\Big]^{-x/(12ma)} \le \Big(1 + \frac{3xa}{2mB_n}\Big)^{-x/(12ma)},$$

it follows from (2.1) that

$$P\Big(\max_{1\le j\le n}|S_j| \ge x\Big) \le 2mP\Big(\max_{1\le k\le n}|X_k| > a\Big) + 8m\Big(1 + \frac{3xa}{2mB_n}\Big)^{-x/(12ma)}. \qquad (2.2)$$
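The two elementary bounds above hold for all positive $x$, $a$, $B_n$ and integers $m \ge 1$; the following sketch (ours, not part of the paper) checks them numerically on a grid:

```python
import math

def check_remark_2_1(x: float, a: float, m: int, B: float) -> bool:
    # shared exponent x/(12ma) and the common upper bound appearing in (2.2)
    e = x / (12 * m * a)
    bound = (1 + 3 * x * a / (2 * m * B)) ** (-e)
    exp_term = math.exp(-x * x / (8 * m * m * B))     # second term of (2.1), without the factor 4m
    geo_term = (m * B / (4 * (x * a + m * B))) ** e   # third term of (2.1), without the factor 4m
    # both terms are dominated by the common bound (tiny tolerance for rounding)
    return exp_term <= bound * (1 + 1e-12) and geo_term <= bound * (1 + 1e-12)
```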

Now we present a Rosenthal-type inequality for maximum partial sums of m-NA random variables, which is the crucial tool in the proof of our main results.

Lemma 2.2 Let $\{X_n, n \ge 1\}$ be a sequence of m-NA random variables with mean zero and $E|X_k|^{q} < \infty$ for every $1 \le k \le n$. Let $S_j = \sum_{k=1}^{j}X_k$, $1 \le j \le n$. Then for $q \ge 2$ there exists a positive constant $C$ depending only on $q$ such that

$$E\max_{1\le j\le n}|S_j|^{q} \le C\Big\{\sum_{k=1}^{n}E|X_k|^{q} + \Big(\sum_{k=1}^{n}EX_k^{2}\Big)^{q/2}\Big\}. \qquad (2.3)$$

Proof Let $B_n = \sum_{k=1}^{n}EX_k^{2}$. Note that, for any random variable $Y$ with $E|Y|^{q} < \infty$,

$$E|Y|^{q} = q\int_0^{\infty}P\big(|Y| > x\big)x^{q-1}\,dx. \qquad (2.4)$$

Taking $a = x/(12mq)$ in (2.2), we have

$$E\max_{1\le j\le n}|S_j|^{q} = q\int_0^{\infty}P\Big(\max_{1\le j\le n}|S_j| > x\Big)x^{q-1}\,dx$$
$$\le 2mq\sum_{k=1}^{n}\int_0^{\infty}P\big(|X_k| > x/(12mq)\big)x^{q-1}\,dx + 8mq\int_0^{\infty}\Big(1 + \frac{x^{2}}{8m^{2}qB_n}\Big)^{-q}x^{q-1}\,dx$$
$$=: A + B.$$

By (2.4), we have $A = 2^{2q+1}3^{q}m^{q+1}q^{q}\sum_{k=1}^{n}E|X_k|^{q}$. Letting $t = x^{2}/(8m^{2}qB_n)$, we obtain

$$B = 2^{2+3q/2}m^{1+q}q^{1+q/2}B_n^{q/2}\int_0^{\infty}t^{q/2-1}(1+t)^{-q}\,dt = 2^{2+3q/2}m^{1+q}q^{1+q/2}B(q/2, q/2)\Big(\sum_{k=1}^{n}EX_k^{2}\Big)^{q/2},$$

where

$$B(\alpha, \beta) = \int_0^{1}t^{\alpha-1}(1-t)^{\beta-1}\,dt = \int_0^{\infty}t^{\alpha-1}(1+t)^{-(\alpha+\beta)}\,dt$$

is the Beta function. Letting $C = \max\{2^{2q+1}3^{q}m^{q+1}q^{q},\ 2^{2+3q/2}m^{1+q}q^{1+q/2}B(q/2, q/2)\}$, we get (2.3). The proof is complete. □
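As a quick sanity check on the Beta-function identity used in the computation of $B$ (a numerical sketch of ours, not part of the paper), one can compare $B(q/2, q/2)$ computed from the Gamma function with the truncated integral $\int_0^{\infty}t^{q/2-1}(1+t)^{-q}\,dt$:

```python
import math

def beta(a: float, b: float) -> float:
    # B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def beta_via_integral(a: float, b: float, steps: int = 200_000, cutoff: float = 200.0) -> float:
    # midpoint rule for int_0^cutoff t^(a-1) (1+t)^(-(a+b)) dt;
    # the tail beyond the cutoff is negligible for a, b >= 1
    h = cutoff / steps
    return sum(((k + 0.5) * h) ** (a - 1) * (1.0 + (k + 0.5) * h) ** (-(a + b)) * h
               for k in range(steps))
```

For example, with $q = 4$ this compares $B(2, 2) = 1/6$ with the numerical integral.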

Lemma 2.3 (Wang et al. [28]) Let $\{X_n, n \ge 1\}$ be a sequence of random variables with $\{X_n\} \prec X$. Then there exists a constant $C$ such that, for all $q > 0$ and $x > 0$,

(i) $E|X_k|^{q}I(|X_k| \le x) \le C\{E|X|^{q}I(|X| \le x) + x^{q}P(|X| > x)\}$;

(ii) $E|X_k|^{q}I(|X_k| > x) \le CE|X|^{q}I(|X| > x)$.

The following lemma, which improves Lemma 2.2 and Lemma 2.3 of Sung [10], plays a key role in the proof of our results.

Lemma 2.4 Let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of constants satisfying $\sum_{i=1}^{n}|a_{ni}|^{\alpha} \le n$ for some $\alpha > 0$. Let $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$ for some $\gamma > 0$. Then

$$I := \sum_{n=2}^{\infty} n^{-1}b_n^{-\alpha}\sum_{i=1}^{n}E|a_{ni}X|^{\alpha}I\big(|a_{ni}X| > b_n\big) \le \begin{cases} CE|X|^{\alpha} & \text{for } \alpha > \gamma,\\ CE|X|^{\alpha}\log|X| & \text{for } \alpha = \gamma,\\ CE|X|^{\gamma} & \text{for } \alpha < \gamma.\end{cases}$$

Proof From $\sum_{i=1}^{n}|a_{ni}|^{\alpha} \le n$ we have $|a_{ni}| \le n^{1/\alpha}$, so $|a_{ni}X| > b_n$ implies $|X| > (\log n)^{1/\gamma}$. Hence

$$I = \sum_{n=2}^{\infty} n^{-2}(\log n)^{-\alpha/\gamma}\sum_{i=1}^{n}E|a_{ni}X|^{\alpha}I\big(|a_{ni}X| > b_n\big)$$
$$\le \sum_{n=2}^{\infty} n^{-2}(\log n)^{-\alpha/\gamma}\sum_{i=1}^{n}|a_{ni}|^{\alpha}E|X|^{\alpha}I\big(|X| > (\log n)^{1/\gamma}\big)$$
$$\le \sum_{n=2}^{\infty} n^{-1}(\log n)^{-\alpha/\gamma}E|X|^{\alpha}I\big(|X| > (\log n)^{1/\gamma}\big)$$
$$= \sum_{n=2}^{\infty} n^{-1}(\log n)^{-\alpha/\gamma}\sum_{m=n}^{\infty}E|X|^{\alpha}I\big(\log m < |X|^{\gamma} \le \log(m+1)\big)$$
$$= \sum_{m=2}^{\infty}E|X|^{\alpha}I\big(\log m < |X|^{\gamma} \le \log(m+1)\big)\sum_{n=2}^{m}n^{-1}(\log n)^{-\alpha/\gamma}.$$

Observing that

$$\sum_{n=2}^{m}n^{-1}(\log n)^{-\alpha/\gamma} \le \begin{cases} C & \text{for } \alpha > \gamma,\\ C\log\log m & \text{for } \alpha = \gamma,\\ C(\log m)^{1-\alpha/\gamma} & \text{for } \alpha < \gamma,\end{cases}$$

we can get

$$I \le \begin{cases} CE|X|^{\alpha} & \text{for } \alpha > \gamma,\\ CE|X|^{\alpha}\log|X| & \text{for } \alpha = \gamma,\\ CE|X|^{\gamma} & \text{for } \alpha < \gamma.\end{cases}$$

The proof of Lemma 2.4 is completed. □

Remark 2.2 Noting that

$$\sum_{n=2}^{\infty} n^{-1}\sum_{i=1}^{n}P\big(|a_{ni}X| > b_n\big) \le \sum_{n=2}^{\infty} n^{-1}b_n^{-\alpha}\sum_{i=1}^{n}E|a_{ni}X|^{\alpha}I\big(|a_{ni}X| > b_n\big),$$

we know that Lemma 2.4 improves Lemma 2.2 and Lemma 2.3 of Sung [10]. In addition, the method used in this paper is novel and much simpler than that in Sung [10].
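The growth rates of $\sum_{n=2}^{m} n^{-1}(\log n)^{-\alpha/\gamma}$ used in the proof of Lemma 2.4 can be observed numerically (a sketch of ours, writing $r = \alpha/\gamma$): for $r > 1$ the partial sums stabilize, while for $r < 1$ they keep growing like $(\log m)^{1-r}$:

```python
import math

def partial_sum(m: int, r: float) -> float:
    # S(m) = sum_{n=2}^{m} 1 / (n * (log n)^r)
    return sum(1.0 / (n * math.log(n) ** r) for n in range(2, m + 1))

# r > 1: bounded, so the tail increment S(20000) - S(2000) is tiny;
# r < 1: divergent, so the same increment stays bounded away from 0
increment_convergent = partial_sum(20_000, 3.0) - partial_sum(2_000, 3.0)
increment_divergent = partial_sum(20_000, 0.5) - partial_sum(2_000, 0.5)
```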

3 Main result

In this section, we state our main results and their proofs.

Theorem 3.1 Let $\{X_n, n \ge 1\}$ be a sequence of m-NA random variables with $\{X_n\} \prec X$, and let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of constants satisfying (1.1) for some $0 < \alpha \le 2$. Let $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$ for some $\gamma > 0$. Furthermore, suppose that $EX_i = 0$ when $1 < \alpha \le 2$. Then the following statements hold:

(i) If $\alpha > \gamma$, then $E|X|^{\alpha} < \infty$ implies (1.2).

(ii) If $\alpha = \gamma$, then $E|X|^{\alpha}\log|X| < \infty$ implies (1.2).

(iii) If $\alpha < \gamma$, then $E|X|^{\gamma} < \infty$ implies (1.2).

Remark 3.1 Since NA implies m-NA, Theorem 3.1 extends Theorem B. Moreover, the proof of Theorem 3.1 differs from that of Theorem 2.1 in Sung [10].

Corollary 3.1 Let $\{X_n, n \ge 1\}$ be a sequence of m-NA random variables with $\{X_n\} \prec X$, and let $\{a_i, 1 \le i \le n\}$ be a sequence of constants satisfying

$$A_{\alpha} = \limsup_{n\to\infty}A_{\alpha,n} < \infty, \qquad A_{\alpha,n}^{\alpha} = \frac{1}{n}\sum_{i=1}^{n}|a_i|^{\alpha}$$

for some $0 < \alpha \le 2$. Let $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$ for some $\gamma > 0$. Furthermore, suppose that $EX_i = 0$ when $1 < \alpha \le 2$. Then

$$b_n^{-1}\sum_{i=1}^{n}a_iX_i \to 0 \quad \text{a.s.}$$

By an argument similar to the proof of Corollary 2.1 in Cai [6], we can prove this corollary. Here we omit the details.
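Corollary 3.1 can be illustrated by simulation (a sketch of ours; independent random variables are trivially NA, hence m-NA). With $a_i = 1$, $\alpha = 2$, $\gamma = 1$, so that $b_n = \sqrt{n}\log n$, the normalized weighted sum should be close to 0 for large $n$:

```python
import math
import random

def normalized_weighted_sum(n: int, alpha: float = 2.0, gamma: float = 1.0,
                            seed: int = 0) -> float:
    # b_n^{-1} sum_{i=1}^{n} a_i X_i with a_i = 1 and X_i i.i.d. N(0, 1);
    # b_n = n^(1/alpha) * (log n)^(1/gamma)
    rng = random.Random(seed)
    s = sum(rng.gauss(0.0, 1.0) for _ in range(n))
    b_n = n ** (1.0 / alpha) * math.log(n) ** (1.0 / gamma)
    return s / b_n
```

By the central limit theorem the ratio is of order $1/\log n$, consistent with the almost sure convergence to 0.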

Theorem 3.2 Assume that the conditions of Theorem 3.1 hold. Then the following statements hold:

(i) If $\alpha > \gamma$, then $E|X|^{\alpha} < \infty$ implies

$$\sum_{n=2}^{\infty} n^{-1}E\Big\{b_n^{-1}\max_{1\le m\le n}\Big|\sum_{i=1}^{m}a_{ni}X_i\Big| - \varepsilon\Big\}_{+}^{\alpha} < \infty \quad \text{for all } \varepsilon > 0. \qquad (3.1)$$

(ii) If $\alpha = \gamma$, then $E|X|^{\alpha}\log|X| < \infty$ implies (3.1).

(iii) If $\alpha < \gamma$, then $E|X|^{\gamma} < \infty$ implies (3.1).

Remark 3.2 Noting that, for every $\varepsilon > 0$,

$$\sum_{n=2}^{\infty} n^{-1}E\Big\{b_n^{-1}\max_{1\le m\le n}\Big|\sum_{i=1}^{m}a_{ni}X_i\Big| - \varepsilon\Big\}_{+}^{\alpha} = \sum_{n=2}^{\infty} n^{-1}\int_0^{\infty}P\Big(b_n^{-1}\max_{1\le m\le n}\Big|\sum_{i=1}^{m}a_{ni}X_i\Big| > \varepsilon + t^{1/\alpha}\Big)dt$$
$$\ge \sum_{n=2}^{\infty} n^{-1}\int_0^{\varepsilon^{\alpha}}P\Big(b_n^{-1}\max_{1\le m\le n}\Big|\sum_{i=1}^{m}a_{ni}X_i\Big| > \varepsilon + t^{1/\alpha}\Big)dt \ge \varepsilon^{\alpha}\sum_{n=2}^{\infty} n^{-1}P\Big(\max_{1\le m\le n}\Big|\sum_{i=1}^{m}a_{ni}X_i\Big| > 2\varepsilon b_n\Big),$$

we see that (3.1) implies (1.2). Therefore, Theorem 3.2 extends and improves Theorem B.

Proof of Theorem 3.1 Without loss of generality, we may assume that $a_{ni} \ge 0$. For fixed $n \ge 1$, let

$$Y_{ni} = -b_nI\big(a_{ni}X_i < -b_n\big) + a_{ni}X_iI\big(|a_{ni}X_i| \le b_n\big) + b_nI\big(a_{ni}X_i > b_n\big),$$
$$Z_{ni} = (a_{ni}X_i + b_n)I\big(a_{ni}X_i < -b_n\big) + (a_{ni}X_i - b_n)I\big(a_{ni}X_i > b_n\big).$$

Then $Y_{ni} + Z_{ni} = a_{ni}X_i$, and it follows from the definition of m-NA and Property 6 of Joag-Dev and Proschan [15] that $\{Y_{ni}, i \ge 1, n \ge 1\}$ is a sequence of m-NA random variables. Then

$$\sum_{n=1}^{\infty} n^{-1}P\Big(\max_{1\le m\le n}\Big|\sum_{i=1}^{m}a_{ni}X_i\Big| > b_n\varepsilon\Big) \le 1 + \sum_{n=2}^{\infty} n^{-1}\sum_{i=1}^{n}P\big(a_{ni}|X_i| > b_n\big) + \sum_{n=2}^{\infty} n^{-1}P\Big(\max_{1\le m\le n}\Big|\sum_{i=1}^{m}Y_{ni}\Big| > b_n\varepsilon\Big)$$
$$=: 1 + H_1 + H_2.$$

By $\{X_n\} \prec X$ and Lemma 2.4, we have

$$H_1 \le C\sum_{n=2}^{\infty} n^{-1}\sum_{i=1}^{n}P\big(a_{ni}|X| > b_n\big) \le C\sum_{n=2}^{\infty} n^{-1}b_n^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| > b_n\big) < \infty.$$

Next we prove $H_2 < \infty$. Note that either $E|X|^{\alpha}\log|X| < \infty$ for $\alpha = \gamma$ or $E|X|^{\gamma} < \infty$ for $\alpha < \gamma$ implies $E|X|^{\alpha} < \infty$. From (1.1), without loss of generality, we may assume that $\sum_{i=1}^{n}a_{ni}^{\alpha} \le n$. We first prove

$$L := b_n^{-1}\max_{1\le m\le n}\Big|\sum_{i=1}^{m}EY_{ni}\Big| \to 0 \quad \text{as } n \to \infty. \qquad (3.2)$$

For $0 < \alpha \le 1$, by Lemma 2.3 and $\sum_{i=1}^{n}a_{ni}^{\alpha} \le n$, we have

$$L \le Cb_n^{-1}\sum_{i=1}^{n}a_{ni}E|X|I\big(a_{ni}|X| \le b_n\big) + C\sum_{i=1}^{n}P\big(a_{ni}|X| > b_n\big)$$
$$\le Cb_n^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| \le b_n\big) + Cb_n^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| > b_n\big)$$
$$\le C(\log n)^{-\alpha/\gamma}E|X|^{\alpha} \to 0 \quad \text{as } n \to \infty.$$

For $1 < \alpha \le 2$, by $EX_i = 0$, $|Z_{ni}| \le a_{ni}|X_i|I(a_{ni}|X_i| > b_n)$, and Lemma 2.3, we have

$$L = b_n^{-1}\max_{1\le m\le n}\Big|\sum_{i=1}^{m}EZ_{ni}\Big| \le b_n^{-1}\sum_{i=1}^{n}a_{ni}E|X_i|I\big(a_{ni}|X_i| > b_n\big)$$
$$\le Cb_n^{-1}\sum_{i=1}^{n}a_{ni}E|X|I\big(a_{ni}|X| > b_n\big) \le Cb_n^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| > b_n\big)$$
$$\le C(\log n)^{-\alpha/\gamma}E|X|^{\alpha} \to 0 \quad \text{as } n \to \infty.$$

Hence (3.2) holds for $0 < \alpha \le 2$. Consequently, for all sufficiently large $n$,

$$\max_{1\le m\le n}\Big|\sum_{i=1}^{m}EY_{ni}\Big| \le b_n\varepsilon/2. \qquad (3.3)$$

Let $q > \max\{2, 2\gamma/\alpha\}$. Then by (3.3), the Markov inequality, and Lemma 2.2, we have

$$H_2 \le \sum_{n=2}^{\infty} n^{-1}P\Big(\max_{1\le m\le n}\Big|\sum_{i=1}^{m}(Y_{ni} - EY_{ni})\Big| > b_n\varepsilon/2\Big) \le C\sum_{n=2}^{\infty} n^{-1}b_n^{-q}E\max_{1\le m\le n}\Big|\sum_{i=1}^{m}(Y_{ni} - EY_{ni})\Big|^{q}$$
$$\le C\sum_{n=2}^{\infty} n^{-1}b_n^{-q}\Big(\sum_{i=1}^{n}E|Y_{ni}|^{2}\Big)^{q/2} + C\sum_{n=2}^{\infty} n^{-1}b_n^{-q}\sum_{i=1}^{n}E|Y_{ni}|^{q} =: H_3 + H_4.$$

First we prove $H_3 < \infty$. By Lemma 2.3, $\alpha \le 2$, $\sum_{i=1}^{n}a_{ni}^{\alpha} \le n$, and $q > 2\gamma/\alpha$, we have

$$H_3 \le C\sum_{n=2}^{\infty} n^{-1}b_n^{-q}\Big(\sum_{i=1}^{n}a_{ni}^{2}E|X|^{2}I\big(a_{ni}|X| \le b_n\big) + b_n^{2}\sum_{i=1}^{n}P\big(a_{ni}|X| > b_n\big)\Big)^{q/2}$$
$$\le C\sum_{n=2}^{\infty} n^{-1}\Big(b_n^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| \le b_n\big) + b_n^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| > b_n\big)\Big)^{q/2}$$
$$\le C\sum_{n=2}^{\infty} n^{-1}(\log n)^{-\alpha q/(2\gamma)}\big(E|X|^{\alpha}\big)^{q/2} < \infty.$$

Next we consider $H_4$. By Lemma 2.3, we have

$$H_4 \le C\sum_{n=2}^{\infty} n^{-1}b_n^{-q}\sum_{i=1}^{n}a_{ni}^{q}E|X|^{q}I\big(a_{ni}|X| \le b_n\big) + C\sum_{n=2}^{\infty} n^{-1}\sum_{i=1}^{n}P\big(a_{ni}|X| > b_n\big) =: H_5 + H_6.$$

Similar to the proof of $H_1 < \infty$, we directly get $H_6 < \infty$. The final task is to prove $H_5 < \infty$. For $j \ge 1$ and $n \ge 2$, let

$$I_{nj} = \big\{1 \le i \le n : n^{1/\alpha}(j+1)^{-1/\alpha} < |a_{ni}| \le n^{1/\alpha}j^{-1/\alpha}\big\}.$$

Then $\{I_{nj}, j \ge 1\}$ are disjoint and, since $\sum_{i=1}^{n}|a_{ni}|^{\alpha} \le n$ implies $|a_{ni}| \le n^{1/\alpha}$, every index $i$ with $a_{ni} \ne 0$ belongs to some $I_{nj}$. Noting that, for all $k \ge 1$,

$$n \ge \sum_{i=1}^{n}|a_{ni}|^{\alpha} = \sum_{j=1}^{\infty}\sum_{i\in I_{nj}}|a_{ni}|^{\alpha} \ge \sum_{j=1}^{\infty}\sharp(I_{nj})\,n(j+1)^{-1} \ge \sum_{j=k}^{\infty}\sharp(I_{nj})\,n(j+1)^{-q/\alpha}(j+1)^{q/\alpha-1}$$
$$\ge (k+1)^{q/\alpha-1}\sum_{j=k}^{\infty}\sharp(I_{nj})\,n(j+1)^{-q/\alpha},$$

we obtain, for all $k \ge 1$,

$$\sum_{j=k}^{\infty}\sharp(I_{nj})j^{-q/\alpha} \le C(k+1)^{1-q/\alpha}. \qquad (3.4)$$

By the definitions of $I_{nj}$ and $b_n$,

$$H_5 = C\sum_{n=2}^{\infty} n^{-1-q/\alpha}(\log n)^{-q/\gamma}\sum_{i=1}^{n}|a_{ni}|^{q}E|X|^{q}I\big(|a_{ni}X| \le n^{1/\alpha}(\log n)^{1/\gamma}\big)$$
$$= C\sum_{n=2}^{\infty} n^{-1-q/\alpha}(\log n)^{-q/\gamma}\sum_{j=1}^{\infty}\sum_{i\in I_{nj}}|a_{ni}|^{q}E|X|^{q}I\big(|a_{ni}X| \le n^{1/\alpha}(\log n)^{1/\gamma}\big)$$
$$\le C\sum_{n=2}^{\infty} n^{-1}(\log n)^{-q/\gamma}\sum_{j=1}^{\infty}\sharp(I_{nj})j^{-q/\alpha}E|X|^{q}I\big(|X| \le (j+1)^{1/\alpha}(\log n)^{1/\gamma}\big)$$
$$\le C\sum_{n=2}^{\infty} n^{-1}(\log n)^{-q/\gamma}\sum_{j=1}^{\infty}\sharp(I_{nj})j^{-q/\alpha}E|X|^{q}I\big(|X| \le (\log n)^{1/\gamma}\big)$$
$$\quad + C\sum_{n=2}^{\infty} n^{-1}(\log n)^{-q/\gamma}\sum_{j=1}^{\infty}\sharp(I_{nj})j^{-q/\alpha}\sum_{k=1}^{j}E|X|^{q}I\big(k^{1/\alpha}(\log n)^{1/\gamma} < |X| \le (k+1)^{1/\alpha}(\log n)^{1/\gamma}\big)$$
$$=: H_5^{*} + H_5^{**}.$$

By (3.4) and $q > 2\gamma/\alpha \ge \gamma$, we have

$$H_5^{*} \le C\sum_{n=2}^{\infty} n^{-1}(\log n)^{-q/\gamma}E|X|^{q}I\big(|X|^{\gamma} \le \log n\big)$$
$$= C\sum_{n=2}^{\infty} n^{-1}(\log n)^{-q/\gamma}\sum_{m=2}^{n}E|X|^{q}I\big(\log(m-1) < |X|^{\gamma} \le \log m\big)$$
$$= C\sum_{m=2}^{\infty}E|X|^{q}I\big(\log(m-1) < |X|^{\gamma} \le \log m\big)\sum_{n=m}^{\infty}n^{-1}(\log n)^{-q/\gamma}$$
$$\le C\sum_{m=2}^{\infty}(\log m)^{1-q/\gamma}E|X|^{q}I\big(\log(m-1) < |X|^{\gamma} \le \log m\big) \le CE|X|^{\gamma} < \infty.$$

By (3.4), we have

$$H_5^{**} = C\sum_{n=2}^{\infty} n^{-1}(\log n)^{-q/\gamma}\sum_{k=1}^{\infty}E|X|^{q}I\big(k^{1/\alpha}(\log n)^{1/\gamma} < |X| \le (k+1)^{1/\alpha}(\log n)^{1/\gamma}\big)\sum_{j=k}^{\infty}\sharp(I_{nj})j^{-q/\alpha}$$
$$\le C\sum_{n=2}^{\infty} n^{-1}(\log n)^{-q/\gamma}\sum_{k=1}^{\infty}(k+1)^{1-q/\alpha}E|X|^{q}I\big(k^{1/\alpha}(\log n)^{1/\gamma} < |X| \le (k+1)^{1/\alpha}(\log n)^{1/\gamma}\big)$$
$$\le C\sum_{n=2}^{\infty} n^{-1}(\log n)^{-\alpha/\gamma}\sum_{k=1}^{\infty}E|X|^{\alpha}I\big(k^{1/\alpha}(\log n)^{1/\gamma} < |X| \le (k+1)^{1/\alpha}(\log n)^{1/\gamma}\big)$$
$$= C\sum_{n=2}^{\infty} n^{-1}(\log n)^{-\alpha/\gamma}E|X|^{\alpha}I\big(|X| > (\log n)^{1/\gamma}\big).$$

Noting that we obtained the following bound in the proof of Lemma 2.4,

$$\sum_{n=2}^{\infty} n^{-1}(\log n)^{-\alpha/\gamma}E|X|^{\alpha}I\big(|X| > (\log n)^{1/\gamma}\big) \le \begin{cases} CE|X|^{\alpha} & \text{for } \alpha > \gamma,\\ CE|X|^{\alpha}\log|X| & \text{for } \alpha = \gamma,\\ CE|X|^{\gamma} & \text{for } \alpha < \gamma,\end{cases}$$

we get $H_5^{**} < \infty$ by combining the assumptions of Theorem 3.1. The proof is completed. □

Remark 3.3 It is easy to see that the proof of $H_5 < \infty$ complements Lemma 2.3 of Sung [9]; in that lemma Sung only proved $H_5 < \infty$ for the case $\alpha = \gamma$. It is worth pointing out that $a_{ni} = 0$ or $|a_{ni}| > 1$ is required in Sung [9], whereas here we do not need this extra condition.

Remark 3.4 Sung [9] proved Theorem 3.1 for the case $\alpha = \gamma$ when $\{X_n, n \ge 1\}$ is a sequence of $\rho^{*}$-mixing random variables. However, he posed an open problem: whether Theorem 3.1 (i.e., Theorem 1.1 in Sung [9]) remains true for $\rho^{*}$-mixing random variables.

The crucial tool in the proof of Theorem 3.1 is the Rosenthal-type inequality for maximum partial sums of m-NA random variables. For $\rho^{*}$-mixing random variables, the Rosenthal-type inequality for maximum partial sums also holds (see Utev and Peligrad [29]). Therefore, the above open problem can be solved by following the method used in the proof of Theorem 3.1.

Proof of Theorem 3.2 For any given $\varepsilon > 0$, we have

$$\sum_{n=2}^{\infty} n^{-1}E\Big\{b_n^{-1}\max_{1\le m\le n}\Big|\sum_{i=1}^{m}a_{ni}X_i\Big| - \varepsilon\Big\}_{+}^{\alpha} = \sum_{n=2}^{\infty} n^{-1}\int_0^{\infty}P\Big(b_n^{-1}\max_{1\le m\le n}\Big|\sum_{i=1}^{m}a_{ni}X_i\Big| > \varepsilon + t^{1/\alpha}\Big)dt$$
$$\le \sum_{n=2}^{\infty} n^{-1}P\Big(\max_{1\le m\le n}\Big|\sum_{i=1}^{m}a_{ni}X_i\Big| > b_n\varepsilon\Big) + \sum_{n=2}^{\infty} n^{-1}\int_1^{\infty}P\Big(\max_{1\le m\le n}\Big|\sum_{i=1}^{m}a_{ni}X_i\Big| > b_nt^{1/\alpha}\Big)dt$$
$$=: I_1 + I_2.$$

Therefore, to prove (3.1), it suffices to show that $I_1 < \infty$ and $I_2 < \infty$. By Theorem 3.1, we directly get $I_1 < \infty$. For all $t \ge 1$, we denote

$$Y_{ni} = -b_nt^{1/\alpha}I\big(a_{ni}X_i < -b_nt^{1/\alpha}\big) + a_{ni}X_iI\big(|a_{ni}X_i| \le b_nt^{1/\alpha}\big) + b_nt^{1/\alpha}I\big(a_{ni}X_i > b_nt^{1/\alpha}\big),$$
$$Z_{ni} = a_{ni}X_i - Y_{ni}.$$

Then

$$I_2 \le \sum_{n=2}^{\infty} n^{-1}\int_1^{\infty}\sum_{i=1}^{n}P\big(a_{ni}|X_i| > b_nt^{1/\alpha}\big)dt + \sum_{n=2}^{\infty} n^{-1}\int_1^{\infty}P\Big(\max_{1\le m\le n}\Big|\sum_{i=1}^{m}Y_{ni}\Big| > b_nt^{1/\alpha}\Big)dt =: I_3 + I_4.$$

Noting that

$$\int_1^{\infty}P\big(a_{ni}|X| > b_nt^{1/\alpha}\big)dt \le b_n^{-\alpha}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| > b_n\big),$$

by $\{X_i\} \prec X$, Lemma 2.4, and the assumptions of Theorem 3.2, we have

$$I_3 \le C\sum_{n=2}^{\infty} n^{-1}\sum_{i=1}^{n}\int_1^{\infty}P\big(a_{ni}|X| > b_nt^{1/\alpha}\big)dt \le C\sum_{n=2}^{\infty} n^{-1}b_n^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| > b_n\big) < \infty.$$

Next we prove $I_4 < \infty$. We first show that

$$J := \sup_{t\ge1}t^{-1/\alpha}b_n^{-1}\max_{1\le m\le n}\Big|\sum_{i=1}^{m}EY_{ni}\Big| \to 0 \quad \text{as } n \to \infty. \qquad (3.5)$$

For $0 < \alpha \le 1$, by Lemma 2.3 and $\sum_{i=1}^{n}a_{ni}^{\alpha} \le n$, we have

$$J \le C\sup_{t\ge1}t^{-1/\alpha}b_n^{-1}\sum_{i=1}^{n}a_{ni}E|X|I\big(a_{ni}|X| \le b_nt^{1/\alpha}\big) + C\sup_{t\ge1}\sum_{i=1}^{n}P\big(a_{ni}|X| > b_nt^{1/\alpha}\big)$$
$$\le C\sup_{t\ge1}t^{-1}b_n^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| \le b_nt^{1/\alpha}\big) + Cb_n^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}$$
$$\le C(\log n)^{-\alpha/\gamma}E|X|^{\alpha} \to 0 \quad \text{as } n \to \infty.$$

For $1 < \alpha \le 2$, by $EX_i = 0$, $|Z_{ni}| \le a_{ni}|X_i|I(a_{ni}|X_i| > b_nt^{1/\alpha})$, and Lemma 2.3, we have

$$J = \sup_{t\ge1}t^{-1/\alpha}b_n^{-1}\max_{1\le m\le n}\Big|\sum_{i=1}^{m}EZ_{ni}\Big| \le C\sup_{t\ge1}t^{-1/\alpha}b_n^{-1}\sum_{i=1}^{n}a_{ni}E|X|I\big(a_{ni}|X| > b_nt^{1/\alpha}\big)$$
$$\le C\sup_{t\ge1}t^{-1}b_n^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| > b_nt^{1/\alpha}\big) \le C(\log n)^{-\alpha/\gamma}E|X|^{\alpha} \to 0 \quad \text{as } n \to \infty.$$

From (3.5) we know that, for all sufficiently large $n$,

$$\max_{1\le m\le n}\Big|\sum_{i=1}^{m}EY_{ni}\Big| \le b_nt^{1/\alpha}/2 \qquad (3.6)$$

holds uniformly for $t \ge 1$. Let $q > \max\{2, 2\gamma/\alpha\}$. Then by (3.6), the Markov inequality, and Lemma 2.2, we have

$$I_4 \le \sum_{n=2}^{\infty} n^{-1}\int_1^{\infty}P\Big(\max_{1\le m\le n}\Big|\sum_{i=1}^{m}(Y_{ni} - EY_{ni})\Big| > b_nt^{1/\alpha}/2\Big)dt$$
$$\le C\sum_{n=2}^{\infty} n^{-1}b_n^{-q}\int_1^{\infty}t^{-q/\alpha}E\max_{1\le m\le n}\Big|\sum_{i=1}^{m}(Y_{ni} - EY_{ni})\Big|^{q}dt$$
$$\le C\sum_{n=2}^{\infty} n^{-1}b_n^{-q}\int_1^{\infty}t^{-q/\alpha}\sum_{i=1}^{n}E|Y_{ni}|^{q}\,dt + C\sum_{n=2}^{\infty} n^{-1}b_n^{-q}\int_1^{\infty}t^{-q/\alpha}\Big(\sum_{i=1}^{n}E|Y_{ni}|^{2}\Big)^{q/2}dt$$
$$=: I_5 + I_6.$$

By Lemma 2.3, $\alpha \le 2$, and $q > 2\gamma/\alpha$, we have

$$I_6 \le C\sum_{n=2}^{\infty} n^{-1}b_n^{-q}\int_1^{\infty}t^{-q/\alpha}\Big(\sum_{i=1}^{n}a_{ni}^{2}E|X|^{2}I\big(a_{ni}|X| \le b_nt^{1/\alpha}\big) + b_n^{2}t^{2/\alpha}\sum_{i=1}^{n}P\big(a_{ni}|X| > b_nt^{1/\alpha}\big)\Big)^{q/2}dt$$
$$\le C\sum_{n=2}^{\infty} n^{-1}\int_1^{\infty}\Big(b_n^{-\alpha}t^{-1}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| \le b_nt^{1/\alpha}\big) + b_n^{-\alpha}t^{-1}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| > b_nt^{1/\alpha}\big)\Big)^{q/2}dt$$
$$\le C\sum_{n=2}^{\infty} n^{-1}\int_1^{\infty}t^{-q/2}\Big(b_n^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}\Big)^{q/2}dt$$
$$\le C\sum_{n=2}^{\infty} n^{-1}(\log n)^{-\alpha q/(2\gamma)}\big(E|X|^{\alpha}\big)^{q/2} < \infty.$$

For $I_5$, we have

$$I_5 \le C\sum_{n=2}^{\infty} n^{-1}b_n^{-q}\sum_{i=1}^{n}\int_1^{\infty}t^{-q/\alpha}a_{ni}^{q}E|X|^{q}I\big(a_{ni}|X| \le b_nt^{1/\alpha}\big)dt + C\sum_{n=2}^{\infty} n^{-1}\sum_{i=1}^{n}\int_1^{\infty}P\big(a_{ni}|X| > b_nt^{1/\alpha}\big)dt$$
$$= C\sum_{n=2}^{\infty} n^{-1}b_n^{-q}\sum_{i=1}^{n}\int_1^{\infty}t^{-q/\alpha}a_{ni}^{q}E|X|^{q}I\big(a_{ni}|X| \le b_n\big)dt$$
$$\quad + C\sum_{n=2}^{\infty} n^{-1}b_n^{-q}\sum_{i=1}^{n}\int_1^{\infty}t^{-q/\alpha}a_{ni}^{q}E|X|^{q}I\big(b_n < a_{ni}|X| \le b_nt^{1/\alpha}\big)dt$$
$$\quad + C\sum_{n=2}^{\infty} n^{-1}\sum_{i=1}^{n}\int_1^{\infty}P\big(a_{ni}|X| > b_nt^{1/\alpha}\big)dt$$
$$=: I_7 + I_8 + I_9.$$

Similar to the proof of $I_3 < \infty$, we get $I_9 < \infty$; similar to the proof of $H_5 < \infty$, we get $I_7 < \infty$. By $q > 2 \ge \alpha$ and the following standard argument,

$$b_n^{-q}\int_1^{\infty}t^{-q/\alpha}a_{ni}^{q}E|X|^{q}I\big(b_n < a_{ni}|X| \le b_nt^{1/\alpha}\big)dt$$
$$\le b_n^{-q}\sum_{m=1}^{\infty}m^{-q/\alpha}a_{ni}^{q}E|X|^{q}I\big(b_n < a_{ni}|X| \le b_n(m+1)^{1/\alpha}\big)$$
$$\le b_n^{-q}\sum_{m=1}^{\infty}m^{-q/\alpha}\sum_{s=1}^{m}a_{ni}^{q}E|X|^{q}I\big(b_ns^{1/\alpha} < a_{ni}|X| \le b_n(s+1)^{1/\alpha}\big)$$
$$= b_n^{-q}\sum_{s=1}^{\infty}a_{ni}^{q}E|X|^{q}I\big(b_ns^{1/\alpha} < a_{ni}|X| \le b_n(s+1)^{1/\alpha}\big)\sum_{m=s}^{\infty}m^{-q/\alpha}$$
$$\le Cb_n^{-q}\sum_{s=1}^{\infty}s^{1-q/\alpha}a_{ni}^{q}E|X|^{q}I\big(b_ns^{1/\alpha} < a_{ni}|X| \le b_n(s+1)^{1/\alpha}\big)$$
$$\le Cb_n^{-\alpha}\sum_{s=1}^{\infty}a_{ni}^{\alpha}E|X|^{\alpha}I\big(b_ns^{1/\alpha} < a_{ni}|X| \le b_n(s+1)^{1/\alpha}\big)$$
$$\le Cb_n^{-\alpha}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| > b_n\big).$$

Hence by Lemma 2.4, we have

$$I_8 \le C\sum_{n=2}^{\infty} n^{-1}b_n^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\big(a_{ni}|X| > b_n\big) < \infty.$$

The proof is completed. □

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Author details

1College of Mathematics and Computer Science, Tongling University, Tongling, 244000, China. 2Department of Mathematics, National Tsing Hua University, Hsinchu, 300, Taiwan, ROC. 3Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan S4S 0A2, Canada.

Acknowledgements

The authors are grateful to the referee for carefully reading the manuscript and for offering comments which enabled them to improve the paper. The research of Y. Wu was supported by the Natural Science Foundation of Anhui Province (1308085MA03, 1408085MA03), the Key Grant Project for Backup Academic Leaders of Tongling University (2014tlxyxs21) and the Key NSF of Anhui Educational Committee (KJ2014A255).

Received: 23 December 2014 Accepted: 28 May 2015 Published online: 17 June 2015

References

1. Cuzick, J: A strong law for weighted sums of i.i.d. random variables. J. Theor. Probab. 8, 625-641 (1995)
2. Wu, WB: On the strong convergence of a weighted sum. Stat. Probab. Lett. 44, 19-22 (1999)
3. Bai, ZD, Cheng, PE: Marcinkiewicz strong laws for linear statistics. Stat. Probab. Lett. 46, 105-112 (2000)
4. Sung, SH: Strong laws for weighted sums of i.i.d. random variables. Stat. Probab. Lett. 52, 413-419 (2001)
5. Chen, PY, Gan, SX: Limiting behavior of weighted sums of i.i.d. random variables. Stat. Probab. Lett. 77, 1589-1599 (2007)
6. Cai, GH: Strong laws for weighted sums of NA random variables. Metrika 68, 323-331 (2008)
7. Wu, QY: Complete convergence for weighted sums of sequences of negatively dependent random variables. J. Probab. Stat. 2011, Article ID 202015 (2011)
8. Zarei, H, Jabbari, H: Complete convergence of weighted sums under negative dependence. Stat. Pap. 52, 413-418 (2011)
9. Sung, SH: On the strong convergence for weighted sums of ρ*-mixing random variables. Stat. Pap. 54, 773-781 (2013)
10. Sung, SH: On the strong convergence for weighted sums of random variables. Stat. Pap. 52, 447-454 (2011)
11. Shen, AT: On the strong convergence rate for weighted sums of arrays of rowwise negatively orthant dependent random variables. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. (2012). doi:10.1007/s13398-012-0067-5
12. Chen, PY, Sung, SH: On the strong convergence for weighted sums of negatively associated random variables. Stat. Probab. Lett. 92, 45-52 (2014)
13. Hsu, PL, Robbins, H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. 33, 25-31 (1947)
14. Chow, YS: On the rate of moment complete convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 16, 177-201 (1988)
15. Joag-Dev, K, Proschan, F: Negative association of random variables with applications. Ann. Stat. 11, 286-295 (1983)
16. Hu, YJ, Ming, RX, Yang, WQ: Large deviations and moderate deviations for m-negatively associated random variables. Acta Math. Sci. 27, 886-896 (2007)
17. Matula, P: A note on the almost sure convergence of sums of negatively dependent random variables. Stat. Probab. Lett. 15, 209-213 (1992)
18. Su, C, Zhao, LC, Wang, YB: Moment inequalities and weak convergence for negatively associated sequences. Sci. China Ser. A 40, 172-182 (1997)
19. Shao, QM: A comparison theorem on maximum inequalities between negatively associated and independent random variables. J. Theor. Probab. 13, 343-356 (2000)
20. Gan, SX, Chen, PY: On the limiting behavior of the maximum partial sums for arrays of rowwise NA random variables. Acta Math. Sci. 27, 283-290 (2007)
21. Fu, KA, Zhang, LX: Precise rates in the law of the logarithm for negatively associated random variables. Comput. Math. Appl. 54, 687-698 (2007)
22. Baek, JI, Choi, IB, Niu, SL: On the complete convergence of weighted sums for arrays of negatively associated variables. J. Korean Stat. Soc. 37, 73-80 (2008)
23. Chen, PY, Hu, TC, Liu, X, Volodin, A: On complete convergence for arrays of rowwise negatively associated random variables. Theory Probab. Appl. 52, 323-328 (2008)
24. Xing, GD: On the exponential inequalities for strictly stationary and negatively associated random variables. J. Stat. Plan. Inference 139, 3453-3460 (2009)
25. Qin, YS, Li, YH: Empirical likelihood for linear models under negatively associated errors. J. Multivar. Anal. 102, 153-163 (2011)
26. Wu, YF: Convergence properties of the maximal partial sums for arrays of rowwise NA random variables. Theory Probab. Appl. 56, 527-535 (2012)
27. Hu, TC, Chiang, CY, Taylor, RL: On complete convergence for arrays of rowwise m-negatively associated random variables. Nonlinear Anal., Theory Methods Appl. 71, e1075-e1081 (2009)
28. Wang, XJ, Li, XQ, Yang, WZ, Hu, SH: On complete convergence for arrays of rowwise weakly dependent random variables. Appl. Math. Lett. 25, 1916-1920 (2012)
29. Utev, S, Peligrad, M: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 16, 101-115 (2003)