# Complete moment convergence for maximal partial sums under NOD setup

J Inequal Appl


Qiu et al. Journal of Inequalities and Applications (2015) 2015:58. DOI 10.1186/s13660-015-0577-8

Journal of Inequalities and Applications (a SpringerOpen Journal)

RESEARCH | Open Access

Dehua Qiu^1, Xiangdong Liu^2* and Pingyan Chen^3

*Correspondence: tliuxd@jnu.edu.cn. ^2 Department of Statistics, Jinan University, Guangzhou, 510630, P.R. China

Full list of author information is available at the end of the article.

Abstract

The sufficient and necessary conditions of complete moment convergence for negatively orthant dependent (NOD) random variables are obtained, which improve and extend well-known results.

MSC: 60F15; 60G50

Keywords: NOD; complete convergence; complete moment convergence


1 Introduction

The concept of complete convergence of a sequence of random variables was introduced by Hsu and Robbins [1] as follows. A sequence $\{U_n, n \ge 1\}$ of random variables converges completely to the constant $\theta$ if

$$\sum_{n=1}^{\infty} P\big(|U_n - \theta| > \varepsilon\big) < \infty \quad \text{for all } \varepsilon > 0.$$

Moreover, they proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. This result has been generalized and extended in several directions; one can refer to [2-13] and so forth.
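As a purely numerical illustration of the Hsu-Robbins statement (this sketch is ours, not from the paper), take i.i.d. $N(0,1)$ summands: the arithmetic mean $U_n = S_n/n$ is $N(0, 1/n)$, so $P(|U_n| > \varepsilon) = \operatorname{erfc}(\varepsilon\sqrt{n/2})$, and the partial sums of the series stabilize quickly, consistent with the series being finite.

```python
import math

# Numerical illustration (ours, not from the paper): for i.i.d. N(0, 1)
# summands the arithmetic mean U_n = S_n / n is N(0, 1/n), hence
#     P(|U_n| > eps) = erfc(eps * sqrt(n / 2)),
# and complete convergence says sum_n P(|U_n| > eps) < infinity.

def tail_prob(n: int, eps: float) -> float:
    """P(|U_n| > eps) when U_n ~ N(0, 1/n)."""
    return math.erfc(eps * math.sqrt(n / 2.0))

def partial_sum(big_n: int, eps: float) -> float:
    """Partial sum of the Hsu-Robbins series up to big_n terms."""
    return sum(tail_prob(n, eps) for n in range(1, big_n + 1))

if __name__ == "__main__":
    # The partial sums stabilize, consistent with complete convergence.
    for big_n in (10, 100, 1000):
        print(big_n, partial_sum(big_n, 0.5))
```

The exponential decay of the Gaussian tail is what makes the series summable; without a finite variance this decay, and the theorem, can fail.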

When $\{X_n, n \ge 1\}$ is a sequence of i.i.d. random variables with mean zero, Chow [14] first investigated complete moment convergence, which is more precise than complete convergence. He obtained the following result.

Theorem A Let $\{X_n, n \ge 1\}$ be a sequence of i.i.d. random variables with $EX_1 = 0$. For $1 \le p < 2$ and $r \ge 1$, if $E\{|X_1|^{rp} + |X_1|\log(1+|X_1|)\} < \infty$, then

$$\sum_{n=1}^{\infty} n^{r-2-1/p} E\Big\{\max_{1\le k\le n}|S_k| - \varepsilon n^{1/p}\Big\}_+ < \infty \quad \text{for all } \varepsilon > 0,$$

where $S_k = \sum_{i=1}^{k} X_i$ and (as in the following) $x_+ = \max\{0, x\}$.
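The quantity $E(Y - \varepsilon)_+$ appearing in Theorem A equals the tail-probability integral $\int_\varepsilon^\infty P(Y > t)\,dt$, and this identity is used repeatedly below. A quick numerical sanity check (our own illustration, not from the paper) for $Y \sim \mathrm{Exp}(1)$, where both sides equal $e^{-\varepsilon}$:

```python
import math

# Numerical check (our illustration, not from the paper) of the identity
#     E(Y - eps)_+ = int_eps^infty P(Y > t) dt.
# For Y ~ Exp(1): P(Y > t) = exp(-t), and the closed form is exp(-eps).

def moment_plus_via_tail(eps: float, upper: float = 50.0, steps: int = 200000) -> float:
    """Midpoint-rule approximation of int_eps^upper P(Y > t) dt, Y ~ Exp(1)."""
    h = (upper - eps) / steps
    return h * sum(math.exp(-(eps + (i + 0.5) * h)) for i in range(steps))

if __name__ == "__main__":
    for eps in (0.0, 0.5, 1.0):
        print(eps, moment_plus_via_tail(eps), math.exp(-eps))
```

The truncation at `upper = 50` is harmless here since the exponential tail beyond it is of order $e^{-50}$.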

Theorem A has been generalized and extended in several directions. One can refer to Wang and Su [15] and Chen [16] for random elements taking values in a Banach space, Wang and Zhao [17] for NA random variables, Chen et al. [5] and Li and Zhang [18] for moving-average processes based on NA random variables, Chen and Wang [19] for φ-mixing random variables, and Qiu and Chen [20] for weighted sums of arrays of rowwise NA random variables.

© 2015 Qiu et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

The aim of this paper is to extend and improve Theorem A to negatively orthant dependent (NOD) random variables; both sufficient and necessary conditions are obtained. In fact, this paper continues the work of Qiu et al. [11], in which complete convergence was obtained for NOD sequences. It is worth pointing out that Sung [21] has discussed complete moment convergence for NOD random variables, but the main result in our paper is more precise and the method is completely different.

The concepts of negatively associated (NA) and negatively orthant dependent (NOD) random variables were introduced by Joag-Dev and Proschan [22] in the following way.

Definition 1.1 A finite family of random variables $\{X_i, 1 \le i \le n\}$ is said to be negatively associated (NA) if for every pair of disjoint nonempty subsets $A_1, A_2$ of $\{1, 2, \ldots, n\}$,

$$\operatorname{Cov}\big(f_1(X_i, i \in A_1),\, f_2(X_j, j \in A_2)\big) \le 0,$$

where $f_1$ and $f_2$ are coordinatewise nondecreasing functions such that the covariance exists. An infinite sequence $\{X_n, n \ge 1\}$ is NA if every finite subfamily is NA.

Definition 1.2 A finite family of random variables $\{X_i, 1 \le i \le n\}$ is said to be

(a) negatively upper orthant dependent (NUOD) if

$$P(X_i > x_i, i = 1, 2, \ldots, n) \le \prod_{i=1}^{n} P(X_i > x_i), \quad \forall x_1, x_2, \ldots, x_n \in \mathbb{R},$$

(b) negatively lower orthant dependent (NLOD) if

$$P(X_i \le x_i, i = 1, 2, \ldots, n) \le \prod_{i=1}^{n} P(X_i \le x_i), \quad \forall x_1, x_2, \ldots, x_n \in \mathbb{R},$$

(c) negatively orthant dependent (NOD) if they are both NUOD and NLOD.

A sequence of random variables $\{X_n, n \ge 1\}$ is said to be NOD if for each $n$, $X_1, X_2, \ldots, X_n$ are NOD.
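The product inequalities in Definition 1.2 can be verified by direct enumeration for small discrete laws. The sketch below (our own illustration, not from the paper, with hypothetical two-point joint laws) confirms that an antithetic pair is NOD while a comonotone pair violates NUOD:

```python
import itertools

# Enumeration check (our illustration, not from the paper): the antithetic
# pair (X1, X2), uniform on {(0, 1), (1, 0)}, satisfies the NUOD and NLOD
# product inequalities of Definition 1.2, while the comonotone pair, uniform
# on {(0, 0), (1, 1)}, violates NUOD. Joint laws are hypothetical examples
# given as dicts mapping outcomes to probabilities.

ANTITHETIC = {(0, 1): 0.5, (1, 0): 0.5}
COMONOTONE = {(0, 0): 0.5, (1, 1): 0.5}

def is_nod(joint, grid=(-0.5, 0.0, 0.5, 1.0)):
    """Check the NUOD and NLOD inequalities on a grid of thresholds."""
    for xs in itertools.product(grid, repeat=2):
        # Joint upper/lower orthant probabilities.
        p_up = sum(p for v, p in joint.items() if all(vi > xi for vi, xi in zip(v, xs)))
        p_lo = sum(p for v, p in joint.items() if all(vi <= xi for vi, xi in zip(v, xs)))
        # Products of the marginal orthant probabilities.
        prod_up = prod_lo = 1.0
        for i, x in enumerate(xs):
            prod_up *= sum(p for v, p in joint.items() if v[i] > x)
            prod_lo *= sum(p for v, p in joint.items() if v[i] <= x)
        if p_up > prod_up + 1e-12 or p_lo > prod_lo + 1e-12:
            return False
    return True

if __name__ == "__main__":
    print(is_nod(ANTITHETIC), is_nod(COMONOTONE))
```

Since the variables here take only the values 0 and 1, a small threshold grid straddling those values already exercises every orthant event.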

Obviously, every sequence of independent random variables is NOD. Joag-Dev and Proschan [22] pointed out that NA implies NOD, while neither NUOD nor NLOD implies NA; they gave an example which is NOD but not NA. So NOD is strictly wider than NA. For more convergence properties of NOD random variables, one can refer to [2, 11, 20, 23-26], and so forth. In order to prove our main result, we need the following lemmas.

Lemma 1.1 (Bozorgnia et al. [23]) Let $X_1, X_2, \ldots, X_n$ be NOD random variables.

(i) If $f_1, f_2, \ldots, f_n$ are Borel functions all of which are monotone increasing (or all monotone decreasing), then $f_1(X_1), f_2(X_2), \ldots, f_n(X_n)$ are NOD random variables.

(ii) $E \prod_{i=1}^{n} (X_i)_+ \le \prod_{i=1}^{n} E(X_i)_+$, $\forall n \ge 2$.

Lemma 1.2 (Asadian et al. [27]) For any $v \ge 2$, there is a positive constant $C(v)$ depending only on $v$ such that if $\{X_n, n \ge 1\}$ is a sequence of NOD random variables with $EX_n = 0$ for every $n \ge 1$, then for all $n \ge 1$,

$$E|S_n|^v \le C(v)\left\{\sum_{i=1}^{n} E|X_i|^v + \left(\sum_{i=1}^{n} EX_i^2\right)^{v/2}\right\}.$$

By Lemma 1.2 and an argument similar to that of Theorem 2.3.1 of Stout [28], we obtain the following lemma.

Lemma 1.3 For any $v \ge 2$, there is a positive constant $C(v)$ depending only on $v$ such that if $\{X_n, n \ge 1\}$ is a sequence of NOD random variables with $EX_n = 0$ for every $n \ge 1$, then for all $n \ge 1$,

$$E\max_{1\le k\le n}|S_k|^v \le C(v)\big(\log(4n)\big)^v\left\{\sum_{i=1}^{n} E|X_i|^v + \left(\sum_{i=1}^{n} EX_i^2\right)^{v/2}\right\},$$

where $\log x = \max\{1, \ln x\}$ and $\ln x$ denotes the natural logarithm of $x$.

Lemma 1.4 (Kuczmaszewska [8]) Let $\beta$ be a positive constant, $\{X_n, n \ge 1\}$ be a sequence of random variables, and $X$ be a random variable. Suppose that

$$\sum_{i=1}^{n} P(|X_i| > x) \le D\, n P(|X| > x), \quad \forall x \ge 0, \forall n \ge 1, \qquad (1.1)$$

holds for some $D > 0$. Then there exists a constant $C > 0$ depending only on $D$ and $\beta$ such that

(i) if $E|X|^\beta < \infty$, then $\frac{1}{n}\sum_{j=1}^{n} E|X_j|^\beta \le C E|X|^\beta$;

(ii) $\frac{1}{n}\sum_{j=1}^{n} E|X_j|^\beta I(|X_j| \le x) \le C\{E|X|^\beta I(|X| \le x) + x^\beta P(|X| > x)\}$;

(iii) $\frac{1}{n}\sum_{j=1}^{n} E|X_j|^\beta I(|X_j| > x) \le C E|X|^\beta I(|X| > x)$.

Throughout this paper, $C$ denotes a positive constant whose value may change from one place to another.

2 Main results and proofs

Theorem 2.1 Let $\gamma > 0$, $\alpha > 1/2$, $p > 0$, $\alpha p > 1$. Let $\{X_n, n \ge 1\}$ be a sequence of NOD random variables and $X$ be a random variable, possibly defined on a different space, satisfying condition (1.1). Moreover, assume that $EX_n = 0$ for all $n \ge 1$ in the case $\alpha \le 1$. Suppose that

$$\begin{cases} E|X|^p < \infty, & \gamma < p, \\ E|X|^p \log(1+|X|) < \infty, & \gamma = p, \\ E|X|^{\gamma} < \infty, & \gamma > p. \end{cases} \qquad (2.1)$$

Then the following statements hold:

$$\sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} E\Big\{\max_{1\le k\le n}|S_k| - \varepsilon n^{\alpha}\Big\}_+^{\gamma} < \infty, \quad \forall \varepsilon > 0, \qquad (2.2)$$

$$\sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} E\Big\{\max_{1\le k\le n}\big|S_n^{(k)}\big| - \varepsilon n^{\alpha}\Big\}_+^{\gamma} < \infty, \quad \forall \varepsilon > 0, \qquad (2.3)$$

$$\sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} E\Big\{\max_{1\le k\le n}|X_k| - \varepsilon n^{\alpha}\Big\}_+^{\gamma} < \infty, \quad \forall \varepsilon > 0, \qquad (2.4)$$

$$\sum_{n=1}^{\infty} n^{\alpha p-2} E\Big\{\sup_{k\ge n} k^{-\alpha}|S_k| - \varepsilon\Big\}_+^{\gamma} < \infty, \quad \forall \varepsilon > 0, \qquad (2.5)$$

$$\sum_{n=1}^{\infty} n^{\alpha p-2} E\Big\{\sup_{k\ge n} k^{-\alpha}|X_k| - \varepsilon\Big\}_+^{\gamma} < \infty, \quad \forall \varepsilon > 0, \qquad (2.6)$$

where $S_n = \sum_{i=1}^{n} X_i$ and $S_n^{(k)} = S_n - X_k$, $k = 1, 2, \ldots, n$.

Proof Firstly, we prove (2.2). Note that for all $\varepsilon > 0$,

$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} E\Big\{\max_{1\le k\le n}|S_k| - \varepsilon n^{\alpha}\Big\}_+^{\gamma}
&= \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_0^{\infty} P\Big(\max_{1\le k\le n}|S_k| - \varepsilon n^{\alpha} > t^{1/\gamma}\Big)\,dt \\
&= \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_0^{n^{\gamma\alpha}} P\Big(\max_{1\le k\le n}|S_k| - \varepsilon n^{\alpha} > t^{1/\gamma}\Big)\,dt \\
&\quad + \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} P\Big(\max_{1\le k\le n}|S_k| - \varepsilon n^{\alpha} > t^{1/\gamma}\Big)\,dt \\
&\le \sum_{n=1}^{\infty} n^{\alpha p-2} P\Big(\max_{1\le k\le n}|S_k| > \varepsilon n^{\alpha}\Big) \\
&\quad + \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} P\Big(\max_{1\le k\le n}|S_k| > t^{1/\gamma}\Big)\,dt.
\end{aligned}$$

Hence, by Theorem 2.1 of Qiu et al. [11], in order to prove (2.2) it is enough to show that

$$\sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} P\Big(\max_{1\le k\le n}|S_k| > t^{1/\gamma}\Big)\,dt < \infty.$$

Choose $q$ such that $1/(\alpha p) < q < 1$. For $j \ge 1$, $t > 0$, let

$$\begin{aligned}
X_j^{(t,1)} &= -t^{q/\gamma} I\big(X_j < -t^{q/\gamma}\big) + X_j I\big(|X_j| \le t^{q/\gamma}\big) + t^{q/\gamma} I\big(X_j > t^{q/\gamma}\big), \\
X_j^{(t,2)} &= \big(X_j - t^{q/\gamma}\big) I\big(t^{q/\gamma} < X_j \le t^{q/\gamma} + t^{1/\gamma}\big) + t^{1/\gamma} I\big(X_j > t^{q/\gamma} + t^{1/\gamma}\big), \\
X_j^{(t,3)} &= \big(X_j - t^{q/\gamma} - t^{1/\gamma}\big) I\big(X_j > t^{q/\gamma} + t^{1/\gamma}\big), \\
X_j^{(t,4)} &= \big(X_j + t^{q/\gamma}\big) I\big(-t^{q/\gamma} - t^{1/\gamma} \le X_j < -t^{q/\gamma}\big) - t^{1/\gamma} I\big(X_j < -t^{q/\gamma} - t^{1/\gamma}\big), \\
X_j^{(t,5)} &= \big(X_j + t^{q/\gamma} + t^{1/\gamma}\big) I\big(X_j < -t^{q/\gamma} - t^{1/\gamma}\big),
\end{aligned}$$

then $X_j = \sum_{l=1}^{5} X_j^{(t,l)}$. Note that

$$\begin{aligned}
&\sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} P\Big(\max_{1\le k\le n}|S_k| > t^{1/\gamma}\Big)\,dt \\
&\quad \le \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} P\bigg(\max_{1\le k\le n}\bigg|\sum_{j=1}^{k} X_j^{(t,1)}\bigg| > t^{1/\gamma}/5\bigg)\,dt \\
&\qquad + \sum_{l=2}^{3} \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} P\bigg(\sum_{j=1}^{n} X_j^{(t,l)} > t^{1/\gamma}/5\bigg)\,dt \\
&\qquad + \sum_{l=4}^{5} \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} P\bigg(-\sum_{j=1}^{n} X_j^{(t,l)} > t^{1/\gamma}/5\bigg)\,dt \\
&\quad =: \sum_{l=1}^{5} I_l.
\end{aligned}$$
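The decomposition above rests on the algebraic fact that the five truncation pieces sum back to $X_j$. A small consistency check (our own sketch, not from the paper), writing $a = t^{q/\gamma}$ and $b = t^{1/\gamma}$ for the two truncation levels:

```python
# Consistency check (our sketch, not from the paper) that the five-part
# truncation really decomposes x: with levels a = t^{q/gamma} and
# b = t^{1/gamma}, the pieces below sum back to x for every real x.

def ind(cond: bool) -> float:
    """Indicator function I(cond)."""
    return 1.0 if cond else 0.0

def truncation_parts(x: float, a: float, b: float):
    """The five pieces X^{(t,1)}, ..., X^{(t,5)} with levels 0 < a < a + b."""
    p1 = -a * ind(x < -a) + x * ind(abs(x) <= a) + a * ind(x > a)
    p2 = (x - a) * ind(a < x <= a + b) + b * ind(x > a + b)
    p3 = (x - a - b) * ind(x > a + b)
    p4 = (x + a) * ind(-a - b <= x < -a) - b * ind(x < -a - b)
    p5 = (x + a + b) * ind(x < -a - b)
    return p1, p2, p3, p4, p5

if __name__ == "__main__":
    # Sample points from every regime of the truncation.
    for x in (-10.0, -3.5, -1.5, 0.0, 0.2, 1.5, 3.5, 10.0):
        parts = truncation_parts(x, 1.0, 3.0)
        print(x, parts, sum(parts))
```

Each piece is a monotone function of $X_j$, which is what allows Lemma 1.1(i) to preserve the NOD property after truncation.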

Therefore, to prove (2.2) it suffices to show that $I_l < \infty$ for $l = 1, 2, 3, 4, 5$. For $I_1$, we first prove that

$$\sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \max_{1\le k\le n}\bigg|\sum_{j=1}^{k} EX_j^{(t,1)}\bigg| \to 0, \quad n \to \infty. \qquad (2.7)$$

When $\alpha \le 1$: since $\alpha p > 1$ implies $p > 1$, by Lemma 1.4 and $EX_j = 0$, $j \ge 1$, we have

$$\begin{aligned}
\sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \max_{1\le k\le n}\bigg|\sum_{j=1}^{k} EX_j^{(t,1)}\bigg|
&\le \sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \sum_{j=1}^{n} E\big\{|X_j| I\big(|X_j| > t^{q/\gamma}\big) + t^{q/\gamma} I\big(|X_j| > t^{q/\gamma}\big)\big\} \\
&\le 2 \sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \sum_{j=1}^{n} E|X_j| I\big(|X_j| > t^{q/\gamma}\big) \\
&\le C n \sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} E|X| I\big(|X| > t^{q/\gamma}\big) \\
&\le C n^{1-\alpha} E|X| I\big(|X| > n^{\alpha q}\big) \\
&\le C n^{1-\alpha p q - \alpha(1-q)} E|X|^p \to 0, \quad n \to \infty.
\end{aligned}$$

When $\alpha > 1$ and $p \ge 1$,

$$\sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \max_{1\le k\le n}\bigg|\sum_{j=1}^{k} EX_j^{(t,1)}\bigg| \le \sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \sum_{j=1}^{n} E|X_j| \le C n \sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} E|X| \le C n^{1-\alpha} \to 0, \quad n \to \infty.$$

When $\alpha > 1$ and $p < 1$,

$$\begin{aligned}
\sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \max_{1\le k\le n}\bigg|\sum_{j=1}^{k} EX_j^{(t,1)}\bigg|
&\le \sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \sum_{j=1}^{n} E\big\{|X_j| I\big(|X_j| \le t^{q/\gamma}\big) + t^{q/\gamma} I\big(|X_j| > t^{q/\gamma}\big)\big\} \\
&\le \sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \sum_{j=1}^{n} t^{q(1-p)/\gamma} E|X_j|^p \le C n \sup_{t \ge n^{\gamma\alpha}} t^{(q(1-p)-1)/\gamma} E|X|^p \\
&\le C n^{1-\alpha p q - (1-q)\alpha} \to 0, \quad n \to \infty.
\end{aligned}$$

Therefore (2.7) holds. By (2.7), in order to prove $I_1 < \infty$ it is enough to show that

$$I_1^* := \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} P\bigg(\max_{1\le k\le n}\bigg|\sum_{j=1}^{k}\big(X_j^{(t,1)} - EX_j^{(t,1)}\big)\bigg| > t^{1/\gamma}/10\bigg)\,dt < \infty.$$

Fix any $v \ge 2$ with

$$v > \max\bigg\{\frac{p}{1-q},\ \frac{\gamma}{1-q},\ \frac{2\gamma}{2-(2-p)q},\ \frac{2(\alpha p - 1)}{2\alpha(1-q) + (\alpha p q - 1)},\ \frac{\alpha p - 1}{\alpha - 1/2}\bigg\}.$$

By Markov's inequality, Lemma 1.1, Lemma 1.3, and the $C_r$-inequality, we have

$$\begin{aligned}
I_1^* &\le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma} \big(\log(4n)\big)^v \bigg\{\sum_{j=1}^{n} E\big|X_j^{(t,1)}\big|^v + \bigg(\sum_{j=1}^{n} E\big(X_j^{(t,1)}\big)^2\bigg)^{v/2}\bigg\}\,dt \\
&\le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \big(\log(4n)\big)^v \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma} \sum_{j=1}^{n}\big\{E|X_j|^v I\big(|X_j| \le t^{q/\gamma}\big) + t^{qv/\gamma} P\big(|X_j| > t^{q/\gamma}\big)\big\}\,dt \\
&\quad + C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \big(\log(4n)\big)^v \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma} \bigg\{\sum_{j=1}^{n}\Big(EX_j^2 I\big(|X_j| \le t^{q/\gamma}\big) + t^{2q/\gamma} P\big(|X_j| > t^{q/\gamma}\big)\Big)\bigg\}^{v/2}\,dt \\
&=: I_{11} + I_{12}.
\end{aligned}$$

Note that

$$I_{11} \le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-1} \big(\log(4n)\big)^v \int_{n^{\gamma\alpha}}^{\infty} t^{-(1-q)v/\gamma}\,dt \le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha(1-q)v - 1} \big(\log(4n)\big)^v < \infty.$$

If $\max\{p, \gamma\} < 2$, by Lemma 1.4 we have

$$\begin{aligned}
I_{12} &\le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \big(\log(4n)\big)^v \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma} \bigg\{t^{(2-p)q/\gamma} \sum_{j=1}^{n} E|X_j|^p\bigg\}^{v/2}\,dt \\
&\le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2+v/2} \big(\log(4n)\big)^v \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma + (2-p)qv/(2\gamma)} \big(E|X|^p\big)^{v/2}\,dt \\
&\le C \sum_{n=1}^{\infty} n^{\alpha p - 2 - [\alpha(1-q) + (\alpha p q - 1)/2]v} \big(\log(4n)\big)^v < \infty.
\end{aligned}$$

If $\max\{p, \gamma\} \ge 2$, note that $E|X|^2 < \infty$; by Lemma 1.4 we have

$$I_{12} \le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2+v/2} \big(\log(4n)\big)^v \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma}\,dt \le C \sum_{n=1}^{\infty} n^{\alpha p - 2 - (\alpha - 1/2)v} \big(\log(4n)\big)^v < \infty.$$

Therefore $I_1^* < \infty$, so $I_1 < \infty$. For $I_2$, we first prove

$$\sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \sum_{j=1}^{n} EX_j^{(t,2)} \to 0, \quad n \to \infty. \qquad (2.8)$$

When $p \ge 1$, we have by Lemma 1.4 that

$$\begin{aligned}
\sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \sum_{j=1}^{n} EX_j^{(t,2)}
&\le \sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \sum_{j=1}^{n} \big\{EX_j I\big(X_j > t^{q/\gamma}\big) + t^{1/\gamma} P\big(X_j > t^{q/\gamma} + t^{1/\gamma}\big)\big\} \\
&\le \sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \sum_{j=1}^{n} \big\{EX_j I\big(X_j > t^{q/\gamma}\big) + EX_j I\big(X_j > t^{q/\gamma} + t^{1/\gamma}\big)\big\} \\
&\le C n \sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} E|X| I\big(|X| > t^{q/\gamma}\big) \le C n^{1-\alpha} E|X| I\big(|X| > n^{\alpha q}\big) \\
&\le C n^{1-\alpha p q - \alpha(1-q)} E|X|^p \to 0, \quad n \to \infty.
\end{aligned}$$

When $0 < p < 1$, we have by Lemma 1.4

$$\begin{aligned}
\sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \sum_{j=1}^{n} EX_j^{(t,2)}
&\le \sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \sum_{j=1}^{n} \big\{E|X_j| I\big(|X_j| \le 2t^{1/\gamma}\big) + t^{1/\gamma} P\big(|X_j| > 2t^{q/\gamma}\big)\big\} \\
&\le C n \sup_{t \ge n^{\gamma\alpha}} t^{-1/\gamma} \big\{E|X| I\big(|X| \le 2t^{1/\gamma}\big) + 2t^{1/\gamma} P\big(|X| > 2t^{1/\gamma}\big) + t^{1/\gamma} P\big(|X| > 2t^{q/\gamma}\big)\big\} \\
&\le C n \sup_{t \ge n^{\gamma\alpha}} \big\{t^{-p/\gamma} E|X|^p + t^{-pq/\gamma} E|X|^p\big\} \\
&\le C n^{1-\alpha p q} \to 0, \quad n \to \infty.
\end{aligned}$$

Therefore (2.8) holds. By (2.8), in order to prove $I_2 < \infty$ it is enough to show that

$$I_2^* := \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} P\bigg(\bigg|\sum_{j=1}^{n}\big(X_j^{(t,2)} - EX_j^{(t,2)}\big)\bigg| > t^{1/\gamma}/10\bigg)\,dt < \infty.$$

Fix any $v \ge 2$ (to be specified later). By Markov's inequality, Lemma 1.1, Lemma 1.2, the $C_r$-inequality, Jensen's inequality, and Lemma 1.4, we have

$$\begin{aligned}
I_2^* &\le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma} \bigg\{\sum_{j=1}^{n} E\big|X_j^{(t,2)}\big|^v + \bigg(\sum_{j=1}^{n} E\big(X_j^{(t,2)}\big)^2\bigg)^{v/2}\bigg\}\,dt \\
&\le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma} \sum_{j=1}^{n}\big\{E|X_j|^v I\big(|X_j| \le 2t^{1/\gamma}\big) + t^{v/\gamma} P\big(X_j > t^{1/\gamma}\big)\big\}\,dt \\
&\quad + C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma} \bigg\{\sum_{j=1}^{n}\Big(EX_j^2 I\big(|X_j| \le 2t^{1/\gamma}\big) + t^{2/\gamma} P\big(X_j > t^{1/\gamma}\big)\Big)\bigg\}^{v/2}\,dt \\
&\le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-1} \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma} E|X|^v I\big(|X| \le 2t^{1/\gamma}\big)\,dt
 + C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-1} \int_{n^{\gamma\alpha}}^{\infty} P\big(|X| > t^{1/\gamma}\big)\,dt \\
&\quad + C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2+v/2} \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma} \big\{E|X|^2 I\big(|X| \le 2t^{1/\gamma}\big)\big\}^{v/2}\,dt \\
&\quad + C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2+v/2} \int_{n^{\gamma\alpha}}^{\infty} \big(P\big(|X| > t^{1/\gamma}\big)\big)^{v/2}\,dt \\
&=: I_{21} + I_{22} + I_{23} + I_{24}.
\end{aligned}$$

We get by the mean-value theorem and a standard computation

$$\begin{aligned}
I_{22} &= C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-1} \sum_{j=n}^{\infty} \int_{j^{\gamma\alpha}}^{(j+1)^{\gamma\alpha}} P\big(|X| > t^{1/\gamma}\big)\,dt \\
&\le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-1} \sum_{j=n}^{\infty} j^{\gamma\alpha-1} P\big(|X| > j^{\alpha}\big) \\
&= C \sum_{j=1}^{\infty} j^{\gamma\alpha-1} P\big(|X| > j^{\alpha}\big) \sum_{n=1}^{j} n^{\alpha(p-\gamma)-1} \\
&\le \begin{cases} C \sum_{j=1}^{\infty} j^{\alpha p-1} P\big(|X| > j^{\alpha}\big), & \gamma < p, \\ C \sum_{j=1}^{\infty} j^{\alpha p-1} \log j\, P\big(|X| > j^{\alpha}\big), & \gamma = p, \\ C \sum_{j=1}^{\infty} j^{\gamma\alpha-1} P\big(|X| > j^{\alpha}\big), & \gamma > p \end{cases} \\
&\le \begin{cases} C E|X|^p, & \gamma < p, \\ C E|X|^p \log(1+|X|), & \gamma = p, \\ C E|X|^{\gamma}, & \gamma > p \end{cases} \quad < \infty.
\end{aligned}$$

When $\max\{p, \gamma\} < 2$, let $v = 2$. We have $I_{24} = I_{22} < \infty$ and

$$\begin{aligned}
I_{21} = I_{23} &= C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-1} \sum_{j=n}^{\infty} \int_{j^{\gamma\alpha}}^{(j+1)^{\gamma\alpha}} t^{-2/\gamma} E|X|^2 I\big(|X| \le 2t^{1/\gamma}\big)\,dt \\
&\le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-1} \sum_{j=n}^{\infty} j^{(\gamma-2)\alpha-1} E|X|^2 I\big(|X| \le 2(j+1)^{\alpha}\big) \\
&= C \sum_{j=1}^{\infty} j^{(\gamma-2)\alpha-1} E|X|^2 I\big(|X| \le 2(j+1)^{\alpha}\big) \sum_{n=1}^{j} n^{\alpha(p-\gamma)-1} \\
&\le \begin{cases} C \sum_{j=1}^{\infty} j^{\alpha(p-2)-1} E|X|^2 I\big(|X| \le 2(j+1)^{\alpha}\big), & \gamma < p, \\ C \sum_{j=1}^{\infty} j^{\alpha(p-2)-1} \log j\, E|X|^2 I\big(|X| \le 2(j+1)^{\alpha}\big), & \gamma = p, \\ C \sum_{j=1}^{\infty} j^{\alpha(\gamma-2)-1} E|X|^2 I\big(|X| \le 2(j+1)^{\alpha}\big), & \gamma > p \end{cases} \\
&\le \begin{cases} C E|X|^p, & \gamma < p, \\ C E|X|^p \log(1+|X|), & \gamma = p, \\ C E|X|^{\gamma}, & \gamma > p \end{cases} \quad < \infty. \qquad (2.9)
\end{aligned}$$

When $\max\{p, \gamma\} \ge 2$, let $v > \max\{\gamma, (\alpha p - 1)/(\alpha - 1/2)\}$. Note that $E|X|^2 < \infty$, so

$$I_{23} \le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2+v/2} \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma}\,dt = C \sum_{n=1}^{\infty} n^{\alpha p - 2 - (\alpha - 1/2)v} < \infty,$$

and by Markov's inequality, we have

$$I_{24} \le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2+v/2} \int_{n^{\gamma\alpha}}^{\infty} t^{-v/\gamma}\,dt < \infty.$$

The proof of $I_{21} < \infty$ is similar to that of (2.9), so it is omitted. Therefore $I_2^* < \infty$, so $I_2 < \infty$.

For $I_3$, we get

$$\begin{aligned}
I_3 &\le \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} P\bigg(\bigcup_{j=1}^{n}\big(X_j^{(t,3)} \ne 0\big)\bigg)\,dt \\
&\le \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_{n^{\gamma\alpha}}^{\infty} \sum_{j=1}^{n} P\big(X_j > t^{1/\gamma} + t^{q/\gamma}\big)\,dt \\
&\le C \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-1} \int_{n^{\gamma\alpha}}^{\infty} P\big(|X| > t^{1/\gamma}\big)\,dt = C I_{22} < \infty.
\end{aligned}$$

By proofs similar to those of $I_2 < \infty$ and $I_3 < \infty$, we have $I_4 < \infty$ and $I_5 < \infty$, respectively. Therefore, (2.2) holds.

(2.2) ⇒ (2.3). Note that $|S_n^{(k)}| = |S_n - X_k| \le |S_n| + |X_k| = |S_n| + |S_k - S_{k-1}| \le |S_n| + |S_k| + |S_{k-1}| \le 3\max_{1\le j\le n}|S_j|$, $\forall 1 \le k \le n$; hence

$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} E\Big\{\max_{1\le k\le n}\big|S_n^{(k)}\big| - \varepsilon n^{\alpha}\Big\}_+^{\gamma}
&= \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_0^{\infty} P\Big(\max_{1\le k\le n}\big|S_n^{(k)}\big| - \varepsilon n^{\alpha} > t^{1/\gamma}\Big)\,dt \\
&\le \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_0^{\infty} P\Big(\max_{1\le j\le n}|S_j| > \varepsilon n^{\alpha}/3 + t^{1/\gamma}/3\Big)\,dt \\
&= 3^{\gamma} \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_0^{\infty} P\Big(\max_{1\le j\le n}|S_j| > \varepsilon n^{\alpha}/3 + t^{1/\gamma}\Big)\,dt \\
&= 3^{\gamma} \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} E\Big\{\max_{1\le j\le n}|S_j| - \varepsilon n^{\alpha}/3\Big\}_+^{\gamma} < \infty. \qquad (2.10)
\end{aligned}$$

Equation (2.3) holds.

(2.3) ⇒ (2.4). Since $2^{-1}|S_n| \le (n-1)n^{-1}|S_n| = |n^{-1}\sum_{k=1}^{n} S_n^{(k)}| \le \max_{1\le k\le n}|S_n^{(k)}|$, $\forall n \ge 2$, and $|X_k| = |S_n - S_n^{(k)}| \le |S_n| + |S_n^{(k)}| \le 3\max_{1\le k\le n}|S_n^{(k)}|$, we have (2.4) by an argument similar to (2.10).

(2.2) ⇒ (2.5). The proof is similar to that of (1.6) ⇒ (1.7) of Chen and Wang [19], so it is omitted.

(2.5) ⇒ (2.6). Since $k^{-\alpha}|X_k| = k^{-\alpha}|S_k - S_{k-1}| \le k^{-\alpha}(|S_k| + |S_{k-1}|) \le \sup_{j\ge k} j^{-\alpha}(|S_j| + |S_{j-1}|) \le 2\sup_{j\ge k-1} j^{-\alpha}|S_j|$, $\forall k \ge 2$, we have (2.6) by an argument similar to (2.10). □

Theorem 2.2 Let $\gamma > 0$, $\alpha > 1/2$, $p > 0$, $\alpha p > 1$. Let $\{X_n, n \ge 1\}$ be a sequence of NOD random variables and $X$ be a random variable, possibly defined on a different space. Moreover, assume that $EX_n = 0$ for all $n \ge 1$ when $\alpha \le 1$. If there exist constants $D_1 > 0$ and $D_2 > 0$ such that

$$\frac{D_1}{n} \sum_{i=n}^{2n-1} P(|X_i| > x) \le P(|X| > x) \le \frac{D_2}{n} \sum_{i=n}^{2n-1} P(|X_i| > x), \quad \forall x \ge 0, n \ge 1,$$

then (2.1)-(2.6) are equivalent.

Proof By Theorem 2.1, in order to prove Theorem 2.2 it is enough to show that (2.4) ⇒ (2.1) and (2.6) ⇒ (2.1). We only prove (2.4) ⇒ (2.1); the proof of (2.6) ⇒ (2.1) is similar and omitted. Note that

$$\begin{aligned}
\infty &> \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} E\Big\{\max_{1\le k\le n}|X_k| - \varepsilon n^{\alpha}\Big\}_+^{\gamma} \\
&= \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_0^{\infty} P\Big(\max_{1\le k\le n}|X_k| - \varepsilon n^{\alpha} > t^{1/\gamma}\Big)\,dt \\
&\ge \sum_{n=1}^{\infty} n^{\alpha(p-\gamma)-2} \int_0^{\varepsilon^{\gamma} n^{\gamma\alpha}} P\Big(\max_{1\le k\le n}|X_k| > \varepsilon n^{\alpha} + t^{1/\gamma}\Big)\,dt \\
&\ge \varepsilon^{\gamma} \sum_{n=1}^{\infty} n^{\alpha p-2} P\Big(\max_{1\le k\le n}|X_k| > 2\varepsilon n^{\alpha}\Big).
\end{aligned}$$

By Theorem 2.2 of Qiu et al. [11], the proof of (2.4) ⇒ (2.1) is completed. □

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Author details

1 School of Mathematics and Statistics, Guangdong University of Finance and Economics, Guangzhou, 510320, P.R. China. 2Department of Statistics, Jinan University, Guangzhou, 510630, P.R. China. 3 Department of Mathematics, Jinan University, Guangzhou, 510630, P.R. China.

Acknowledgements

The authors would like to thank the referees and the editors for the helpful comments and suggestions. The work of Qiu was supported by the National Natural Science Foundation of China (Grant No. 61300204), the work of Liu was supported by the National Natural Science Foundation of China (Grant No. 71471075), the work of Chen was supported by the National Natural Science Foundation of China (Grant No. 11271161).

Received: 3 September 2014 Accepted: 26 January 2015 Published online: 19 February 2015

References

1. Hsu, P, Robbins, H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33, 25-31 (1947)

2. Baek, J, Park, ST: Convergence of weighted sums for arrays of negatively dependent random variables and its applications. J. Stat. Plan. Inference 140, 2461-2469 (2010)

3. Bai, ZD, Su, C: The complete convergence for partial sums of i.i.d. random variables. Sci. China Ser. A 5, 399-412 (1985)

4. Baum, LE, Katz, M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 120, 108-123 (1965)

5. Chen, P, Hu, TC, Volodin, A: Limiting behavior of moving average processes under negative association. Teor. Imovir. Mat. Stat. 77, 154-166 (2007)

6. Chen, P, Wang, D: Convergence rates for probabilities of moderate deviations for moving average processes. Acta Math. Sin. Engl. Ser. 24, 611-622 (2008)

7. Gut, A: Complete convergence for arrays. Period. Math. Hung. 25, 51-75 (1992)

8. Kuczmaszewska, A: On complete convergence in Marcinkiewicz-Zygmund type SLLN for negatively associated random variables. Acta Math. Hung. 128(1-2), 116-130 (2010)

9. Liang, H, Wang, L: Convergence rates in the law of large numbers for B-valued random elements. Acta Math. Sci. 21B, 229-236 (2001)

10. Peligrad, M, Gut, A: Almost-sure results for a class of dependent random variables. J. Theor. Probab. 12, 87-104 (1999)

11. Qiu, D, Wu, Q, Chen, P: Complete convergence for negatively orthant dependent random variables. J. Inequal. Appl. 2014, 145 (2014)

12. Sung, SH: Complete convergence for weighted sums of random variables. Stat. Probab. Lett. 77, 303-311 (2007)

13. Zhang, L, Wang, J: A note on complete convergence of pairwise NQD random sequences. Appl. Math. J. Chin. Univ. Ser. A 19, 203-208 (2004) (in Chinese)

14. Chow, YS: On the rate of moment convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 16, 177-201 (1988)

15. Wang, D, Su, C: Moment complete convergence for B-valued i.i.d. random elements sequence. Acta Math. Appl. Sin. 27, 440-448 (2004) (in Chinese)

16. Chen, P: Complete moment convergence for sequences of independent random elements in Banach spaces. Stoch. Anal. Appl. 24, 999-1010 (2006)

17. Wang, D, Zhao, W: Moment complete convergence for sums of a sequence of NA random variables. Appl. Math. J. Chin. Univ. Ser. A 21, 445-450 (2006) (in Chinese)

18. Li, Y, Zhang, L: Complete moment convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 70, 191-197 (2004)

19. Chen, P, Wang, D: Complete moment convergence for sequence of identically distributed φ-mixing random variables. Acta Math. Sin. Engl. Ser. 26, 679-690 (2010)

20. Qiu, D, Chen, P: Complete moment convergence for weighted sums of arrays of rowwise NA random variables. J. Math. Res. Appl. 32, 723-734 (2012)

21. Sung, SH: Complete qth moment convergence for arrays of random variables. J. Inequal. Appl. 2013, 24 (2013). doi:10.1186/1029-242X-2013-24

22. Joag-Dev, K, Proschan, F: Negative association of random variables with applications. Ann. Stat. 11, 286-295 (1983)

23. Bozorgnia, A, Patterson, RF, Taylor, RL: Limit theorems for dependent random variables. In: Proc. of the First World Congress of Nonlinear Analysts '92, pp. 1639-1650. de Gruyter, Berlin (1996)

24. Ko, MH, Han, KH, Kim, TS: Strong laws of large numbers for weighted sums of negatively dependent random variables. J. Korean Math. Soc. 43,1325-1338 (2006)

25. Ko, MH, Kim, TS: Almost sure convergence for weighted sums of negatively dependent random variables. J. Korean Math. Soc. 42, 949-957 (2005)

26. Taylor, RL, Patterson, R, Bozorgnia, A: A strong law of large numbers for arrays of rowwise negatively dependent random variables. Stoch. Anal. Appl. 20,643-656 (2002)

27. Asadian, N, Fakoor, V, Bozorgnia, A: Rosenthal's type inequalities for negatively orthant dependent random variables. J. Iran. Stat. Soc. 5(1-2), 69-75 (2006)

28. Stout, WF: Almost Sure Convergence. Academic Press, New York (1974)
