Journal of Inequalities and Applications (a SpringerOpen journal)

RESEARCH Open Access

Complete convergence for negatively orthant dependent random variables

Dehua Qiu1*, Qunying Wu2 and Pingyan Chen3

Correspondence: qiudhua@sina.com. 1 School of Mathematics and Statistics, Guangdong University of Finance and Economics, Guangzhou, 510320, P.R. China. Full list of author information is available at the end of the article.

Abstract

In this paper, necessary and sufficient conditions for the complete convergence of the maximum partial sums of negatively orthant dependent (NOD) random variables are obtained. The results extend and improve those of Kuczmaszewska (Acta Math. Hung. 128(1-2):116-130, 2010) for negatively associated (NA) random variables.

MSC: 60F15; 60G50

Keywords: NOD; complete convergence

1 Introduction

The concept of complete convergence for a sequence of random variables was introduced by Hsu and Robbins [1] as follows. A sequence $\{U_n, n \ge 1\}$ of random variables converges completely to the constant $\theta$ if

$$\sum_{n=1}^{\infty} P\big(|U_n - \theta| > \varepsilon\big) < \infty \quad \text{for all } \varepsilon > 0.$$
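In particular, complete convergence is stronger than almost sure convergence: by the first Borel-Cantelli lemma,

$$\sum_{n=1}^{\infty} P\big(|U_n - \theta| > \varepsilon\big) < \infty \ \ \forall \varepsilon > 0 \quad \Longrightarrow \quad P\big(|U_n - \theta| > \varepsilon \text{ i.o.}\big) = 0 \ \ \forall \varepsilon > 0 \quad \Longrightarrow \quad U_n \to \theta \text{ a.s.}$$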

Moreover, they proved that the sequence of arithmetic means of independent identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. This result has been generalized and extended in several directions by many authors; one can refer to [2-16], and so forth. Kuczmaszewska [8] proved the following result.


Theorem A Let $\{X_n, n \ge 1\}$ be a sequence of negatively associated (NA) random variables and let $X$ be a random variable, possibly defined on a different space, satisfying the condition

$$P\big(|X_n| > x\big) = D\,P\big(|X| > x\big)$$

for all $x > 0$, all $n \ge 1$ and some positive constant $D$. Let $\alpha p > 1$ and $\alpha > 1/2$. Moreover, assume additionally that $EX_n = 0$ for all $n \ge 1$ if $p > 1$. Then the following statements are equivalent:

(i) $E|X|^p < \infty$;

(ii) $\sum_{n=1}^{\infty} n^{\alpha p-2} P\big(\max_{1\le j\le n} \big|\sum_{i=1}^{j} X_i\big| > \varepsilon n^{\alpha}\big) < \infty$, $\forall \varepsilon > 0$.

The aim of this paper is to extend and improve Theorem A to negatively orthant dependent (NOD) random variables. The main tool in the proof of Theorem A is the Rosenthal-type maximal inequality for NA sequences (cf. [17]), but no such maximal inequality has been established for NOD sequences. Hence the truncation method used here is different, and the proofs of our main results are more complicated and difficult.

The concepts of negative association (NA) and negative orthant dependence (NOD) were introduced by Joag-Dev and Proschan [18] in the following way.

Definition 1.1 A finite family of random variables $\{X_i, 1 \le i \le n\}$ is said to be negatively associated (NA) if for every pair of disjoint nonempty subsets $A_1, A_2$ of $\{1, 2, \ldots, n\}$,

$$\mathrm{Cov}\big(f_1(X_i, i \in A_1), f_2(X_j, j \in A_2)\big) \le 0,$$

where $f_1$ and $f_2$ are coordinatewise nondecreasing functions such that the covariance exists. An infinite sequence $\{X_n, n \ge 1\}$ is NA if every finite subfamily is NA.

Definition 1.2 A finite family of random variables $\{X_i, 1 \le i \le n\}$ is said to be

(a) negatively upper orthant dependent (NUOD) if

$$P\big(X_i > x_i,\ i = 1, 2, \ldots, n\big) \le \prod_{i=1}^{n} P\big(X_i > x_i\big)$$

for all $x_1, x_2, \ldots, x_n \in \mathbb{R}$;

(b) negatively lower orthant dependent (NLOD) if

$$P\big(X_i \le x_i,\ i = 1, 2, \ldots, n\big) \le \prod_{i=1}^{n} P\big(X_i \le x_i\big)$$

for all $x_1, x_2, \ldots, x_n \in \mathbb{R}$;

(c) negatively orthant dependent (NOD) if they are both NUOD and NLOD.

A sequence of random variables $\{X_n, n \ge 1\}$ is said to be NOD if for each $n$, $X_1, X_2, \ldots, X_n$ are NOD.
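As a concrete numerical illustration of Definition 1.2 (not needed for the proofs below), one can check the NUOD inequality by simulation for a bivariate normal vector with negative correlation; such vectors are NA and hence NOD. The following minimal sketch assumes Python with the numpy library; the correlation rho and the thresholds x1, x2 are arbitrary illustrative choices.

# Monte Carlo check of the NUOD inequality of Definition 1.2(a):
#   P(X1 > x1, X2 > x2) <= P(X1 > x1) * P(X2 > x2)
# for a bivariate normal vector with negative correlation (NA, hence NOD).
import numpy as np

rng = np.random.default_rng(seed=0)
rho = -0.5                                  # negative correlation
cov = np.array([[1.0, rho], [rho, 1.0]])
sample = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)

x1, x2 = 0.5, 1.0                           # arbitrary thresholds
joint = np.mean((sample[:, 0] > x1) & (sample[:, 1] > x2))
product = np.mean(sample[:, 0] > x1) * np.mean(sample[:, 1] > x2)
print(joint, product, joint <= product)     # up to Monte Carlo error, joint <= product

Up to Monte Carlo error, the empirical joint tail probability does not exceed the product of the marginal tails, as (a) requires; the NLOD inequality of (b) can be checked analogously using the events $\{X_1 \le x_1, X_2 \le x_2\}$.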

Obviously, every sequence of independent random variables is NOD. Joag-Dev and Proschan [18] pointed out that NA implies NOD, while neither NUOD nor NLOD alone implies NA. They also gave an example which is NOD but not NA, which shows that NOD is strictly wider than NA. For more details on NOD random variables, one can refer to [3, 6, 11, 14, 19-21], and so forth.

In order to prove our main results, we need the following lemmas.

Lemma 1.1 (Bozorgnia et al. [19]) Let $X_1, X_2, \ldots, X_n$ be NOD random variables.

(i) If $f_1, f_2, \ldots, f_n$ are Borel functions all of which are monotone increasing (or all monotone decreasing), then $f_1(X_1), f_2(X_2), \ldots, f_n(X_n)$ are NOD random variables.

(ii) $E\prod_{i=1}^{n} X_i^{+} \le \prod_{i=1}^{n} E X_i^{+}$, $\forall n \ge 2$.

Lemma 1.2 (Asadian et al. [22]) For any $q \ge 2$, there is a positive constant $C(q)$ depending only on $q$ such that if $\{X_n, n \ge 1\}$ is a sequence of NOD random variables with $EX_n = 0$ for every $n \ge 1$, then for all $n \ge 1$,

$$E\Big|\sum_{i=1}^{n} X_i\Big|^q \le C(q)\Big\{\sum_{i=1}^{n} E|X_i|^q + \Big(\sum_{i=1}^{n} E X_i^2\Big)^{q/2}\Big\}.$$

Lemma 1.3 For any $q \ge 2$, there is a positive constant $C(q)$ depending only on $q$ such that if $\{X_n, n \ge 1\}$ is a sequence of NOD random variables with $EX_n = 0$ for every $n \ge 1$, then for all $n \ge 1$,

$$E\max_{1\le j\le n}\Big|\sum_{i=1}^{j} X_i\Big|^q \le C(q)\big(\log(4n)\big)^q\Big\{\sum_{i=1}^{n} E|X_i|^q + \Big(\sum_{i=1}^{n} E X_i^2\Big)^{q/2}\Big\}.$$

Proof By Lemma 1.2, the proof is similar to that of Theorem 2.3.1 in Stout [23], so it is omitted here. □
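In the proofs below, Lemma 1.3 is typically combined with the Markov inequality in the following form: for $v \ge 2$ and NOD random variables $Z_1, \ldots, Z_n$ with $EZ_i = 0$,

$$P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} Z_i\Big| > t\Big) \le C(v)\, t^{-v}\big(\log(4n)\big)^v\Big\{\sum_{i=1}^{n} E|Z_i|^v + \Big(\sum_{i=1}^{n} E Z_i^2\Big)^{v/2}\Big\}, \quad t > 0,$$

which is how the series in (2.2) will be estimated after truncation.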

Lemma 1.4 (Kuczmaszewska [8]) Let $\beta, \gamma$ be positive constants. Suppose that $\{X_n, n \ge 1\}$ is a sequence of random variables and $X$ is a random variable such that, for some constant $D > 0$,

$$\sum_{i=1}^{n} P\big(|X_i| > x\big) \le D n P\big(|X| > x\big), \quad \forall x > 0, \forall n \ge 1. \qquad (1.1)$$

Then:

(i) if $E|X|^{\beta} < \infty$, then $\frac{1}{n}\sum_{j=1}^{n} E|X_j|^{\beta} \le C E|X|^{\beta}$;

(ii) $\frac{1}{n}\sum_{j=1}^{n} E|X_j|^{\beta} I\big(|X_j| \le \gamma\big) \le C\big\{E|X|^{\beta} I\big(|X| \le \gamma\big) + \gamma^{\beta} P\big(|X| > \gamma\big)\big\}$;

(iii) $\frac{1}{n}\sum_{j=1}^{n} E|X_j|^{\beta} I\big(|X_j| > \gamma\big) \le C E|X|^{\beta} I\big(|X| > \gamma\big)$.

Recall that a function $h(x)$ is said to be slowly varying at infinity if it is real valued, positive, and measurable on $[0, \infty)$, and if for each $\lambda > 0$

$$\lim_{x\to\infty}\frac{h(\lambda x)}{h(x)} = 1.$$

We refer to Seneta [24] for other equivalent definitions and for a detailed and comprehensive study of properties of slowly varying functions. We frequently use the following properties of slowly varying functions (cf. Seneta [24]).

Lemma 1.5 If $h(x)$ is a function slowly varying at infinity, then for any $s > 0$

$$C_1 n^{-s} h(n) \le \sum_{i=n}^{\infty} i^{-1-s} h(i) \le C_2 n^{-s} h(n), \qquad C_3 n^{s} h(n) \le \sum_{i=1}^{n} i^{-1+s} h(i) \le C_4 n^{s} h(n),$$

where $C_1, C_2, C_3, C_4 > 0$ depend only on $s$.
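For example, $h(x) = \log(e + x)$ is slowly varying at infinity, and Lemma 1.5 then gives, for any $s > 0$,

$$C_1 n^{-s}\log(e+n) \le \sum_{i=n}^{\infty} i^{-1-s}\log(e+i) \le C_2 n^{-s}\log(e+n), \qquad C_3 n^{s}\log(e+n) \le \sum_{i=1}^{n} i^{-1+s}\log(e+i) \le C_4 n^{s}\log(e+n);$$

thus the weight $h(n)$ in the results below covers, in particular, logarithmic factors.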

Throughout this paper, $C$ will represent a positive constant whose value may change from one place to another.

2 Main results and proofs

Theorem 2.1 Let $\alpha > 1/2$, $p > 0$, $\alpha p > 1$ and let $h(x)$ be a slowly varying function at infinity. Let $\{X_n, n \ge 1\}$ be a sequence of NOD random variables and let $X$ be a random variable, possibly defined on a different space, satisfying condition (1.1). Moreover, assume that $EX_n = 0$ for all $n \ge 1$ if $\alpha \le 1$. If

$$E|X|^p h\big(|X|^{1/\alpha}\big) < \infty, \qquad (2.1)$$

then the following statements hold:

(i) $\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\big(\max_{1\le j\le n} |S_j| > \varepsilon n^{\alpha}\big) < \infty$, $\forall \varepsilon > 0$; (2.2)

(ii) $\sum_{n=2}^{\infty} n^{\alpha p-2} h(n) P\big(\max_{1\le k\le n} |S_n^{(k)}| > \varepsilon n^{\alpha}\big) < \infty$, $\forall \varepsilon > 0$; (2.3)

(iii) $\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\big(\max_{1\le i\le n} |X_i| > \varepsilon n^{\alpha}\big) < \infty$, $\forall \varepsilon > 0$; (2.4)

(iv) $\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\big(\sup_{j\ge n} j^{-\alpha} |S_j| > \varepsilon\big) < \infty$, $\forall \varepsilon > 0$; (2.5)

(v) $\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\big(\sup_{j\ge n} j^{-\alpha} |X_j| > \varepsilon\big) < \infty$, $\forall \varepsilon > 0$. (2.6)

Here $S_n = \sum_{i=1}^{n} X_i$ and $S_n^{(k)} = S_n - X_k$, $k = 1, 2, \ldots, n$.
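For orientation, consider the special case $h \equiv 1$, $\alpha = 1$, $p = 2$ (so that $\alpha > 1/2$, $\alpha p > 1$, and $EX_n = 0$ is assumed since $\alpha \le 1$). Then condition (2.1) reduces to $EX^2 < \infty$ and conclusion (2.2) reads

$$\sum_{n=1}^{\infty} P\Big(\max_{1\le j\le n} |S_j| > \varepsilon n\Big) < \infty, \quad \forall \varepsilon > 0,$$

a Hsu-Robbins type statement for maxima of NOD partial sums.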

Proof First, we prove (2.2). Choose $q$ such that $1/(\alpha p) < q < 1$. For all $n \ge 1$ and $1 \le i \le n$, let

$$X_i^{(n,1)} = -n^{\alpha q} I\big(X_i < -n^{\alpha q}\big) + X_i I\big(|X_i| \le n^{\alpha q}\big) + n^{\alpha q} I\big(X_i > n^{\alpha q}\big),$$
$$X_i^{(n,2)} = \big(X_i - n^{\alpha q}\big) I\big(X_i > n^{\alpha q}\big), \qquad X_i^{(n,3)} = -\big(X_i + n^{\alpha q}\big) I\big(X_i < -n^{\alpha q}\big).$$

Note that $X_i = X_i^{(n,1)} + X_i^{(n,2)} - X_i^{(n,3)}$, so

$$\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Big(\max_{1\le j\le n} |S_j| > \varepsilon n^{\alpha}\Big)$$
$$\le \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} X_i^{(n,1)}\Big| > \varepsilon n^{\alpha}/3\Big) + \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Big(\sum_{i=1}^{n} X_i^{(n,2)} > \varepsilon n^{\alpha}/3\Big)$$
$$\quad + \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Big(\sum_{i=1}^{n} X_i^{(n,3)} > \varepsilon n^{\alpha}/3\Big) \stackrel{\mathrm{def}}{=} I_1 + I_2 + I_3. \qquad (2.7)$$

In order to prove (2.2), it suffices to show that $I_l < \infty$ for $l = 1, 2, 3$. Obviously, for $0 < \eta < p$, condition (2.1) implies $E|X|^{p-\eta} < \infty$. Therefore, we can choose $0 < \eta < p$ such that $\alpha(p-\eta) > \alpha(p-\eta)q > 1$ and $p - \eta - 1 > 0$ if $p > 1$. In order to prove $I_1 < \infty$, we first prove that

$$\lim_{n\to\infty} n^{-\alpha}\max_{1\le j\le n}\Big|\sum_{i=1}^{j} E X_i^{(n,1)}\Big| = 0. \qquad (2.8)$$

First assume $\alpha \le 1$. Since $\alpha p > 1$, we have $p > 1$. By $EX_i = 0$, $i \ge 1$, and Lemma 1.4, we have

$$n^{-\alpha}\max_{1\le j\le n}\Big|\sum_{i=1}^{j} E X_i^{(n,1)}\Big| \le n^{-\alpha}\max_{1\le j\le n}\sum_{i=1}^{j}\big\{E|X_i| I\big(|X_i| > n^{\alpha q}\big) + n^{\alpha q} P\big(|X_i| > n^{\alpha q}\big)\big\}$$
$$\le 2 n^{-\alpha}\sum_{i=1}^{n} E|X_i| I\big(|X_i| > n^{\alpha q}\big) \le C n^{1-\alpha} E|X| I\big(|X| > n^{\alpha q}\big) \le C n^{-\{\alpha(p-\eta)q-1\}-\alpha(1-q)} E|X|^{p-\eta} \to 0, \quad n \to \infty.$$

When $\alpha > 1$, $p > 1$,

$$n^{-\alpha}\max_{1\le j\le n}\Big|\sum_{i=1}^{j} E X_i^{(n,1)}\Big| \le n^{-\alpha}\max_{1\le j\le n}\sum_{i=1}^{j}\big\{E|X_i| I\big(|X_i| \le n^{\alpha q}\big) + n^{\alpha q} P\big(|X_i| > n^{\alpha q}\big)\big\}$$
$$\le 2 n^{-\alpha}\sum_{i=1}^{n} E|X_i| \le C n^{1-\alpha} E|X| \to 0, \quad n \to \infty.$$

When $\alpha > 1$, $p \le 1$,

$$n^{-\alpha}\max_{1\le j\le n}\Big|\sum_{i=1}^{j} E X_i^{(n,1)}\Big| \le n^{-\alpha}\max_{1\le j\le n}\sum_{i=1}^{j}\big\{E|X_i| I\big(|X_i| \le n^{\alpha q}\big) + n^{\alpha q} P\big(|X_i| > n^{\alpha q}\big)\big\}$$
$$\le n^{-\alpha}\sum_{i=1}^{n}\big\{E|X_i| I\big(|X_i| \le n^{\alpha q}\big) + n^{\alpha q} P\big(|X_i| > n^{\alpha q}\big)\big\} \le C n^{1-\alpha}\big(n^{\alpha(1-p+\eta)q} E|X|^{p-\eta}\big)$$
$$\le C n^{-\{\alpha(p-\eta)q-1\}-\alpha(1-q)} E|X|^{p-\eta} \to 0, \quad n \to \infty.$$

Therefore, (2.8) holds. So, in order to prove $I_1 < \infty$, it is enough to prove that

$$I_1^* := \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}\big(X_i^{(n,1)} - EX_i^{(n,1)}\big)\Big| > \varepsilon n^{\alpha}/6\Big) < \infty. \qquad (2.9)$$

By Lemma 1.1, for every $n \ge 1$, $\{X_i^{(n,1)} - EX_i^{(n,1)}, 1 \le i \le n\}$ is a sequence of NOD random variables. When $0 < p \le 2$, by $\alpha(p-\eta) > 1$ and $0 < q < 1$, we have $\alpha - \frac{1}{2} - \alpha\big(1 - \frac{p-\eta}{2}\big)q \ge \alpha - \frac{1}{2} - \alpha\big(1 - \frac{p-\eta}{2}\big) = \frac{\alpha(p-\eta)-1}{2} > 0$. Taking $v$ such that

$$v > \max\Big\{2,\ p,\ \frac{\alpha p-1}{\alpha - 1/2},\ \frac{\alpha p-1}{\alpha - \frac{1}{2} - \alpha\big(1 - \frac{p-\eta}{2}\big)q},\ \frac{p-(p-\eta)q}{1-q}\Big\},$$

we get by the Markov inequality, the $C_r$ inequality, the Hölder inequality, and Lemma 1.3,

$$I_1^* \le C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-2} h(n)\big(\log(4n)\big)^v\sum_{i=1}^{n} E\big|X_i^{(n,1)}\big|^v + C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-2} h(n)\big(\log(4n)\big)^v\Big(\sum_{i=1}^{n} E\big(X_i^{(n,1)}\big)^2\Big)^{v/2} \stackrel{\mathrm{def}}{=} I_{11} + I_{12}.$$

By the $C_r$ inequality, Lemma 1.4, and Lemma 1.5, we have

$$I_{11} \le C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-2} h(n)\big(\log(4n)\big)^v\sum_{i=1}^{n}\big\{E|X_i|^v I\big(|X_i| \le n^{\alpha q}\big) + n^{\alpha q v} P\big(|X_i| > n^{\alpha q}\big)\big\}$$
$$\le C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-1} h(n)\big(\log(4n)\big)^v\big\{E|X|^v I\big(|X| \le n^{\alpha q}\big) + n^{\alpha q v} P\big(|X| > n^{\alpha q}\big)\big\}$$
$$\le C\sum_{n=1}^{\infty} n^{\alpha\{p-(1-q)v-q(p-\eta)\}-1} h(n)\big(\log(4n)\big)^v E|X|^{p-\eta} < \infty.$$

By the $C_r$ inequality and Lemma 1.4,

$$I_{12} \le C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-2} h(n)\big(\log(4n)\big)^v\Big\{\sum_{i=1}^{n}\big(E X_i^2 I\big(|X_i| \le n^{\alpha q}\big) + n^{2\alpha q} P\big(|X_i| > n^{\alpha q}\big)\big)\Big\}^{v/2}$$
$$\le C\sum_{n=1}^{\infty} n^{\alpha p-2-(\alpha-1/2)v} h(n)\big(\log(4n)\big)^v\big\{E X^2 I\big(|X| \le n^{\alpha q}\big) + n^{2\alpha q} P\big(|X| > n^{\alpha q}\big)\big\}^{v/2}.$$

When $p > 2$,

$$I_{12} \le C\sum_{n=1}^{\infty} n^{\alpha p-2-(\alpha-1/2)v} h(n)\big(\log(4n)\big)^v\big(E X^2\big)^{v/2} < \infty.$$

When $0 < p \le 2$,

$$I_{12} \le C\sum_{n=1}^{\infty} n^{\alpha p-2-(\alpha-1/2)v} h(n)\big(\log(4n)\big)^v\big(E|X|^{p-\eta}\big)^{v/2} n^{\alpha q\{2-(p-\eta)\}v/2}$$
$$\le C\sum_{n=1}^{\infty} n^{\alpha p-2-\{\alpha-\frac{1}{2}-\alpha(1-\frac{p-\eta}{2})q\}v} h(n)\big(\log(4n)\big)^v < \infty.$$

Therefore, (2.9) holds, and hence $I_1 < \infty$. For $I_2$, define $Y_i^{(n,2)} = \big(X_i - n^{\alpha q}\big) I\big(n^{\alpha q} < X_i \le n^{\alpha} + n^{\alpha q}\big) + n^{\alpha} I\big(X_i > n^{\alpha} + n^{\alpha q}\big)$, $1 \le i \le n$, $n \ge 1$. Since $X_i^{(n,2)} = Y_i^{(n,2)} + \big(X_i - n^{\alpha q} - n^{\alpha}\big) I\big(X_i > n^{\alpha} + n^{\alpha q}\big)$, we have

$$I_2 \le \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Big(\sum_{i=1}^{n} Y_i^{(n,2)} > \varepsilon n^{\alpha}/6\Big) + \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Big(\sum_{i=1}^{n}\big(X_i - n^{\alpha q} - n^{\alpha}\big) I\big(X_i > n^{\alpha} + n^{\alpha q}\big) > \varepsilon n^{\alpha}/6\Big)$$
$$\stackrel{\mathrm{def}}{=} I_{21} + I_{22}. \qquad (2.10)$$

By Lemma 1.5, (2.1), and a standard computation, we have

$$I_{22} \le \sum_{n=1}^{\infty} n^{\alpha p-2} h(n)\sum_{i=1}^{n} P\big(X_i > n^{\alpha} + n^{\alpha q}\big) \le \sum_{n=1}^{\infty} n^{\alpha p-2} h(n)\sum_{i=1}^{n} P\big(|X_i| > n^{\alpha}\big)$$
$$\le C\sum_{n=1}^{\infty} n^{\alpha p-1} h(n) P\big(|X| > n^{\alpha}\big) \le C + C E|X|^p h\big(|X|^{1/\alpha}\big) < \infty.$$

Now we prove $I_{21} < \infty$. By (2.1) and Lemma 1.4, we have

$$0 \le n^{-\alpha}\sum_{i=1}^{n} E Y_i^{(n,2)} \le \begin{cases} n^{-\alpha}\sum_{i=1}^{n} E X_i I\big(X_i > n^{\alpha q}\big), & \text{if } p > 1,\\[4pt] n^{-\alpha}\sum_{i=1}^{n}\big\{E|X_i| I\big(|X_i| \le 2n^{\alpha}\big) + n^{\alpha} P\big(|X_i| > 2n^{\alpha q}\big)\big\}, & \text{if } 0 < p \le 1 \end{cases}$$
$$\le \begin{cases} C n^{-\{\alpha(p-\eta)q-1\}-\alpha(1-q)} E|X|^{p-\eta}, & \text{if } p > 1,\\[4pt] C n^{1-\alpha(p-\eta)q} E|X|^{p-\eta}, & \text{if } 0 < p \le 1 \end{cases} \;\to 0, \quad n \to \infty. \qquad (2.11)$$

Therefore, in order to prove $I_{21} < \infty$, it is enough to prove that

$$I_{21}^* := \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Big(\sum_{i=1}^{n}\big(Y_i^{(n,2)} - EY_i^{(n,2)}\big) > \varepsilon n^{\alpha}/12\Big) < \infty. \qquad (2.12)$$

Taking $v$ such that $v > \max\big\{2,\ p,\ \frac{\alpha p-1}{\alpha-1/2},\ \frac{2(\alpha p-1)}{\alpha(p-\eta)-1}\big\}$, we get by Lemma 1.1, the Markov inequality, the $C_r$ inequality, the Hölder inequality, and Lemma 1.2,

$$I_{21}^* \le C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-2} h(n) E\Big|\sum_{i=1}^{n}\big(Y_i^{(n,2)} - EY_i^{(n,2)}\big)\Big|^v$$
$$\le C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-2} h(n)\sum_{i=1}^{n} E\big|Y_i^{(n,2)}\big|^v + C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-2} h(n)\Big(\sum_{i=1}^{n} E\big(Y_i^{(n,2)}\big)^2\Big)^{v/2} \stackrel{\mathrm{def}}{=} I_{211}^* + I_{212}^*.$$

By the $C_r$ inequality, Lemma 1.4, Lemma 1.5, (2.1), and a standard computation, we have

$$I_{211}^* = C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-2} h(n)\sum_{i=1}^{n} E\big|Y_i^{(n,2)}\big|^v$$
$$\le C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-2} h(n)\sum_{i=1}^{n}\big\{E X_i^v I\big(n^{\alpha q} < X_i \le n^{\alpha q} + n^{\alpha}\big) + n^{\alpha v} P\big(X_i > n^{\alpha q} + n^{\alpha}\big)\big\}$$
$$\le C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-2} h(n)\sum_{i=1}^{n}\big\{E|X_i|^v I\big(|X_i| \le 2n^{\alpha}\big) + n^{\alpha v} P\big(|X_i| > n^{\alpha}\big)\big\}$$
$$\le C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-1} h(n)\big\{E|X|^v I\big(|X| \le 2n^{\alpha}\big) + n^{\alpha v} P\big(|X| > n^{\alpha}\big)\big\}$$
$$\le C + C E|X|^p h\big(|X|^{1/\alpha}\big) < \infty.$$

Moreover,

$$I_{212}^* \le C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v-2} h(n)\Big\{\sum_{i=1}^{n}\big(E X_i^2 I\big(n^{\alpha q} < X_i \le n^{\alpha q} + n^{\alpha}\big) + n^{2\alpha} P\big(X_i > n^{\alpha q} + n^{\alpha}\big)\big)\Big\}^{v/2}$$
$$\le C\sum_{n=1}^{\infty} n^{\alpha p-\alpha v+v/2-2} h(n)\big\{E X^2 I\big(|X| \le 2n^{\alpha}\big) + n^{2\alpha} P\big(|X| > n^{\alpha}\big)\big\}^{v/2}$$
$$\le \begin{cases} C\sum_{n=1}^{\infty} n^{\alpha p-(\alpha-1/2)v-2} h(n)\big(E X^2\big)^{v/2}, & \text{if } p > 2,\\[4pt] C\sum_{n=1}^{\infty} n^{\alpha p-2-\{\alpha(p-\eta)-1\}v/2} h(n)\big(E|X|^{p-\eta}\big)^{v/2}, & \text{if } p \le 2 \end{cases}$$
$$\le \begin{cases} C\sum_{n=1}^{\infty} n^{\alpha p-(\alpha-1/2)v-2} h(n), & \text{if } p > 2,\\[4pt] C\sum_{n=1}^{\infty} n^{\alpha p-2-\{\alpha(p-\eta)-1\}v/2} h(n), & \text{if } p \le 2 \end{cases} \; < \infty.$$

Therefore, (2.12) holds. By (2.10)-(2.12) we get $I_2 < \infty$. In the same way as for $I_2$ we can obtain $I_3 < \infty$. Thus, (2.2) holds.

(2.2) $\Rightarrow$ (2.3). Note that $|S_n^{(k)}| = |S_n - X_k| \le |S_n| + |X_k| = |S_n| + |S_k - S_{k-1}| \le |S_n| + |S_k| + |S_{k-1}| \le 3\max_{1\le j\le n}|S_j|$; hence $\big(\max_{1\le k\le n}|S_n^{(k)}| > \varepsilon n^{\alpha}\big) \subset \big(\max_{1\le j\le n}|S_j| > \varepsilon n^{\alpha}/3\big)$, and (2.3) follows from (2.2).

(2.3) $\Rightarrow$ (2.4). Since $\frac{1}{2}|S_n| \le \frac{n-1}{n}|S_n| = \big|\frac{1}{n}\sum_{k=1}^{n} S_n^{(k)}\big| \le \max_{1\le k\le n}|S_n^{(k)}|$ and $|X_k| = |S_n - S_n^{(k)}| \le |S_n| + |S_n^{(k)}|$ for all $n \ge 2$, we have $\big(\max_{1\le k\le n}|X_k| > \varepsilon n^{\alpha}\big) \subset \big(|S_n| > \varepsilon n^{\alpha}/2\big) \cup \big(\max_{1\le k\le n}|S_n^{(k)}| > \varepsilon n^{\alpha}/2\big) \subset \big(\max_{1\le k\le n}|S_n^{(k)}| > \varepsilon n^{\alpha}/4\big)$, $\forall n \ge 2$; hence (2.4) follows from (2.3).

(2.2) $\Rightarrow$ (2.5). By Lemma 1.5 and (2.2), we have

$$\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Big(\sup_{j\ge n} j^{-\alpha}|S_j| > \varepsilon\Big) = \sum_{i=1}^{\infty}\sum_{2^{i-1}\le n<2^{i}} n^{\alpha p-2} h(n) P\Big(\sup_{j\ge n} j^{-\alpha}|S_j| > \varepsilon\Big)$$
$$\le C\sum_{i=1}^{\infty} 2^{i(\alpha p-1)} h\big(2^{i}\big) P\Big(\sup_{j\ge 2^{i-1}} j^{-\alpha}|S_j| > \varepsilon\Big) \le C\sum_{i=1}^{\infty} 2^{i(\alpha p-1)} h\big(2^{i}\big)\sum_{k=i}^{\infty} P\Big(\max_{2^{k-1}\le j<2^{k}} |S_j| > \varepsilon 2^{\alpha(k-1)}\Big)$$
$$= C\sum_{k=1}^{\infty} P\Big(\max_{2^{k-1}\le j<2^{k}} |S_j| > \varepsilon 2^{\alpha(k-1)}\Big)\sum_{i=1}^{k} 2^{i(\alpha p-1)} h\big(2^{i}\big) \le C\sum_{k=1}^{\infty} 2^{k(\alpha p-1)} h\big(2^{k}\big) P\Big(\max_{1\le j\le 2^{k}} |S_j| > \varepsilon 2^{\alpha(k-1)}\Big) < \infty.$$

(2.5) $\Rightarrow$ (2.6). The proof is similar to that of (2.2) $\Rightarrow$ (2.4), so it is omitted. □

Theorem 2.2 Let $\alpha > 1/2$, $p > 0$, $\alpha p > 1$ and let $h(x)$ be a slowly varying function at infinity. Let $\{X_n, n \ge 1\}$ be a sequence of NOD random variables and let $X$ be a random variable, possibly defined on a different space. Moreover, assume that $EX_n = 0$ for all $n \ge 1$ if $\alpha \le 1$. If there exist constants $D_1 > 0$ and $D_2 > 0$ such that

$$\frac{D_1}{n}\sum_{i=n}^{2n-1} P\big(|X_i| > x\big) \le P\big(|X| > x\big) \le \frac{D_2}{n}\sum_{i=n}^{2n-1} P\big(|X_i| > x\big), \quad \forall x > 0, n \ge 1,$$

then (2.1)-(2.6) are equivalent.
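As a sanity check of the two-sided domination condition, note that if the $X_n$ are identically distributed with the same distribution as $X$, then

$$\frac{1}{n}\sum_{i=n}^{2n-1} P\big(|X_i| > x\big) = P\big(|X| > x\big), \quad \forall x > 0, n \ge 1,$$

so the condition holds with $D_1 = D_2 = 1$, and Theorem 2.2 yields the equivalence of the moment condition (2.1) and the Baum-Katz type series (2.2)-(2.6) for identically distributed NOD sequences.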

Proof From the proof of Theorem 2.1, in order to prove Theorem 2.2, it is enough to show that (2.4) $\Rightarrow$ (2.6) and (2.6) $\Rightarrow$ (2.1). The proof of (2.4) $\Rightarrow$ (2.6) is similar to that of (2.2) $\Rightarrow$ (2.5). Now, we prove (2.6) $\Rightarrow$ (2.1). Firstly, we prove that

$$\lim_{n\to\infty} P\Big(\sup_{j\ge n} j^{-\alpha}|X_j| > \varepsilon\Big) = 0, \quad \forall \varepsilon > 0. \qquad (2.13)$$

Otherwise, there exist $\varepsilon_0 > 0$, $\delta > 0$, and a sequence of positive integers $\{n_k, k \ge 1\}$ with $n_k \uparrow \infty$ such that $P\big(\sup_{j\ge n_k} j^{-\alpha}|X_j| > \varepsilon_0\big) \ge \delta$, $\forall k \ge 1$. Without loss of generality, we may assume that $n_{k+1} > 2n_k$, $\forall k \ge 1$. Therefore, we have

$$P\Big(\sup_{j\ge 2n_k} j^{-\alpha}|X_j| > \varepsilon_0\Big) \ge \delta, \quad \forall k \ge 1.$$

By $\alpha p > 1$ we have

$$\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Big(\sup_{j\ge n} j^{-\alpha}|X_j| > \varepsilon_0\Big) \ge \sum_{k=1}^{\infty}\sum_{n=n_k+1}^{2n_k} n^{\alpha p-2} h(n) P\Big(\sup_{j\ge n} j^{-\alpha}|X_j| > \varepsilon_0\Big)$$
$$\ge C\sum_{k=1}^{\infty} n_k^{\alpha p-1} h(n_k) P\Big(\sup_{j\ge 2n_k} j^{-\alpha}|X_j| > \varepsilon_0\Big) = \infty,$$

which is in contradiction with (2.6); thus, (2.13) holds. By Lemma 1.1, we get

$$P\Big(\sup_{j\ge n} j^{-\alpha}|X_j| > \varepsilon\Big) \ge P\Big(\max_{n\le j\le 2n-1} j^{-\alpha}|X_j| > \varepsilon\Big) \ge P\Big(\max_{n\le j\le 2n-1} X_j > (2n)^{\alpha}\varepsilon\Big)$$
$$= 1 - P\Big(\max_{n\le j\le 2n-1} X_j \le (2n)^{\alpha}\varepsilon\Big) \ge 1 - \prod_{j=n}^{2n-1} P\big(X_j \le (2n)^{\alpha}\varepsilon\big) = 1 - \prod_{j=n}^{2n-1}\big(1 - P\big(X_j > (2n)^{\alpha}\varepsilon\big)\big)$$
$$\ge 1 - \exp\Big(-\sum_{j=n}^{2n-1} P\big(X_j > (2n)^{\alpha}\varepsilon\big)\Big).$$

By (2.13), we have $\lim_{n\to\infty}\sum_{j=n}^{2n-1} P\big(X_j > (2n)^{\alpha}\varepsilon\big) = 0$, $\forall \varepsilon > 0$. Therefore, when $n$ is large enough, we have

$$P\Big(\max_{n\le j\le 2n-1} j^{-\alpha}|X_j| > \varepsilon\Big) \ge 1 - \Big\{1 - \sum_{j=n}^{2n-1} P\big(X_j > (2n)^{\alpha}\varepsilon\big) + \frac{1}{2}\Big(\sum_{j=n}^{2n-1} P\big(X_j > (2n)^{\alpha}\varepsilon\big)\Big)^2\Big\}$$
$$\ge C\sum_{j=n}^{2n-1} P\big(X_j > (2n)^{\alpha}\varepsilon\big), \quad \forall \varepsilon > 0.$$

In a similar way, when $n$ is large enough,

$$P\Big(\max_{n\le j\le 2n-1} j^{-\alpha}|X_j| > \varepsilon\Big) \ge C\sum_{j=n}^{2n-1} P\big(-X_j > (2n)^{\alpha}\varepsilon\big), \quad \forall \varepsilon > 0.$$

Thus, when $n$ is large enough, we have

$$P\Big(\max_{n\le j\le 2n-1} j^{-\alpha}|X_j| > \varepsilon\Big) \ge C\sum_{j=n}^{2n-1} P\big(|X_j| > (2n)^{\alpha}\varepsilon\big) \ge C n P\big(|X| > (2n)^{\alpha}\varepsilon\big), \quad \forall \varepsilon > 0. \qquad (2.14)$$

Taking $\varepsilon = 2^{-\alpha}$, by (2.6), (2.14), Lemma 1.5, and a standard computation, we have

$$\infty > \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Big(\sup_{j\ge n} j^{-\alpha}|X_j| > 2^{-\alpha}\Big) \ge \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Big(\max_{n\le j\le 2n-1} j^{-\alpha}|X_j| > 2^{-\alpha}\Big)$$
$$\ge C\sum_{n=1}^{\infty} n^{\alpha p-1} h(n) P\big(|X| > n^{\alpha}\big) \ge C E|X|^p h\big(|X|^{1/\alpha}\big).$$

Thus, (2.1) holds. □

In the following, let $\{\tau_n, n \ge 1\}$ be a sequence of non-negative, integer-valued random variables and $\tau$ a positive random variable. All random variables are defined on the same probability space.

Theorem 2.3 Let $\alpha > 1/2$, $p > 0$, $\alpha p > 1$ and let $h(x) > 0$ be a slowly varying function as $x \to +\infty$. Let $\{X_n, n \ge 1\}$ be a sequence of NOD random variables and let $X$ be a random variable, possibly defined on a different space, satisfying conditions (1.1) and (2.1). Moreover, assume that $EX_n = 0$ for all $n \ge 1$ if $\alpha \le 1$. If there exists $\lambda > 0$ such that $\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P(\tau_n/n < \lambda) < \infty$, then

$$\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\big(|S_{\tau_n}| > \varepsilon \tau_n^{\alpha}\big) < \infty, \quad \forall \varepsilon > 0. \qquad (2.15)$$

Proof Note that

$$\big(|S_{\tau_n}| > \varepsilon \tau_n^{\alpha}\big) \subset \big(\tau_n/n < \lambda\big) \cup \big(|S_{\tau_n}| > \varepsilon \tau_n^{\alpha},\ \tau_n \ge \lambda n\big) \subset \big(\tau_n/n < \lambda\big) \cup \Big(\sup_{j\ge \lambda n} j^{-\alpha}|S_j| > \varepsilon\Big).$$

Thus, by (2.5) of Theorem 2.1, we have (2.15). □

Theorem 2.4 Let $\alpha > 1/2$, $p > 0$, $\alpha p > 1$ and let $h(x)$ be a slowly varying function at infinity. Let $\{X_n, n \ge 1\}$ be a sequence of NOD random variables and let $X$ be a random variable, possibly defined on a different space, satisfying conditions (1.1) and (2.1). Moreover, assume that $EX_n = 0$ for all $n \ge 1$ if $\alpha \le 1$. If there exists $\theta > 0$ such that $\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\big(\big|\frac{\tau_n}{n} - \tau\big| > \theta\big) < \infty$ with $P(\tau \le B) = 1$ for some $B > 0$, then

$$\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\big(|S_{\tau_n}| > \varepsilon n^{\alpha}\big) < \infty, \quad \forall \varepsilon > 0. \qquad (2.16)$$

Proof Note that

$$\big(|S_{\tau_n}| > \varepsilon n^{\alpha}\big) \subset \Big(\Big|\frac{\tau_n}{n} - \tau\Big| > \theta\Big) \cup \Big(|S_{\tau_n}| > \varepsilon n^{\alpha},\ \Big|\frac{\tau_n}{n} - \tau\Big| \le \theta\Big)$$
$$\subset \Big(\Big|\frac{\tau_n}{n} - \tau\Big| > \theta\Big) \cup \big(|S_{\tau_n}| > \varepsilon n^{\alpha},\ \tau_n \le (\tau + \theta)n\big) \subset \Big(\Big|\frac{\tau_n}{n} - \tau\Big| > \theta\Big) \cup \big(|S_{\tau_n}| > \varepsilon n^{\alpha},\ \tau_n \le (B + \theta)n\big)$$
$$\subset \Big(\Big|\frac{\tau_n}{n} - \tau\Big| > \theta\Big) \cup \Big(\max_{1\le j\le (B+\theta)n} |S_j| > \varepsilon n^{\alpha}\Big).$$

Thus, by (2.2) of Theorem 2.1, we have (2.16). □

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Author details

1 School of Mathematics and Statistics, Guangdong University of Finance and Economics, Guangzhou, 510320, P.R. China. 2 College of Science, Guilin University of Technology, Guilin, 541004, P.R. China. 3 Department of Mathematics, Jinan University, Guangzhou, 510630, P.R. China.

Acknowledgements

The authors would like to thank the referees and the editors for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (Grant No. 11271161).

Received: 14 November 2013 Accepted: 26 March 2014 Published: 09 Apr 2014

References

1. Hsu, P, Robbins, H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33, 25-31 (1947)

2. Baum, LE, Katz, M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 120, 108-123 (1965)

3. Baek, J, Park, ST: Convergence of weighted sums for arrays of negatively dependent random variables and its applications. J. Stat. Plan. Inference 140, 2461-2469 (2010)

4. Bai, ZD, Su, C: The complete convergence for partial sums of i.i.d. random variables. Sci. China Ser. A 5, 399-412 (1985)

5. Chen, P, Hu, TC, Liu, X, Volodin, A: On complete convergence for arrays of rowwise negatively associated random variables. Theory Probab. Appl. 52, 323-328 (2007)

6. Gan, S, Chen, P: Strong convergence rate of weighted sums for negatively dependent sequences. Acta. Math. Sci. Ser. A 28, 283-290 (2008) (in Chinese)

7. Gut, A: Complete convergence for arrays. Period. Math. Hung. 25, 51-75 (1992)

8. Kuczmaszewska, A: On complete convergence in Marcinkiewicz-Zygmund type SLLN for negatively associated random variables. Acta Math. Hung. 128(1-2), 116-130 (2010)

9. Liang, HY, Wang, L: Convergence rates in the law of large numbers for B-valued random elements. Acta Math. Sci. Ser. B 21, 229-236 (2001)

10. Peligrad, M, Gut, A: Almost-sure results for a class of dependent random variables. J. Theor. Probab. 12, 87-104 (1999)

11. Qiu, DH, Chang, KC, Antonini, RG, Volodin, A: On the strong rates of convergence for arrays of rowwise negatively dependent random variables. Stoch. Anal. Appl. 29, 375-385 (2011)

12. Sung, SH: Complete convergence for weighted sums of random variables. Stat. Probab. Lett. 77, 303-311 (2007)


13. Sung, SH: A note on the complete convergence for arrays of rowwise independent random elements. Stat. Probab. Lett. 78, 1283-1289 (2008)

14. Taylor, RL, Patterson, R, Bozorgnia, A: A strong law of large numbers for arrays of rowwise negatively dependent random variables. Stoch. Anal. Appl. 20, 643-656 (2002)

15. Wang, XM: Complete convergence for sums of NA sequence. Acta Math. Appl. Sin. 22,407-412 (1999)

16. Zhang, LX, Wang, JF: A note on complete convergence of pairwise NQD random sequences. Appl. Math. J. Chin. Univ. Ser. A 19, 203-208 (2004)

17. Shao, QM: A comparison theorem on moment inequalities between negatively associated and independent random variables. J. Theor. Probab. 13, 343-356 (2000)

18. Joag-Dev, K, Proschan, F: Negative association of random variables with applications. Ann. Stat. 11, 286-295 (1983)

19. Bozorgnia, A, Patterson, RF, Taylor, RL: Limit theorems for dependent random variables. In: Proc. of the First World Congress of Nonlinear Analysts '92, vol. II, pp. 1639-1650. de Gruyter, Berlin (1996)

20. Ko, MH, Han, KH, Kim, TS: Strong laws of large numbers for weighted sums of negatively dependent random variables. J. Korean Math. Soc. 43, 1325-1338 (2006)

21. Ko, MH, Kim, TS: Almost sure convergence for weighted sums of negatively dependent random variables. J. Korean Math. Soc. 42, 949-957 (2005)

22. Asadian, N, Fakoor, V, Bozorgnia, A: Rosenthal's type inequalities for negatively orthant dependent random variables. J. Iran. Stat. Soc. 5(1-2), 69-75 (2006)

23. Stout, WF: Almost Sure Convergence. Academic Press, New York (1974)

24. Seneta, E: Regularly Varying Functions. Lecture Notes in Math., vol. 508. Springer, Berlin (1976)

10.1186/1029-242X-2014-145

Cite this article as: Qiu et al.: Complete convergence for negatively orthant dependent random variables. Journal of Inequalities and Applications 2014, 2014:145
