# Complete convergence for weighted sums of arrays of rowwise ρ̃-mixing random variables

Journal of Inequalities and Applications (a SpringerOpen journal)

RESEARCH · Open Access

Aiting Shen, Ranchao Wu, Xinghui Wang and Yan Shen*

*Correspondence: shenyan@ahu.edu.cn. School of Mathematical Science, Anhui University, Hefei, 230601, P.R. China

Abstract

Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise ρ̃-mixing random variables. Some sufficient conditions for complete convergence for weighted sums of arrays of rowwise ρ̃-mixing random variables are presented without the assumption of identical distribution. As applications, the Baum and Katz type result and the Marcinkiewicz-Zygmund type strong law of large numbers for sequences of ρ̃-mixing random variables are obtained. MSC: 60F15

Keywords: ρ̃-mixing sequence; arrays of rowwise ρ̃-mixing random variables; complete convergence; complete consistency


1 Introduction

The concept of complete convergence was introduced by Hsu and Robbins [1] as follows. A sequence of random variables $\{U_n, n \ge 1\}$ is said to converge completely to a constant $C$ if $\sum_{n=1}^{\infty} P(|U_n - C| > \varepsilon) < \infty$ for all $\varepsilon > 0$. In view of the Borel-Cantelli lemma, this implies that $U_n \to C$ almost surely (a.s.). The converse is true if the $\{U_n, n \ge 1\}$ are independent. Hsu and Robbins [1] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Erdős [2] proved the converse. The result of Hsu-Robbins-Erdős is a fundamental theorem in probability theory and has been generalized and extended in several directions by many authors. See, for example, Spitzer [3], Baum and Katz [4], Gut [5], Zarei [6], and so forth. The main purpose of the paper is to provide complete convergence for weighted sums of arrays of rowwise ρ̃-mixing random variables.
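As a purely numerical aside (ours, not from the paper), the Hsu-Robbins phenomenon can be sketched by Monte Carlo: for sample means of i.i.d. mean-zero variables with finite variance, the estimated tail probabilities $P(|U_n| > \varepsilon)$ decay fast enough that their partial sums stabilize. All parameter choices below are illustrative.

```python
import numpy as np

# Monte Carlo sketch of complete convergence for i.i.d. sample means:
# estimate P(|U_n| > eps) for U_n the mean of n standard normals and
# check that the partial sums of these probabilities stay bounded.
rng = np.random.default_rng(0)
eps = 0.5
n_trials = 2000

probs = []
for n in range(1, 201):
    # 2000 independent sample means of n standard normal variables
    means = rng.standard_normal((n_trials, n)).mean(axis=1)
    probs.append(np.mean(np.abs(means) > eps))

partial_sum = float(np.sum(probs))
print(partial_sum)  # the series is effectively summable: late terms vanish
```

The late terms are already numerically zero, which is the summability that distinguishes complete convergence from mere almost sure convergence.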

Firstly, let us recall the definitions of sequences of ρ̃-mixing random variables and arrays of rowwise ρ̃-mixing random variables.

Let $\{X_n, n \ge 1\}$ be a sequence of random variables defined on a fixed probability space $(\Omega, \mathcal{F}, P)$. Write $\mathcal{F}_S = \sigma(X_i, i \in S \subset \mathbb{N})$. Given two $\sigma$-algebras $\mathcal{B}, \mathcal{R}$ in $\mathcal{F}$, let

$$\rho(\mathcal{B}, \mathcal{R}) = \sup_{X \in L_2(\mathcal{B}),\, Y \in L_2(\mathcal{R})} \frac{|EXY - EX\,EY|}{(\operatorname{Var} X \cdot \operatorname{Var} Y)^{1/2}}.$$

Define the ρ̃-mixing coefficients by

$$\tilde{\rho}(k) = \sup\bigl\{\rho(\mathcal{F}_S, \mathcal{F}_T) : \text{finite subsets } S, T \subset \mathbb{N} \text{ such that } \operatorname{dist}(S, T) \ge k\bigr\}, \quad k \ge 0.$$

Obviously, $0 \le \tilde{\rho}(k+1) \le \tilde{\rho}(k) \le 1$ and $\tilde{\rho}(0) = 1$.
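For jointly Gaussian blocks the supremum defining $\rho(\mathcal{F}_S, \mathcal{F}_T)$ is attained by linear functions, so it reduces to the largest canonical correlation between the two blocks (a classical fact due to Kolmogorov and Rozanov). The following sketch, our own illustration rather than anything in the paper, computes this quantity for an AR(1) covariance and shows it decaying in the gap $k$.

```python
import numpy as np

# Largest canonical correlation between two index blocks of a Gaussian
# vector with AR(1) covariance Sigma[i, j] = phi**|i - j|.  For Gaussian
# data this equals the mixing correlation rho(F_S, F_T) in the text.
def max_canonical_corr(cov, S, T):
    c11 = cov[np.ix_(S, S)]
    c22 = cov[np.ix_(T, T)]
    c12 = cov[np.ix_(S, T)]
    # whiten both blocks via Cholesky factors, then take the top singular value
    a = np.linalg.cholesky(c11)
    b = np.linalg.cholesky(c22)
    m = np.linalg.solve(a, c12) @ np.linalg.inv(b).T
    return np.linalg.svd(m, compute_uv=False)[0]

phi, n = 0.5, 30
idx = np.arange(n)
cov = phi ** np.abs(idx[:, None] - idx[None, :])

S = list(range(0, 5))  # block {0,...,4}
rhos = [max_canonical_corr(cov, S, list(range(5 + k, 10 + k)))
        for k in range(0, 20)]  # second block starts gap k later
print(np.round(rhos, 4))  # decreases in k (here like phi**(k+1))
```

For this Markov example the best pair of linear functionals is simply the two nearest endpoints of the blocks, so the value is exactly $\phi^{k+1}$.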

© 2013 Shen et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Definition 1.1 A sequence $\{X_n, n \ge 1\}$ of random variables is said to be a ρ̃-mixing sequence if there exists $k \in \mathbb{N}$ such that $\tilde{\rho}(k) < 1$.

An array $\{X_{ni}, i \ge 1, n \ge 1\}$ of random variables is called an array of rowwise ρ̃-mixing random variables if, for every $n \ge 1$, $\{X_{ni}, i \ge 1\}$ is a ρ̃-mixing sequence.

ρ̃-mixing random variables were introduced by Bradley [7], and many applications have been found. ρ̃-mixing is similar to ρ-mixing, but the two notions are quite different. Many authors have studied this concept and provided interesting results and applications. See, for example, Bryc and Smoleński [8], Peligrad [9, 10], Peligrad and Gut [11], Utev and Peligrad [12], Gan [13], Cai [14], Zhu [15], Wu and Jiang [16, 17], An and Yuan [18], Kuczmaszewska [19], Sung [20], Wang et al. [21-23], and so on.

Recently, An and Yuan [18] obtained a complete convergence result for weighted sums of identically distributed ρ̃-mixing random variables as follows.

Theorem 1.1 Let $p \ge 1/\alpha$ and $1/2 < \alpha \le 2$. Let $\{X_n, n \ge 1\}$ be a sequence of identically distributed ρ̃-mixing random variables with $EX_1 = 0$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying

$$\sum_{i=1}^{n} |a_{ni}|^{p} = O(n^{\delta}) \quad \text{for some } 0 < \delta < 1, \tag{1.1}$$

$$A_{nk} = \#\bigl\{1 \le i \le n : |a_{ni}|^{p} > (k+1)^{-1}\bigr\} \ge n e^{-1/k}, \quad \forall k \ge 1, n \ge 1. \tag{1.2}$$

Then the following statements are equivalent:

(i) $E|X_1|^{p} < \infty$;

(ii) $\sum_{n=1}^{\infty} n^{p\alpha-2} P\bigl(\max_{1 \le j \le n} \bigl|\sum_{i=1}^{j} a_{ni} X_i\bigr| > \varepsilon n^{\alpha}\bigr) < \infty$ for all $\varepsilon > 0$.

Sung [20] pointed out that an array $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ satisfying both (1.1) and (1.2) does not exist, and obtained a new complete convergence result for weighted sums of identically distributed ρ̃-mixing random variables as follows.

Theorem 1.2 Let $p \ge 1/\alpha$ and $1/2 < \alpha \le 2$. Let $\{X_n, n \ge 1\}$ be a sequence of identically distributed ρ̃-mixing random variables with $EX_1 = 0$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying

$$\sum_{i=1}^{n} |a_{ni}|^{q} = O(n) \quad \text{for some } q > p. \tag{1.3}$$

If $E|X_1|^{p} < \infty$, then

$$\sum_{n=1}^{\infty} n^{p\alpha-2} P\Bigl(\max_{1 \le j \le n} \Bigl|\sum_{i=1}^{j} a_{ni} X_i\Bigr| > \varepsilon n^{\alpha}\Bigr) < \infty, \quad \forall \varepsilon > 0. \tag{1.4}$$

Conversely, if (1.4) holds for any array $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ satisfying (1.3), then $E|X_1|^{p} < \infty$.

For more details about complete convergence results for weighted sums of dependent sequences, one can refer to Wu [24, 25], Wang et al. [26, 27], and so forth. The main purpose of this paper is to further study the complete convergence for weighted sums of arrays of rowwise ρ̃-mixing random variables under mild conditions. The main idea is inspired by Baek et al. [28] and Wu [25]. As applications, the results of Baum and Katz [4] are extended from the i.i.d. case to the setting of arrays of rowwise ρ̃-mixing random variables, and the Marcinkiewicz-Zygmund type strong law of large numbers for sequences of ρ̃-mixing random variables is provided. We give some sufficient conditions for complete convergence for weighted sums of arrays of rowwise ρ̃-mixing random variables without the assumption of identical distribution. The techniques used in the paper are the Rosenthal type inequality and the truncation method.

Throughout this paper, the symbol $C$ denotes a positive constant which is not necessarily the same in each appearance, and $\lfloor x \rfloor$ denotes the integer part of $x$. For a finite set $A$, the symbol $\#A$ denotes the number of elements of $A$, and $I(A)$ is the indicator function of the set $A$. Denote $\log x = \ln \max(x, e)$, $X^{+} = \max(X, 0)$ and $X^{-} = \max(-X, 0)$.
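These notational conventions are easy to mirror in code; the helper names below are our own (hypothetical), chosen only to make the conventions concrete.

```python
import math

# The paper's conventions: log x means ln(max(x, e)), so log x >= 1
# everywhere and never blows up near 0; x+ and x- are the positive and
# negative parts, with x = x+ - x-.
def log_(x: float) -> float:
    return math.log(max(x, math.e))

def pos(x: float) -> float:
    return max(x, 0.0)

def neg(x: float) -> float:
    return max(-x, 0.0)

print(log_(0.1), log_(10.0))  # 1.0 near zero, ln(10) for large arguments
print(pos(-3.0), neg(-3.0))   # 0.0 and 3.0, and indeed -3 = 0.0 - 3.0
```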

The paper is organized as follows. Two important lemmas are provided in Section 2. The main results and their proofs are presented in Section 3. We obtain complete convergence for arrays of rowwise ρ̃-mixing random variables which are stochastically dominated by a random variable $X$.

2 Preliminaries

Firstly, we give the definition of stochastic domination.

Definition 2.1 A sequence $\{X_n, n \ge 1\}$ of random variables is said to be stochastically dominated by a random variable $X$ if there exists a positive constant $C$ such that

$$P(|X_n| > x) \le C P(|X| > x) \tag{2.1}$$

for all $x \ge 0$ and $n \ge 1$.

An array $\{X_{ni}, i \ge 1, n \ge 1\}$ of rowwise random variables is said to be stochastically dominated by a random variable $X$ if there exists a positive constant $C$ such that

$$P(|X_{ni}| > x) \le C P(|X| > x) \tag{2.2}$$

for all $x \ge 0$, $i \ge 1$ and $n \ge 1$.

The proofs of the main results of the paper are based on the following two lemmas. One is the classic Rosenthal type inequality for ρ̃-mixing random variables obtained by Utev and Peligrad [12]; the other is the fundamental inequalities for stochastic domination.

Lemma 2.1 (cf. Utev and Peligrad [12, Theorem 2.1]) Let $\{X_n, n \ge 1\}$ be a sequence of ρ̃-mixing random variables with $EX_i = 0$ and $E|X_i|^{p} < \infty$ for some $p \ge 2$ and every $i \ge 1$. Then there exists a positive constant $C$ depending only on $p$ such that

$$E\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} X_i\Bigr|^{p}\Bigr) \le C\Biggl(\sum_{i=1}^{n} E|X_i|^{p} + \Bigl(\sum_{i=1}^{n} EX_i^{2}\Bigr)^{p/2}\Biggr).$$
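The stochastic domination condition of Definition 2.1 can be checked directly in simple parametric cases. The following toy example is ours, not from the paper: an exponential array with the faster-decaying tail is dominated by the slower one with $C = 1$.

```python
import numpy as np

# Toy check of Definition 2.1: if every X_ni ~ Exponential(rate=2) and
# X ~ Exponential(rate=1), then P(|X_ni| > x) = exp(-2x) <= exp(-x)
# = P(|X| > x) for all x >= 0, i.e. stochastic domination with C = 1.
xs = np.linspace(0.0, 10.0, 101)
tail_xni = np.exp(-2.0 * xs)  # P(|X_ni| > x)
tail_x = np.exp(-1.0 * xs)    # P(|X| > x)
C = 1.0
dominated = bool(np.all(tail_xni <= C * tail_x))
print(dominated)  # True
```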

Lemma 2.2 Let {Xni, i > 1, n > 1} be an array of rowwise random variables which is stochastically dominated by a random variable X. For any a > 0 and b > 0, the following two statements hold:

$$E|X_{ni}|^{a} I(|X_{ni}| \le b) \le C_1\bigl[E|X|^{a} I(|X| \le b) + b^{a} P(|X| > b)\bigr], \tag{2.3}$$

$$E|X_{ni}|^{a} I(|X_{ni}| > b) \le C_2 E|X|^{a} I(|X| > b), \tag{2.4}$$

where C1 and C2 are positive constants.

Proof The proof of this lemma can be found in Wu [29] or Wang et al. [30]. □
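Inequality (2.3) can be verified in closed form for the toy dominated pair used above (our own example: $X_{ni} \sim \mathrm{Exp}(2)$ dominated by $X \sim \mathrm{Exp}(1)$ with $C = 1$). For $a = 2$, $b = 1$ the inequality already holds with the constant taken as $1$; the value of $C_1$ here is illustrative, not the constant produced by the lemma's proof.

```python
import math

# E[X^2 I(X <= b)] for X ~ Exponential(rate), computed by integrating
# x^2 * rate * exp(-rate * x) from 0 to b (integration by parts).
def trunc_second_moment(rate: float, b: float) -> float:
    return (2.0 / rate**2
            - math.exp(-rate * b) * (b**2 + 2.0 * b / rate + 2.0 / rate**2))

b = 1.0
lhs = trunc_second_moment(2.0, b)  # E|X_ni|^2 I(|X_ni| <= b), rate 2
# E|X|^2 I(|X| <= b) + b^2 P(|X| > b) for X ~ Exp(1), i.e. (2.3) with C1 = 1
rhs = trunc_second_moment(1.0, b) + b**2 * math.exp(-b)
print(lhs, rhs)  # lhs sits comfortably below rhs
```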

3 Main results and their applications

In this section, we provide complete convergence for weighted sums of arrays of rowwise ρ̃-mixing random variables. As applications, the Baum and Katz type result and the Marcinkiewicz-Zygmund type strong law of large numbers for sequences of ρ̃-mixing random variables are obtained. Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise ρ̃-mixing random variables. We assume that the mixing coefficients $\tilde{\rho}(\cdot)$ in each row are the same.

Theorem 3.1 Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise ρ̃-mixing random variables which is stochastically dominated by a random variable $X$, with $EX_{ni} = 0$ for all $i \ge 1$, $n \ge 1$, and let $\beta$ be a real number. Let $\{a_{ni}, i \ge 1, n \ge 1\}$ be an array of constants such that

$$\sup_{i \ge 1} |a_{ni}| = O(n^{-r}) \quad \text{for some } r > 0, \tag{3.1}$$

$$\sum_{i=1}^{\infty} |a_{ni}| = O(n^{\alpha}) \quad \text{for some } \alpha \in [0, r). \tag{3.2}$$

Assume further that $1 + \alpha + \beta > 0$ and that there exists some $\delta$ with $\alpha/r + 1 < \delta \le 2$; set $s = \max\bigl(1 + \frac{1+\alpha+\beta}{r}, \delta\bigr)$. If $E|X|^{s} < \infty$, then for all $\varepsilon > 0$,

$$\sum_{n=1}^{\infty} n^{\beta} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} a_{ni} X_{ni}\Bigr| > \varepsilon\Bigr) < \infty. \tag{3.3}$$

If $1 + \alpha + \beta < 0$ and $E|X| < \infty$, then (3.3) still holds for all $\varepsilon > 0$.

Proof Without loss of generality, we assume that $a_{ni} \ge 0$ for all $i \ge 1$ and $n \ge 1$ (otherwise we use $a_{ni}^{+}$ and $a_{ni}^{-}$ instead of $a_{ni}$, respectively, and note that $a_{ni} = a_{ni}^{+} - a_{ni}^{-}$). From conditions (3.1) and (3.2), we may assume that

$$\sup_{i \ge 1} a_{ni} \le n^{-r}, \qquad \sum_{i=1}^{\infty} a_{ni} \le n^{\alpha}, \quad n \ge 1. \tag{3.4}$$

If $1 + \alpha + \beta < 0$ and $E|X| < \infty$, then by Markov's inequality and the stochastic domination, the result follows easily from

$$\sum_{n=1}^{\infty} n^{\beta} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} a_{ni} X_{ni}\Bigr| > \varepsilon\Bigr) \le C \sum_{n=1}^{\infty} n^{\beta}\, E \max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} a_{ni} X_{ni}\Bigr| \le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} E|a_{ni} X_{ni}| \le C \sum_{n=1}^{\infty} n^{\alpha+\beta} E|X| < \infty.$$

In the following, we consider the case $1 + \alpha + \beta > 0$. Denote $X'_{ni} = a_{ni} X_{ni} I(|a_{ni} X_{ni}| \le 1)$ for $i \ge 1$, $n \ge 1$.

It is easy to check that for any $\varepsilon > 0$,

$$\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} a_{ni} X_{ni}\Bigr| > \varepsilon\Bigr) \subset \Bigl(\max_{1 \le i \le n} |a_{ni} X_{ni}| > 1\Bigr) \cup \Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} X'_{ni}\Bigr| > \varepsilon\Bigr),$$

which implies that

$$P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} a_{ni} X_{ni}\Bigr| > \varepsilon\Bigr) \le P\Bigl(\max_{1 \le i \le n} |a_{ni} X_{ni}| > 1\Bigr) + P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} X'_{ni}\Bigr| > \varepsilon\Bigr)$$

$$\le \sum_{i=1}^{n} P(|a_{ni} X_{ni}| > 1) + P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} (X'_{ni} - EX'_{ni})\Bigr| > \varepsilon - \max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} EX'_{ni}\Bigr|\Bigr). \tag{3.5}$$

Firstly, we show that

$$\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} EX'_{ni}\Bigr| \to 0 \quad \text{as } n \to \infty. \tag{3.6}$$

Actually, by the condition $EX_{ni} = 0$, Lemma 2.2, (3.4) and $E|X|^{1+\alpha/r} < \infty$ (since $E|X|^{s} < \infty$), we have that

$$\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} EX'_{ni}\Bigr| = \max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} E a_{ni} X_{ni} I(|a_{ni} X_{ni}| \le 1)\Bigr| = \max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} E a_{ni} X_{ni} I(|a_{ni} X_{ni}| > 1)\Bigr|$$

$$\le \sum_{i=1}^{n} E|a_{ni} X_{ni}|^{1+\alpha/r} I(|a_{ni} X_{ni}| > 1) \le C \sum_{i=1}^{n} a_{ni}^{1+\alpha/r}\, E|X|^{1+\alpha/r} I(|X| > n^{r})$$

$$\le C \Bigl(\sup_{i \ge 1} a_{ni}\Bigr)^{\alpha/r} \sum_{i=1}^{n} a_{ni}\, E|X|^{1+\alpha/r} I(|X| > n^{r}) \le C (n^{-r})^{\alpha/r}\, n^{\alpha}\, E|X|^{1+\alpha/r} I(|X| > n^{r}) = C E|X|^{1+\alpha/r} I(|X| > n^{r}) \to 0 \quad \text{as } n \to \infty,$$

which implies (3.6). It follows from (3.5) and (3.6) that, for $n$ large enough,

$$P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} a_{ni} X_{ni}\Bigr| > \varepsilon\Bigr) \le \sum_{i=1}^{n} P(|a_{ni} X_{ni}| > 1) + P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} (X'_{ni} - EX'_{ni})\Bigr| > \frac{\varepsilon}{2}\Bigr).$$

Hence, to prove (3.3), it suffices to show that

$$I := \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} P(|a_{ni} X_{ni}| > 1) < \infty \tag{3.7}$$

and

$$J := \sum_{n=1}^{\infty} n^{\beta} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} (X'_{ni} - EX'_{ni})\Bigr| > \frac{\varepsilon}{2}\Bigr) < \infty. \tag{3.8}$$

By (3.4) and $E|X|^{s} < \infty$, we can get that

$$I = \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} P(|a_{ni} X_{ni}| > 1) \le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} P(|a_{ni} X| > 1) \le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} a_{ni}\, E|X| I(|X| > n^{r})$$

$$\le C \sum_{n=1}^{\infty} n^{\alpha+\beta}\, E|X| I(|X| > n^{r}) \le C \sum_{n=1}^{\infty} n^{\alpha+\beta} \sum_{k=n}^{\infty} E|X| I\bigl(k^{r} < |X| \le (k+1)^{r}\bigr)$$

$$= C \sum_{k=1}^{\infty} \sum_{n=1}^{k} n^{\alpha+\beta}\, E|X| I\bigl(k^{r} < |X| \le (k+1)^{r}\bigr) \le C \sum_{k=1}^{\infty} k^{1+\alpha+\beta}\, E|X| I\bigl(k^{r} < |X| \le (k+1)^{r}\bigr)$$

$$\le C \sum_{k=1}^{\infty} E|X|^{1+(1+\alpha+\beta)/r} I\bigl(k^{r} < |X| \le (k+1)^{r}\bigr) \le C E|X|^{1+(1+\alpha+\beta)/r} < \infty,$$

which implies (3.7).

By Markov's inequality, Lemma 2.1, the $C_r$ inequality and Jensen's inequality, we have for $M \ge 2$ that

$$J = \sum_{n=1}^{\infty} n^{\beta} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} (X'_{ni} - EX'_{ni})\Bigr| > \frac{\varepsilon}{2}\Bigr) \le C \sum_{n=1}^{\infty} n^{\beta}\, E \max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} (X'_{ni} - EX'_{ni})\Bigr|^{M}$$

$$\le C \sum_{n=1}^{\infty} n^{\beta} \Bigl(\sum_{i=1}^{n} E|X'_{ni}|^{2}\Bigr)^{M/2} + C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} E|X'_{ni}|^{M} =: J_1 + J_2. \tag{3.9}$$

Take

$$M > \max\Bigl(\frac{2(1+\beta)}{r[\delta - (1 + \alpha/r)]},\ 1 + \frac{1+\alpha+\beta}{r}\Bigr), \tag{3.10}$$

which implies that $\beta - r[\delta - (1+\alpha/r)]M/2 < -1$ and $\alpha + \beta - r(M-1) < -1$. Since $\delta \le s$ and $E|X|^{s} < \infty$, we have $E|X|^{\delta} < \infty$; hence, by Lemma 2.2, Markov's inequality and (3.4),

$$J_1 = \sum_{n=1}^{\infty} n^{\beta} \Bigl(\sum_{i=1}^{n} E|a_{ni} X_{ni}|^{2} I(|a_{ni} X_{ni}| \le 1)\Bigr)^{M/2} \le C \sum_{n=1}^{\infty} n^{\beta} \Bigl(\sum_{i=1}^{n} P(|a_{ni} X| > 1) + \sum_{i=1}^{n} E|a_{ni} X|^{2} I(|a_{ni} X| \le 1)\Bigr)^{M/2}$$

$$\le C \sum_{n=1}^{\infty} n^{\beta} \Bigl(\sum_{i=1}^{n} a_{ni}^{\delta}\, E|X|^{\delta}\Bigr)^{M/2} \quad (\text{since } \delta \le 2)$$

$$\le C \sum_{n=1}^{\infty} n^{\beta} \Bigl[\Bigl(\sup_{i \ge 1} a_{ni}\Bigr)^{\delta-1} \sum_{i=1}^{\infty} a_{ni}\Bigr]^{M/2} \le C \sum_{n=1}^{\infty} n^{\beta} \bigl[n^{-r(\delta-1)} \cdot n^{\alpha}\bigr]^{M/2} = C \sum_{n=1}^{\infty} n^{\beta - r[\delta-(1+\alpha/r)]M/2} < \infty.$$

By Lemma 2.2 again, we can see that

$$J_2 = \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} E|X'_{ni}|^{M} = \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} E|a_{ni} X_{ni}|^{M} I(|a_{ni} X_{ni}| \le 1)$$

$$\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} P(|a_{ni} X| > 1) + C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} E|a_{ni} X|^{M} I(|a_{ni} X| \le 1) =: J_3 + J_4. \tag{3.11}$$

$J_3 < \infty$ has been proved in (3.7). In the following, we show that $J_4 < \infty$. Denote

$$I_{nj} = \bigl\{i : [n(j+1)]^{-r} < a_{ni} \le (nj)^{-r}\bigr\}, \quad n \ge 1, j \ge 1. \tag{3.12}$$

It is easily seen that $I_{nk} \cap I_{nj} = \emptyset$ for $k \ne j$ and $\bigcup_{j=1}^{\infty} I_{nj} = \mathbb{N}$ for all $n \ge 1$. Hence,

$$J_4 = C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} E|a_{ni} X|^{M} I(|a_{ni} X| \le 1) \le C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} \sum_{i \in I_{nj}} E|a_{ni} X|^{M} I(|a_{ni} X| \le 1)$$

$$\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} (\# I_{nj}) (nj)^{-rM}\, E|X|^{M} I\bigl(|X| \le [n(j+1)]^{r}\bigr)$$

$$\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} (\# I_{nj}) (nj)^{-rM} \sum_{k=0}^{n(j+1)} E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr)$$

$$= C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} (\# I_{nj}) (nj)^{-rM} \sum_{k=0}^{2n} E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr) + C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} (\# I_{nj}) (nj)^{-rM} \sum_{k=2n+1}^{n(j+1)} E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr)$$

$$=: J_5 + J_6. \tag{3.13}$$

It is easily seen that for all $m \ge 1$,

$$n^{\alpha} \ge \sum_{i=1}^{\infty} a_{ni} = \sum_{j=1}^{\infty} \sum_{i \in I_{nj}} a_{ni} \ge \sum_{j=1}^{\infty} (\# I_{nj}) \bigl[n(j+1)\bigr]^{-r} \ge \sum_{j=m}^{\infty} (\# I_{nj}) \bigl[n(j+1)\bigr]^{-r}$$

$$= \sum_{j=m}^{\infty} (\# I_{nj}) \bigl[n(j+1)\bigr]^{-rM} \bigl[n(j+1)\bigr]^{r(M-1)} \ge \bigl[n(m+1)\bigr]^{r(M-1)} \sum_{j=m}^{\infty} (\# I_{nj}) \bigl[n(j+1)\bigr]^{-rM},$$

which, noting that $(nj)^{-rM} \le 2^{rM}\bigl[n(j+1)\bigr]^{-rM}$ for $j \ge 1$, implies that for all $m \ge 1$,

$$\sum_{j=m}^{\infty} (\# I_{nj}) (nj)^{-rM} \le C n^{\alpha} \cdot n^{-r(M-1)} \cdot m^{-r(M-1)} = C n^{\alpha - r(M-1)}\, m^{-r(M-1)}. \tag{3.14}$$

Therefore,

$$J_5 = C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} (\# I_{nj}) (nj)^{-rM} \sum_{k=0}^{2n} E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr) \le C \sum_{n=1}^{\infty} n^{\beta} \cdot n^{\alpha - r(M-1)} \sum_{k=0}^{2n} E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr)$$

$$\le C \sum_{k=0}^{1} \sum_{n=1}^{\infty} n^{\alpha+\beta-r(M-1)}\, E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr) + C \sum_{k=2}^{\infty} \sum_{n=\lfloor k/2 \rfloor}^{\infty} n^{\alpha+\beta-r(M-1)}\, E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr)$$

$$\le C + C \sum_{k=2}^{\infty} k^{1+\alpha+\beta-r(M-1)}\, E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr)$$

$$\le C + C \sum_{k=2}^{\infty} E|X|^{M + \frac{1+\alpha+\beta}{r} - (M-1)} I\bigl(k \le |X|^{1/r} < k+1\bigr) \le C + C E|X|^{1 + \frac{1+\alpha+\beta}{r}} < \infty \quad \bigl(\text{since } E|X|^{s} < \infty\bigr) \tag{3.15}$$

and

$$J_6 = C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} (\# I_{nj}) (nj)^{-rM} \sum_{k=2n+1}^{n(j+1)} E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr)$$

$$\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{k=2n+1}^{\infty} \sum_{j \ge k/n - 1} (\# I_{nj}) (nj)^{-rM}\, E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr)$$

$$\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{k=2n+1}^{\infty} n^{\alpha - r(M-1)} \Bigl(\frac{k}{n}\Bigr)^{-r(M-1)} E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr) \quad \text{(by (3.14))}$$

$$\le C \sum_{k=2}^{\infty} \sum_{n=1}^{\lfloor k/2 \rfloor} n^{\alpha+\beta} \cdot k^{-r(M-1)}\, E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr) \le C \sum_{k=2}^{\infty} k^{1+\alpha+\beta-r(M-1)}\, E|X|^{M} I\bigl(k \le |X|^{1/r} < k+1\bigr)$$

$$\le C \sum_{k=2}^{\infty} E|X|^{M + \frac{1+\alpha+\beta}{r} - (M-1)} I\bigl(k \le |X|^{1/r} < k+1\bigr) \le C E|X|^{1 + \frac{1+\alpha+\beta}{r}} < \infty \quad \bigl(\text{since } E|X|^{s} < \infty\bigr). \tag{3.16}$$

Thus, the inequality (3.8) follows from (3.9)-(3.11), (3.13), (3.15) and (3.16). This completes the proof of the theorem. □

Theorem 3.2 Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise ρ̃-mixing random variables which is stochastically dominated by a random variable $X$, with $EX_{ni} = 0$ for all $i \ge 1$, $n \ge 1$. Let $\{a_{ni}, i \ge 1, n \ge 1\}$ be an array of constants such that (3.1) holds and

$$\sum_{i=1}^{\infty} |a_{ni}| = O(1). \tag{3.17}$$

If $E|X| \log|X| < \infty$, then for all $\varepsilon > 0$,

$$\sum_{n=1}^{\infty} n^{-1} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} a_{ni} X_{ni}\Bigr| > \varepsilon\Bigr) < \infty. \tag{3.18}$$

Proof We use the same notation as in the proof of Theorem 3.1. According to that proof, we only need to show that (3.7) and (3.8) hold with $\beta = -1$ and $\alpha = 0$. The fact $E|X| \log|X| < \infty$ yields that

$$I = \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} P(|a_{ni} X_{ni}| > 1) \le C \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} P(|a_{ni} X| > 1) \le C \sum_{k=1}^{\infty} \sum_{n=1}^{k} n^{-1}\, E|X| I\bigl(k^{r} < |X| \le (k+1)^{r}\bigr)$$

$$\le C \sum_{k=1}^{\infty} \log k \cdot E|X| I\bigl(k^{r} < |X| \le (k+1)^{r}\bigr) \le C \sum_{k=1}^{\infty} E|X| \log|X|\, I\bigl(k^{r} < |X| \le (k+1)^{r}\bigr) \le C E|X| \log|X| < \infty,$$

which implies (3.7) for $\beta = -1$. By Markov's inequality, Lemmas 2.1 and 2.2, we can get that

$$J = \sum_{n=1}^{\infty} n^{-1} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} (X'_{ni} - EX'_{ni})\Bigr| > \frac{\varepsilon}{2}\Bigr) \le C \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} E|X'_{ni}|^{2} = C \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} E|a_{ni} X_{ni}|^{2} I(|a_{ni} X_{ni}| \le 1)$$

$$\le C \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} P(|a_{ni} X| > 1) + C \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} E|a_{ni} X|^{2} I(|a_{ni} X| \le 1) \le C + J_5' + J_6'. \tag{3.19}$$

Here, $J_5'$ and $J_6'$ are $J_5$ and $J_6$ with $M = 2$, respectively. Similar to the proof of (3.15), we can get that

$$J_5' \le C + C E|X| < \infty. \tag{3.20}$$

Similar to the proof of (3.16), we have

$$J_6' \le C \sum_{k=2}^{\infty} \sum_{n=1}^{\lfloor k/2 \rfloor} n^{-1} \cdot k^{-r}\, E|X|^{2} I\bigl(k \le |X|^{1/r} < k+1\bigr) \le C \sum_{k=2}^{\infty} \log k \cdot k^{-r} \cdot k^{r}\, E|X| I\bigl(k \le |X|^{1/r} < k+1\bigr) \le C E|X| \log|X| < \infty. \tag{3.21}$$

This completes the proof of the theorem from the statements above. □

By Theorems 3.1 and 3.2, we can extend the results of Baum and Katz [4] for independent and identically distributed random variables to the case of arrays of rowwise ρ̃-mixing random variables as follows.

Corollary 3.1 Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise ρ̃-mixing random variables which is stochastically dominated by a random variable $X$, with $EX_{ni} = 0$ for all $i \ge 1$, $n \ge 1$.

(i) Let $p > 1$ and $1 \le t < 2$. If $E|X|^{pt} < \infty$, then for all $\varepsilon > 0$,

$$\sum_{n=1}^{\infty} n^{p-2} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} X_{ni}\Bigr| > \varepsilon n^{1/t}\Bigr) < \infty. \tag{3.22}$$

(ii) If $E|X| \log|X| < \infty$, then for all $\varepsilon > 0$,

$$\sum_{n=1}^{\infty} \frac{1}{n} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} X_{ni}\Bigr| > \varepsilon n\Bigr) < \infty. \tag{3.23}$$

Proof (i) Let $a_{ni} = n^{-1/t}$ if $i \le n$ and $a_{ni} = 0$ if $i > n$. Then conditions (3.1) and (3.2) hold with $r = 1/t$, $\alpha = 1 - 1/t \in [0, r)$ and $\beta = p - 2 > -1$. It is easy to check that

$$1 + \alpha + \beta = p - \frac{1}{t} > 0, \qquad 1 + \frac{1+\alpha+\beta}{r} = pt = s, \qquad \frac{\alpha}{r} + 1 = t < pt = s.$$

Therefore, the desired result (3.22) follows from Theorem 3.1 immediately.

(ii) Let $a_{ni} = n^{-1}$ if $i \le n$ and $a_{ni} = 0$ if $i > n$. Then conditions (3.1) and (3.17) hold with $r = 1$. Therefore, the desired result (3.23) follows from Theorem 3.2 immediately. This completes the proof of the corollary. □
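The algebra behind the weight choice in part (i) can be sanity-checked numerically; this sketch is ours, with the value of $t$ purely illustrative.

```python
import numpy as np

# Weights from the proof of Corollary 3.1(i): a_ni = n**(-1/t) for i <= n.
# Then sup_i |a_ni| = n**(-r) with r = 1/t, and sum_i |a_ni| = n**alpha
# with alpha = 1 - 1/t, so (3.1) and (3.2) hold with alpha in [0, r).
t = 1.5
r, alpha = 1.0 / t, 1.0 - 1.0 / t
for n in [1, 10, 100, 1000]:
    a = np.full(n, n ** (-1.0 / t))      # the n nonzero weights in row n
    assert np.isclose(a.max(), n ** (-r))
    assert np.isclose(a.sum(), n ** alpha)
print(0 <= alpha < r)  # prints True, i.e. alpha lies in [0, r) for this t
```

The constraint $\alpha < r$ is exactly $1 - 1/t < 1/t$, which is where the restriction $t < 2$ in the corollary comes from.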

Similar to the proofs of Theorems 3.1-3.2 and Corollary 3.1, we can get the following Baum and Katz type result for sequences of ρ̃-mixing random variables.

Theorem 3.3 Let $\{X_n, n \ge 1\}$ be a sequence of ρ̃-mixing random variables which is stochastically dominated by a random variable $X$, with $EX_n = 0$ for $n \ge 1$.

(i) Let $p > 1$ and $1 \le t < 2$. If $E|X|^{pt} < \infty$, then for all $\varepsilon > 0$,

$$\sum_{n=1}^{\infty} n^{p-2} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} X_{i}\Bigr| > \varepsilon n^{1/t}\Bigr) < \infty. \tag{3.24}$$

(ii) If $E|X| \log|X| < \infty$, then for all $\varepsilon > 0$,

$$\sum_{n=1}^{\infty} \frac{1}{n} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} X_{i}\Bigr| > \varepsilon n\Bigr) < \infty. \tag{3.25}$$

By Theorem 3.3, we can get the Marcinkiewicz-Zygmund type strong law of large numbers for ρ̃-mixing random variables as follows.

Corollary 3.2 Let $\{X_n, n \ge 1\}$ be a sequence of ρ̃-mixing random variables which is stochastically dominated by a random variable $X$, with $EX_n = 0$ for $n \ge 1$.

(i) Let $p > 1$ and $1 \le t < 2$. If $E|X|^{pt} < \infty$, then

$$\frac{1}{n^{1/t}} \sum_{i=1}^{n} X_i \to 0 \quad \text{a.s. as } n \to \infty. \tag{3.26}$$

(ii) If $E|X| \log|X| < \infty$, then

$$\frac{1}{n} \sum_{i=1}^{n} X_i \to 0 \quad \text{a.s. as } n \to \infty. \tag{3.27}$$
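The Marcinkiewicz-Zygmund rate can be watched along a single simulated path; this illustration is ours (an i.i.d. path, which is trivially ρ̃-mixing), with the distribution and parameters chosen only for the sketch.

```python
import numpy as np

# One sample path of n**(-1/t) * S_n for i.i.d. symmetric Student-t(5)
# variables: E|X|^{pt} < infinity for pt = 3 (since df = 5 > 3), so with
# t = 1.5 the normalized partial sums should drift to 0 along the path.
rng = np.random.default_rng(42)
t = 1.5
x = rng.standard_t(df=5, size=200_000)   # mean 0, finite third moment
s = np.cumsum(x)
n = np.arange(1, x.size + 1)
ratio = np.abs(s) / n ** (1.0 / t)
print(ratio[999], ratio[-1])  # the normalized sums shrink as n grows
```

Since $|S_n|$ grows only like $\sqrt{n}$ here, the ratio decays on the order of $n^{1/2 - 1/t}$, so the convergence is visible but slow.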

Proof (i) By (3.24), we can get that for all $\varepsilon > 0$,

$$\infty > \sum_{n=1}^{\infty} n^{p-2} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} X_i\Bigr| > \varepsilon n^{1/t}\Bigr) = \sum_{k=0}^{\infty} \sum_{n=2^{k}}^{2^{k+1}-1} n^{p-2} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} X_i\Bigr| > \varepsilon n^{1/t}\Bigr)$$

$$\ge \begin{cases} \sum_{k=0}^{\infty} (2^{k})^{p-2}\, 2^{k}\, P\bigl(\max_{1 \le j \le 2^{k}} \bigl|\sum_{i=1}^{j} X_i\bigr| > \varepsilon 2^{(k+1)/t}\bigr) & \text{if } p \ge 2, \\[4pt] \sum_{k=0}^{\infty} (2^{k+1})^{p-2}\, 2^{k}\, P\bigl(\max_{1 \le j \le 2^{k}} \bigl|\sum_{i=1}^{j} X_i\bigr| > \varepsilon 2^{(k+1)/t}\bigr) & \text{if } 1 < p < 2 \end{cases}$$

$$\ge \begin{cases} \sum_{k=0}^{\infty} P\bigl(\max_{1 \le j \le 2^{k}} \bigl|\sum_{i=1}^{j} X_i\bigr| > \varepsilon 2^{(k+1)/t}\bigr) & \text{if } p \ge 2, \\[4pt] \frac{1}{2} \sum_{k=0}^{\infty} P\bigl(\max_{1 \le j \le 2^{k}} \bigl|\sum_{i=1}^{j} X_i\bigr| > \varepsilon 2^{(k+1)/t}\bigr) & \text{if } 1 < p < 2. \end{cases}$$

By the Borel-Cantelli lemma, we obtain that

$$\frac{\max_{1 \le j \le 2^{k}} \bigl|\sum_{i=1}^{j} X_i\bigr|}{2^{(k+1)/t}} \to 0 \quad \text{a.s. as } k \to \infty. \tag{3.28}$$

For any positive integer $n$, there exists a positive integer $k_0$ such that $2^{k_0-1} \le n < 2^{k_0}$. We have by (3.28) that

$$\frac{\bigl|\sum_{i=1}^{n} X_i\bigr|}{n^{1/t}} \le \max_{2^{k_0-1} \le n < 2^{k_0}} \frac{\bigl|\sum_{i=1}^{n} X_i\bigr|}{n^{1/t}} \le \frac{2^{2/t} \max_{1 \le j \le 2^{k_0}} \bigl|\sum_{i=1}^{j} X_i\bigr|}{2^{(k_0+1)/t}} \to 0 \quad \text{a.s. as } k_0 \to \infty,$$

which implies (3.26).

(ii) Similar to the proof of (i), we can get (ii) immediately. The details are omitted. This completes the proof of the corollary. □

Remark 3.1 We point out that the cases $1 + \alpha + \beta > 0$ and $1 + \alpha + \beta < 0$ are considered in Theorem 3.1, while the case $1 + \alpha + \beta = 0$ is considered in Theorem 3.2. Theorem 3.1 and Theorem 3.2 concern complete convergence for weighted sums of arrays of rowwise ρ̃-mixing random variables, while Theorem 3.3 concerns complete convergence for weighted sums of sequences of ρ̃-mixing random variables. In addition, Theorem 3.1 and Theorem 3.2 can be applied to obtain the Baum and Katz type result for arrays of rowwise ρ̃-mixing random variables, while Theorem 3.3 can be applied to establish the Marcinkiewicz-Zygmund type strong law of large numbers for sequences of ρ̃-mixing random variables.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors read and approved the final manuscript.

Acknowledgements

The authors are most grateful to the editor Jewgeni Dshalalow and the anonymous referees for careful reading of the manuscript and valuable suggestions which helped in improving an earlier version of this paper. This work was supported by the National Natural Science Foundation of China (11201001, 11171001), the Natural Science Foundation of Anhui Province (1308085QA03, 11040606M12, 1208085QA03), the 211 project of Anhui University, the Youth Science Research Fund of Anhui University, the Applied Teaching Model Curriculum of Anhui University (XJYYXKC04) and the Students Science Research Training Program of Anhui University (KYXL2012007, kyxl2013003).

Received: 15 March 2013 Accepted: 22 July 2013 Published: 29 July 2013

References

1. Hsu, PL, Robbins, H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33(2), 25-31 (1947)

2. Erdős, P: On a theorem of Hsu and Robbins. Ann. Math. Stat. 20(2), 286-291 (1949)

3. Spitzer, FL: A combinatorial lemma and its application to probability theory. Trans. Am. Math. Soc. 82(2), 323-339 (1956)

4. Baum, LE, Katz, M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 120(1), 108-123 (1965)

5. Gut, A: Complete convergence for arrays. Period. Math. Hung. 25(1), 51-75 (1992)

6. Zarei, H, Jabbari, H: Complete convergence of weighted sums under negative dependence. Stat. Pap. 52(2), 413-418 (2011)

7. Bradley, RC: On the spectral density and asymptotic normality of weakly dependent random fields. J. Theor. Probab. 5, 355-374 (1992)

8. Bryc, W, Smoleński, W: Moment conditions for almost sure convergence of weakly correlated random variables. Proc. Am. Math. Soc. 119(2), 629-635 (1993)

9. Peligrad, M: On the asymptotic normality of sequences of weak dependent random variables. J. Theor. Probab. 9(3), 703-715 (1996)

10. Peligrad, M: Maximum of partial sums and an invariance principle for a class of weak dependent random variables. Proc. Am. Math. Soc. 126(4), 1181-1189 (1998)

11. Peligrad, M, Gut, A: Almost sure results for a class of dependent random variables. J. Theor. Probab. 12, 87-104 (1999)

12. Utev, S, Peligrad, M: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 16(1), 101-115 (2003)

13. Gan, SX: Almost sure convergence for ρ̃-mixing random variable sequences. Stat. Probab. Lett. 67, 289-298 (2004)

14. Cai, GH: Strong law of large numbers for ρ̃-mixing sequences with different distributions. Discrete Dyn. Nat. Soc. 2006, Article ID 27648 (2006)

15. Zhu, MH: Strong laws of large numbers for arrays of rowwise ρ̃-mixing random variables. Discrete Dyn. Nat. Soc. 2007, Article ID 74296 (2007)

16. Wu, QY, Jiang, YY: Some strong limit theorems for ρ̃-mixing sequences of random variables. Stat. Probab. Lett. 78(8), 1017-1023 (2008)

17. Wu, QY, Jiang, YY: Strong limit theorems for weighted product sums of ρ̃-mixing sequences of random variables. J. Inequal. Appl. 2009, Article ID 174768 (2009)

18. An, J, Yuan, DM: Complete convergence of weighted sums for ρ̃-mixing sequence of random variables. Stat. Probab. Lett. 78(12), 1466-1472 (2008)

19. Kuczmaszewska, A: On Chung-Teicher type strong law of large numbers for ρ̃-mixing random variables. Discrete Dyn. Nat. Soc. 2008, Article ID 140548 (2008)

20. Sung, SH: Complete convergence for weighted sums of ρ̃-mixing random variables. Discrete Dyn. Nat. Soc. 2010, Article ID 630608 (2010)

21. Wang, XJ, Hu, SH, Shen, Y, Yang, WZ: Some new results for weakly dependent random variable sequences. Chinese J. Appl. Probab. Statist. 26(6), 637-648 (2010)

22. Wang, XJ, Xia, FX, Ge, MM, Hu, SH, Yang, WZ: Complete consistency of the estimator of nonparametric regression models based on ρ̃-mixing sequences. Abstr. Appl. Anal. 2012, Article ID 907286 (2012)

23. Wang, XJ, Li, XQ, Yang, WZ, Hu, SH: On complete convergence for arrays of rowwise weakly dependent random variables. Appl. Math. Lett. 25, 1916-1920 (2012)

24. Wu, QY: Sufficient and necessary conditions of complete convergence for weighted sums of PNQD random variables. J. Appl. Math. 2012, Article ID 104390 (2012)

25. Wu, QY: A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables. J. Inequal. Appl. 2012, 50 (2012). doi:10.1186/1029-242X-2012-50

26. Wang, XJ, Hu, SH, Yang, WZ: Convergence properties for asymptotically almost negatively associated sequence. Discrete Dyn. Nat. Soc. 2010, Article ID 218380 (2010)

27. Wang, XJ, Hu, SH, Yang, WZ: Complete convergence for arrays of rowwise asymptotically almost negatively associated random variables. Discrete Dyn. Nat. Soc. 2011, Article ID 717126 (2011)

28. Baek, JI, Choi, IB, Niu, SL: On the complete convergence of weighted sums for arrays of negatively associated variables. J. Korean Stat. Soc. 37, 73-80 (2008)

29. Wu, QY: Probability Limit Theory for Mixing Sequences. Science Press of China, Beijing (2006)

30. Wang, XJ, Hu, SH, Yang, WZ, Wang, XH: On complete convergence of weighted sums for arrays of rowwise asymptotically almost negatively associated random variables. Abstr. Appl. Anal. 2012, Article ID 315138 (2012)

doi:10.1186/1029-242X-2013-356

Cite this article as: Shen et al.: Complete convergence for weighted sums of arrays of rowwise ρ̃-mixing random variables. Journal of Inequalities and Applications 2013, 2013:356.
