
Zhang, Journal of Inequalities and Applications (2015) 2015:245, DOI 10.1186/s13660-015-0766-5

Journal of Inequalities and Applications

a SpringerOpen Journal

RESEARCH

Open Access

Complete moment convergence for moving average process generated by ρ⁻-mixing random variables


Yong Zhang*

*Correspondence: zyong2661@jlu.edu.cn. College of Mathematics, Jilin University, Changchun, 130012, P.R. China

Abstract

Let {Y_i, −∞ < i < ∞} be a sequence of ρ⁻-mixing random variables without the assumption of identical distributions, and let {a_i, −∞ < i < ∞} be an absolutely summable sequence of real numbers. In this paper, under some suitable conditions, we establish the complete moment convergence for the partial sums of the moving average process {X_n = Σ_{i=−∞}^∞ a_i Y_{i+n}, n ≥ 1}. These results extend and improve the corresponding results of Li and Zhang (Stat. Probab. Lett. 70:191-197, 2004) from the NA case to the ρ⁻-mixing setting.

Keywords: complete moment convergence; moving average process; ρ⁻-mixing; Marcinkiewicz-Zygmund strong law of large numbers

1 Introduction

Let {Y_i, −∞ < i < ∞} be a sequence of random variables, let {a_i, −∞ < i < ∞} be an absolutely summable sequence of real numbers, and for n ≥ 1 set X_n = Σ_{i=−∞}^∞ a_i Y_{i+n}. The limit behavior of the moving average process {X_n, n ≥ 1} has been investigated extensively by many authors. For example, Baek et al. [1] obtained the convergence of moving average processes, Burton and Dehling [2] obtained a large deviation principle, Ibragimov [3] established the central limit theorem, and Rackauskas and Suquet [4] proved functional central limit theorems for self-normalized partial sums of linear processes. Chen et al. [5], Guo [6], Kim et al. [7, 8], Ko et al. [9], Li et al. [10], Li and Zhang [11], Qiu et al. [12], Wang and Hu [13], Yang and Hu [14], Zhang [15], Zhen et al. [16], Zhou [17], Zhou and Lin [18], and Shen et al. [19] obtained the complete (moment) convergence of moving average processes based on sequences of dependent (or mixing) random variables. But very few results are known for moving average processes based on ρ⁻-mixing random variables. First, we recall some definitions.
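To fix ideas, here is a minimal numerical sketch of the process X_n = Σ_i a_i Y_{i+n}. It assumes iid Gaussian innovations (a special case of the dependence structures considered below) and a geometrically decaying coefficient sequence truncated to a finite window; both choices are illustrative, not part of the theory.

```python
import numpy as np

def moving_average(Y, a, offsets):
    """X_n = sum_i a_i * Y_{i+n} over a finite set of offsets i.

    Only indices inside the array Y are used, which amounts to truncating
    the (absolutely summable) coefficient sequence to a finite window.
    """
    N = len(Y)
    X = []
    for n in range(max(-min(offsets), 0), N - max(offsets)):
        X.append(sum(a_i * Y[n + i] for a_i, i in zip(a, offsets)))
    return np.array(X)

rng = np.random.default_rng(0)
Y = rng.standard_normal(10_000)          # iid innovations (illustrative)
offsets = list(range(-5, 6))             # truncated window of i
a = [0.5 ** abs(i) for i in offsets]     # absolutely summable: sum |a_i| < inf
X = moving_average(Y, a, offsets)
```

Each X_n is then a short-memory linear combination of nearby innovations, which is the object whose partial sums the paper studies.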

For two nonempty disjoint sets S, T of integers, we define dist(S, T) = min{|j − k|: j ∈ S, k ∈ T}. Let σ(S) be the σ-field generated by {Y_k, k ∈ S}, and define σ(T) similarly.

Definition 1.1 A sequence {Y_i, −∞ < i < ∞} is called ρ⁻-mixing if

ρ⁻(s) = sup{ρ⁻(S, T): S, T ⊂ Z, dist(S, T) ≥ s} → 0 as s → ∞,

where

ρ⁻(S, T) = 0 ∨ sup{corr(f(Y_i, i ∈ S), g(Y_j, j ∈ T))},

and the supremum is taken over all coordinatewise increasing real functions f on R^S and g on R^T.

© 2015 Zhang. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Definition 1.2 A sequence {Y_i, −∞ < i < ∞} is called ρ*-mixing if ρ*(s) = sup{ρ(S, T): S, T ⊂ Z, dist(S, T) ≥ s} → 0 as s → ∞, where

ρ(S, T) = sup{|corr(f, g)|: f ∈ L₂(σ(S)), g ∈ L₂(σ(T))}.

Definition 1.3 A sequence {Y_i, i ∈ Z} is called negatively associated (NA) if for every pair of disjoint subsets S, T of Z and any real coordinatewise increasing functions f on R^S and g on R^T,

Cov{f(Y_i, i ∈ S), g(Y_j, j ∈ T)} ≤ 0.
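A Monte Carlo sketch of this definition: a negatively correlated bivariate Gaussian pair is a standard example of an NA pair, so every pair of increasing functions should exhibit nonpositive covariance. The particular functions f, g and the correlation level below are illustrative assumptions, not part of the definition.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
Z1, Z2 = rng.standard_normal(n), rng.standard_normal(n)
rho = -0.8
U = Z1
V = rho * Z1 + np.sqrt(1 - rho**2) * Z2        # corr(U, V) = rho < 0

# A few coordinatewise increasing test functions (illustrative choices)
increasing = [np.tanh, np.arctan, lambda t: t + t**3]
# Empirical Cov{f(U), g(V)} for every pair; all should be negative
covs = [float(np.cov(f(U), g(V))[0, 1]) for f in increasing for g in increasing]
```

The empirical covariances estimate Cov{f(U), g(V)}, which NA requires to be ≤ 0.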

Definition 1.4 A sequence {Y_i, −∞ < i < ∞} of random variables is said to be stochastically dominated by a random variable Y if there exists a constant C such that

P{|Y_i| > x} ≤ C P{|Y| > x},  x ≥ 0, −∞ < i < ∞.

Definition 1.5 A real-valued function l(x), positive and measurable on [0, ∞), is said to be slowly varying at infinity if for each k > 0, lim_{x→∞} l(kx)/l(x) = 1.
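This definition can be checked numerically; the sketch below uses the canonical slowly varying choice l(x) = log x and, for contrast, the power function x^0.1, which is not slowly varying. The evaluation point and the values of k are arbitrary illustrative choices.

```python
import numpy as np

# l(x) = log x: l(kx)/l(x) -> 1 as x -> infinity, for every fixed k > 0
x = 1e300
ratios = {k: float(np.log(k * x) / np.log(x)) for k in (0.5, 2.0, 100.0)}

# Counterexample: for l(x) = x**0.1 the ratio tends to 2**0.1, not to 1
power_ratio = float((2.0 * x) ** 0.1 / x ** 0.1)
```

All ratios for the logarithm are close to 1 at large x, while the power function keeps a fixed ratio 2^0.1 ≈ 1.07 away from 1.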

Li and Zhang [11] obtained the following complete moment convergence of moving average processes under NA assumptions.

Theorem A Suppose that {X_n = Σ_{i=−∞}^∞ a_i ε_{i+n}, n ≥ 1}, where {a_i, −∞ < i < ∞} is a sequence of real numbers with Σ_{i=−∞}^∞ |a_i| < ∞ and {ε_i, −∞ < i < ∞} is a sequence of identically distributed NA random variables with Eε_1 = 0, Eε_1² < ∞. Let h be a function slowly varying at infinity, 1 ≤ q < 2, r > 1 + q/2. Then E|ε_1|^r h(|ε_1|^q) < ∞ implies

Σ_{n=1}^∞ n^{r/q−2−1/q} h(n) E{ max_{1≤k≤n} |Σ_{i=1}^k X_i| − εn^{1/q} }₊ < ∞

for all ε > 0.

Chen et al. [20] also established the following results for moving average processes under NA assumptions.

Theorem B Let q > 0, 1 ≤ p < 2, r ≥ 1, rp ≠ 1. Suppose that {X_n = Σ_{i=−∞}^∞ a_i ε_{i+n}, n ≥ 1}, where {a_i, −∞ < i < ∞} is a sequence of real numbers with Σ_{i=−∞}^∞ |a_i| < ∞ and {ε_i, −∞ < i < ∞} is a sequence of identically distributed NA random variables. If Eε_1 = 0 and E|ε_1|^{rp} < ∞, then

Σ_{n=1}^∞ n^{r−2} P{ max_{1≤k≤n} |Σ_{i=1}^k X_i| > εn^{1/p} } < ∞

for all ε > 0. Furthermore, if Eε_1 = 0 and E|ε_1|^{rp} < ∞ for q < rp, E|ε_1|^{rp} log(1 + |ε_1|) < ∞ for q = rp, E|ε_1|^q < ∞ for q > rp, then

Σ_{n=1}^∞ n^{r−2−q/p} E{ max_{1≤k≤n} |Σ_{i=1}^k X_i| − εn^{1/p} }₊^q < ∞

for all ε > 0.

Recently, Zhou and Lin [18] obtained the following complete moment convergence of moving average processes under ρ-mixing assumptions.

Theorem C Let h be a function slowly varying at infinity, p ≥ 1, pα > 1 and α > 1/2. Suppose that {X_n, n ≥ 1} is a moving average process based on a sequence {Y_i, −∞ < i < ∞} of identically distributed ρ-mixing random variables. If EY_1 = 0 and E|Y_1|^{p+δ} h(|Y_1|^{1/α}) < ∞ for some δ > 0, then for all ε > 0,

Σ_{n=1}^∞ n^{pα−2−α} h(n) E{ max_{1≤k≤n} |Σ_{i=1}^k X_i| − εn^α }₊ < ∞

and

Σ_{n=1}^∞ n^{pα−2} h(n) E{ sup_{k≥n} k^{−α} |Σ_{i=1}^k X_i| − ε }₊ < ∞.

Obviously, the class of ρ⁻-mixing random variables includes NA and ρ*-mixing random variables, which have many applications; their limit properties have aroused wide interest recently, and many results have been obtained. We refer to Wang and Lu [21] for a Rosenthal-type moment inequality and weak convergence, Budsaba et al. [22, 23] for complete convergence of moving average processes based on a ρ⁻-mixing sequence, and Tan et al. [24] for the almost sure central limit theorem. But there are few results on the complete moment convergence of moving average processes based on a ρ⁻-mixing sequence. Therefore, in this paper, we establish some results on the complete moment convergence for maximum partial sums under fewer restrictions. Throughout the sequel, C represents a positive constant whose value may change from one appearance to the next, and I{A} denotes the indicator function of the set A.

2 Preliminary lemmas

In this section, we list some lemmas which will be useful in proving our main results.

Lemma 2.1 (Zhou [17]) If l is slowly varying at infinity, then

(1) Σ_{n=1}^m n^s l(n) ≤ C m^{s+1} l(m) for s > −1 and every positive integer m;
(2) Σ_{n=m}^∞ n^s l(n) ≤ C m^{s+1} l(m) for s < −1 and every positive integer m.
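Both bounds can be sanity-checked numerically. The sketch below uses the illustrative choice l(n) = log(n + 1); the exponents s = 0.5 and s = −2.5, the value of m, and the tail truncation point are arbitrary assumptions for the check.

```python
import numpy as np

def l(n):
    return np.log(n + 1.0)   # a slowly varying choice for l

def head_sum(m, s):          # sum_{n=1}^{m} n^s l(n), the case s > -1
    n = np.arange(1, m + 1, dtype=float)
    return float(np.sum(n**s * l(n)))

def tail_sum(m, s, cutoff=10**6):   # sum_{n=m}^{inf} n^s l(n), truncated; s < -1
    n = np.arange(m, cutoff, dtype=float)
    return float(np.sum(n**s * l(n)))

m = 1000
# Lemma 2.1(1): head_sum(m, s) <= C * m^(s+1) * l(m) for s > -1
r1 = head_sum(m, 0.5) / (m**1.5 * l(m))
# Lemma 2.1(2): tail_sum(m, s) <= C * m^(s+1) * l(m) for s < -1
r2 = tail_sum(m, -2.5) / (m**(-1.5) * l(m))
```

The ratios r1 and r2 stay bounded (here by 1), consistent with the lemma's bounds with a modest constant C.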

Lemma 2.2 (Wang and Lu [21]) For a real number q ≥ 2, if {X_n, n ≥ 1} is a sequence of ρ⁻-mixing random variables with EX_i = 0 and E|X_i|^q < ∞ for every i ≥ 1, then for all n ≥ 1 there is a positive constant C = C(q, ρ⁻(·)) such that

E( max_{1≤k≤n} |Σ_{i=1}^k X_i| )^q ≤ C[ Σ_{i=1}^n E|X_i|^q + ( Σ_{i=1}^n EX_i² )^{q/2} ].

Lemma 2.3 (Wang et al. [25]) Let {X_n, n ≥ 1} be a sequence of random variables which is stochastically dominated by a random variable X. Then for any a > 0 and b > 0,

E|X_n|^a I{|X_n| ≤ b} ≤ C[ E|X|^a I{|X| ≤ b} + b^a P(|X| > b) ],
E|X_n|^a I{|X_n| > b} ≤ C E|X|^a I{|X| > b}.
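A Monte Carlo sketch of these two inequalities, under the illustrative assumption X_n = Z/2 dominated by X = Z for a standard normal Z (so the domination constant C = 1 works), with arbitrary choices a = 2 and b = 1.

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.standard_normal(100_000)
Xn, Y = Z / 2, Z          # P(|Xn| > x) <= P(|Y| > x): domination with C = 1
a, b, C = 2.0, 1.0, 1.0

# First inequality: truncated moment of Xn vs truncated moment of Y plus tail
lhs1 = float(np.mean(np.abs(Xn)**a * (np.abs(Xn) <= b)))
rhs1 = C * (float(np.mean(np.abs(Y)**a * (np.abs(Y) <= b)))
            + b**a * float(np.mean(np.abs(Y) > b)))

# Second inequality: tail moment of Xn vs tail moment of Y
lhs2 = float(np.mean(np.abs(Xn)**a * (np.abs(Xn) > b)))
rhs2 = C * float(np.mean(np.abs(Y)**a * (np.abs(Y) > b)))
```

Both empirical left-hand sides fall well below the corresponding right-hand sides, as the lemma predicts.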

3 Main results and proofs

Theorem 3.1 Let l be a function slowly varying at infinity, p ≥ 1, α > 1/2, αp > 1. Assume that {a_i, −∞ < i < ∞} is an absolutely summable sequence of real numbers. Suppose that {X_n = Σ_{i=−∞}^∞ a_i Y_{i+n}, n ≥ 1} is a moving average process generated by a sequence {Y_i, −∞ < i < ∞} of ρ⁻-mixing random variables which is stochastically dominated by a random variable Y. If EY_i = 0 for 1/2 < α ≤ 1, E|Y|^p l(|Y|^{1/α}) < ∞ for p > 1, and E|Y|^{1+δ} < ∞ for p = 1 and some δ > 0, then for any ε > 0

Σ_{n=1}^∞ n^{αp−2−α} l(n) E{ max_{1≤k≤n} |Σ_{i=1}^k X_i| − εn^α }₊ < ∞,    (3.1)

Σ_{n=1}^∞ n^{αp−2} l(n) E{ sup_{k≥n} k^{−α} |Σ_{i=1}^k X_i| − ε }₊ < ∞.    (3.2)

Proof First we prove (3.1). Let f(n) = n^{αp−2−α} l(n), and for x > 0 let

Y_j^{(1)} = −x I{Y_j < −x} + Y_j I{|Y_j| ≤ x} + x I{Y_j > x},  Y_j^{(2)} = Y_j − Y_j^{(1)}

be the monotone truncations of {Y_j, −∞ < j < ∞}. Then by the property of ρ⁻-mixing random variables (cf. Property P2 in Wang and Lu [21]), {Y_j^{(1)} − EY_j^{(1)}, −∞ < j < ∞} and {Y_j^{(2)}, −∞ < j < ∞} are two sequences of ρ⁻-mixing random variables. Note that Σ_{k=1}^n X_k = Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+n} Y_j. Since Σ_{i=−∞}^∞ |a_i| < ∞, by Lemma 2.3 we have, for x ≥ n^α, if α > 1,

x^{−1} | E Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+n} Y_j^{(1)} |
≤ x^{−1} Σ_{i=−∞}^∞ |a_i| Σ_{j=i+1}^{i+n} [ E|Y_j| I{|Y_j| ≤ x} + x P(|Y_j| > x) ]
≤ C x^{−1} n [ E|Y| I{|Y| ≤ x} + x P(|Y| > x) ] ≤ C n^{1−α} → 0, as n → ∞.
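The monotone truncation used in this proof can be sketched in a few lines; np.clip reproduces the three-case definition of Y^(1) exactly, and the properties asserted below (decomposition, boundedness, the bound |Y^(2)| ≤ |Y| I{|Y| > x}, and monotonicity in Y) are the ones the proof relies on. The sample and the level x are illustrative.

```python
import numpy as np

def truncate(y, x):
    """Y^(1) = -x*I{Y < -x} + Y*I{|Y| <= x} + x*I{Y > x}; Y^(2) = Y - Y^(1)."""
    y1 = np.clip(y, -x, x)   # equals the three-case definition above
    return y1, y - y1

rng = np.random.default_rng(3)
y = 3.0 * rng.standard_normal(10_000)
x = 1.5
y1, y2 = truncate(y, x)
# Truncation is a monotone (order-preserving) map of y
order_preserved = bool(np.all(np.diff(truncate(np.sort(y), x)[0]) >= 0))
```

Monotonicity is what lets the ρ⁻-mixing property pass from {Y_j} to the truncated sequences.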

If 1/2 < α ≤ 1, note that αp > 1 implies p > 1. By E|Y|^p l(|Y|^{1/α}) < ∞ and the slow variation of l, for any 0 < τ < p − 1/α we have E|Y|^{p−τ} < ∞. Then, noting EY_i = 0, by Lemma 2.3 we have

x^{−1} | E Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+n} Y_j^{(1)} | = x^{−1} | E Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+n} Y_j^{(2)} |
≤ C x^{−1} Σ_{i=−∞}^∞ |a_i| Σ_{j=i+1}^{i+n} E|Y_j| I{|Y_j| > x} ≤ C x^{−1} n E|Y| I{|Y| > x}
≤ C x^{1/α−1} E|Y| I{|Y| > x} ≤ C E|Y|^{1/α} I{|Y| > x}
≤ C E|Y|^{p−τ} I{|Y| > x} → 0, as x → ∞.

Hence for x ≥ n^α with n large enough, we get

x^{−1} | E Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+n} Y_j^{(1)} | < ε/4.    (3.3)

Therefore

Σ_{n=1}^∞ f(n) E{ max_{1≤k≤n} |Σ_{i=1}^k X_i| − εn^α }₊
= Σ_{n=1}^∞ f(n) ∫_{εn^α}^∞ P{ max_{1≤k≤n} |Σ_{i=1}^k X_i| > x } dx
≤ C Σ_{n=1}^∞ f(n) ∫_{n^α}^∞ P{ max_{1≤k≤n} |Σ_{i=1}^k X_i| > εx } dx
≤ C Σ_{n=1}^∞ f(n) ∫_{n^α}^∞ P{ max_{1≤k≤n} | Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+k} Y_j^{(2)} | > εx/2 } dx
  + C Σ_{n=1}^∞ f(n) ∫_{n^α}^∞ P{ max_{1≤k≤n} | Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+k} (Y_j^{(1)} − EY_j^{(1)}) | > εx/4 } dx
=: I_1 + I_2,

where the last inequality uses (3.3).

First we show I_1 < ∞. Noting |Y_j^{(2)}| ≤ |Y_j| I{|Y_j| > x}, by Markov's inequality and Lemma 2.3 we have

I_1 ≤ C Σ_{n=1}^∞ f(n) ∫_{n^α}^∞ x^{−1} E max_{1≤k≤n} | Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+k} Y_j^{(2)} | dx
≤ C Σ_{n=1}^∞ f(n) ∫_{n^α}^∞ x^{−1} Σ_{i=−∞}^∞ |a_i| Σ_{j=i+1}^{i+n} E|Y_j^{(2)}| dx
≤ C Σ_{n=1}^∞ n f(n) ∫_{n^α}^∞ x^{−1} E|Y| I{|Y| > x} dx
= C Σ_{n=1}^∞ n f(n) Σ_{m=n}^∞ ∫_{m^α}^{(m+1)^α} x^{−1} E|Y| I{|Y| > x} dx
≤ C Σ_{n=1}^∞ n f(n) Σ_{m=n}^∞ m^{−1} E|Y| I{|Y| > m^α}
= C Σ_{m=1}^∞ m^{−1} E|Y| I{|Y| > m^α} Σ_{n=1}^m n^{αp−1−α} l(n).

If p > 1, then αp − 1 − α > −1, and by Lemma 2.1 we obtain

I_1 ≤ C Σ_{m=1}^∞ m^{αp−1−α} l(m) E|Y| I{|Y| > m^α}
= C Σ_{m=1}^∞ m^{αp−1−α} l(m) Σ_{k=m}^∞ E|Y| I{k^α < |Y| ≤ (k+1)^α}
= C Σ_{k=1}^∞ E|Y| I{k^α < |Y| ≤ (k+1)^α} Σ_{m=1}^k m^{αp−1−α} l(m)
≤ C Σ_{k=1}^∞ k^{αp−α} l(k) E|Y| I{k^α < |Y| ≤ (k+1)^α}
≤ C E|Y|^p l(|Y|^{1/α}) < ∞.

If p = 1, notice that E|Y|^{1+δ} < ∞ implies E|Y|^{1+δ'} l(|Y|^{1/α}) < ∞ for any 0 < δ' < δ; then by Lemma 2.1 we obtain

I_1 ≤ C Σ_{m=1}^∞ m^{−1} E|Y| I{|Y| > m^α} Σ_{n=1}^m n^{−1} l(n)
≤ C Σ_{m=1}^∞ m^{−1} E|Y| I{|Y| > m^α} Σ_{n=1}^m n^{−1+αδ'} l(n)
≤ C Σ_{m=1}^∞ m^{αδ'−1} l(m) E|Y| I{|Y| > m^α}
≤ C E|Y|^{1+δ'} l(|Y|^{1/α}) ≤ C E|Y|^{1+δ} < ∞.

So, we get

I_1 < ∞.    (3.4)

Next we show I_2 < ∞. By Markov's inequality, the Hölder inequality, and Lemma 2.2, we conclude

I_2 ≤ C Σ_{n=1}^∞ f(n) ∫_{n^α}^∞ x^{−r} E max_{1≤k≤n} | Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+k} (Y_j^{(1)} − EY_j^{(1)}) |^r dx
≤ C Σ_{n=1}^∞ f(n) ∫_{n^α}^∞ x^{−r} ( Σ_{i=−∞}^∞ |a_i| )^{r−1} Σ_{i=−∞}^∞ |a_i| E max_{1≤k≤n} | Σ_{j=i+1}^{i+k} (Y_j^{(1)} − EY_j^{(1)}) |^r dx
≤ C Σ_{n=1}^∞ f(n) ∫_{n^α}^∞ x^{−r} Σ_{i=−∞}^∞ |a_i| Σ_{j=i+1}^{i+n} E|Y_j^{(1)} − EY_j^{(1)}|^r dx
  + C Σ_{n=1}^∞ f(n) ∫_{n^α}^∞ x^{−r} Σ_{i=−∞}^∞ |a_i| ( Σ_{j=i+1}^{i+n} E(Y_j^{(1)} − EY_j^{(1)})² )^{r/2} dx
=: I_{21} + I_{22},    (3.5)

where r ≥ 2 will be specified later and the last inequality follows from Lemma 2.2.

For I_{21}, if p > 1, take r > max{2, p}; then by the C_r inequality, Lemma 2.3, and Lemma 2.1 we get

I_{21} ≤ C Σ_{n=1}^∞ f(n) ∫_{n^α}^∞ x^{−r} Σ_{i=−∞}^∞ |a_i| Σ_{j=i+1}^{i+n} [ E|Y_j|^r I{|Y_j| ≤ x} + x^r P(|Y_j| > x) ] dx
≤ C Σ_{n=1}^∞ n f(n) ∫_{n^α}^∞ [ x^{−r} E|Y|^r I{|Y| ≤ x} + P(|Y| > x) ] dx
= C Σ_{n=1}^∞ n f(n) Σ_{m=n}^∞ ∫_{m^α}^{(m+1)^α} [ x^{−r} E|Y|^r I{|Y| ≤ x} + P(|Y| > x) ] dx
≤ C Σ_{n=1}^∞ n f(n) Σ_{m=n}^∞ [ m^{α(1−r)−1} E|Y|^r I{|Y| ≤ (m+1)^α} + m^{α−1} P(|Y| > m^α) ]
= C Σ_{m=1}^∞ [ m^{α(1−r)−1} E|Y|^r I{|Y| ≤ (m+1)^α} + m^{α−1} P(|Y| > m^α) ] Σ_{n=1}^m n f(n)
≤ C Σ_{m=1}^∞ m^{α(p−r)−1} l(m) Σ_{k=1}^m E|Y|^r I{k^α < |Y| ≤ (k+1)^α}
  + C Σ_{m=1}^∞ m^{αp−1} l(m) Σ_{k=m}^∞ E I{k^α < |Y| ≤ (k+1)^α}
= C Σ_{k=1}^∞ E|Y|^r I{k^α < |Y| ≤ (k+1)^α} Σ_{m=k}^∞ m^{α(p−r)−1} l(m)
  + C Σ_{k=1}^∞ E I{k^α < |Y| ≤ (k+1)^α} Σ_{m=1}^k m^{αp−1} l(m)
≤ C Σ_{k=1}^∞ k^{α(p−r)} l(k) E|Y|^p |Y|^{r−p} I{k^α < |Y| ≤ (k+1)^α}
  + C Σ_{k=1}^∞ k^{αp} l(k) E|Y|^p |Y|^{−p} I{k^α < |Y| ≤ (k+1)^α}
≤ C E|Y|^p l(|Y|^{1/α}) < ∞.    (3.6)

For I_{21}, if p = 1, take r > max{1 + δ', 2}, where 0 < δ' < δ; then by the same argument as above we have

I_{21} ≤ C Σ_{m=1}^∞ [ m^{α(1−r)−1} E|Y|^r I{|Y| ≤ (m+1)^α} + m^{α−1} P(|Y| > m^α) ] Σ_{n=1}^m n f(n)
≤ C Σ_{m=1}^∞ [ m^{α(1−r)−1} E|Y|^r I{|Y| ≤ (m+1)^α} + m^{α−1} P(|Y| > m^α) ] Σ_{n=1}^m n^{−1+αδ'} l(n)
≤ C Σ_{m=1}^∞ [ m^{α(1−r+δ')−1} l(m) E|Y|^r I{|Y| ≤ (m+1)^α} + m^{α(1+δ')−1} l(m) E I{|Y| > m^α} ]
≤ C E|Y|^{1+δ'} l(|Y|^{1/α}) ≤ C E|Y|^{1+δ} < ∞.    (3.7)

For I_{22}, if 1 ≤ p < 2, take r > 2, and note that αp + r/2 − αpr/2 − 1 = (αp − 1)(1 − r/2) < 0. By the C_r inequality, Lemma 2.3, and Lemma 2.1 we obtain

I_{22} ≤ C Σ_{n=1}^∞ n^{r/2} f(n) ∫_{n^α}^∞ x^{−r} [ (E|Y|² I{|Y| ≤ x})^{r/2} + x^r P^{r/2}(|Y| > x) ] dx
= C Σ_{n=1}^∞ n^{r/2} f(n) Σ_{m=n}^∞ ∫_{m^α}^{(m+1)^α} [ x^{−r} (E|Y|² I{|Y| ≤ x})^{r/2} + P^{r/2}(|Y| > x) ] dx
≤ C Σ_{n=1}^∞ n^{r/2} f(n) Σ_{m=n}^∞ [ m^{α(1−r)−1} (E|Y|² I{|Y| ≤ (m+1)^α})^{r/2} + m^{α−1} P^{r/2}(|Y| > m^α) ]
= C Σ_{m=1}^∞ [ m^{α(1−r)−1} (E|Y|² I{|Y| ≤ (m+1)^α})^{r/2} + m^{α−1} P^{r/2}(|Y| > m^α) ] Σ_{n=1}^m n^{r/2} f(n)
≤ C Σ_{m=1}^∞ m^{α(p−r)+r/2−2} l(m) (E|Y|^p |Y|^{2−p} I{|Y| ≤ (m+1)^α})^{r/2}
  + C Σ_{m=1}^∞ m^{αp+r/2−2} l(m) (E|Y|^p |Y|^{−p} I{|Y| > m^α})^{r/2}
≤ C Σ_{m=1}^∞ m^{αp+r/2−αpr/2−2} l(m) (E|Y|^p)^{r/2} < ∞.    (3.8)

For I_{22}, if p ≥ 2, take r > (αp − 1)/(α − 1/2) > 2; then α(p − r) + r/2 − 2 < −1, and therefore one gets

I_{22} ≤ C Σ_{m=1}^∞ [ m^{α(1−r)−1} (E|Y|² I{|Y| ≤ (m+1)^α})^{r/2} + m^{α−1} P^{r/2}(|Y| > m^α) ] Σ_{n=1}^m n^{r/2} f(n)
≤ C Σ_{m=1}^∞ m^{α(p−r)+r/2−2} l(m) (E|Y|² I{|Y| ≤ (m+1)^α})^{r/2}
  + C Σ_{m=1}^∞ m^{αp+r/2−2} l(m) (E|Y|² |Y|^{−2} I{|Y| > m^α})^{r/2}
≤ C Σ_{m=1}^∞ m^{α(p−r)+r/2−2} l(m) (E|Y|²)^{r/2} < ∞.    (3.9)

Thus, (3.1) can be deduced by combining (3.3)-(3.9).

Now we show (3.2). By Lemma 2.1 and (3.1) we have

Σ_{n=1}^∞ n^{αp−2} l(n) E{ sup_{k≥n} k^{−α} |Σ_{j=1}^k X_j| − ε }₊
= Σ_{n=1}^∞ n^{αp−2} l(n) ∫_0^∞ P{ sup_{k≥n} k^{−α} |Σ_{j=1}^k X_j| > ε + t } dt
= Σ_{i=1}^∞ Σ_{n=2^{i−1}}^{2^i−1} n^{αp−2} l(n) ∫_0^∞ P{ sup_{k≥n} k^{−α} |Σ_{j=1}^k X_j| > ε + t } dt
≤ C Σ_{i=1}^∞ 2^{i(αp−1)} l(2^i) ∫_0^∞ P{ sup_{k≥2^{i−1}} k^{−α} |Σ_{j=1}^k X_j| > ε + t } dt
≤ C Σ_{i=1}^∞ 2^{i(αp−1)} l(2^i) Σ_{s=i}^∞ ∫_0^∞ P{ max_{2^{s−1}≤k<2^s} k^{−α} |Σ_{j=1}^k X_j| > ε + t } dt
= C Σ_{s=1}^∞ ∫_0^∞ P{ max_{2^{s−1}≤k<2^s} k^{−α} |Σ_{j=1}^k X_j| > ε + t } dt Σ_{i=1}^s 2^{i(αp−1)} l(2^i)
≤ C Σ_{s=1}^∞ 2^{s(αp−1)} l(2^s) ∫_0^∞ P{ max_{1≤k≤2^s} |Σ_{j=1}^k X_j| > (ε + t) 2^{(s−1)α} } dt
= C Σ_{s=1}^∞ 2^{s(αp−1−α)} l(2^s) ∫_0^∞ P{ max_{1≤k≤2^s} |Σ_{j=1}^k X_j| > ε 2^{(s−1)α} + y } dy
≤ C Σ_{n=1}^∞ n^{αp−2−α} l(n) ∫_0^∞ P{ max_{1≤k≤n} |Σ_{j=1}^k X_j| > ε₀ n^α + y } dy
= C Σ_{n=1}^∞ n^{αp−2−α} l(n) E{ max_{1≤k≤n} |Σ_{j=1}^k X_j| − ε₀ n^α }₊ < ∞,

where ε₀ = ε 2^{−α} and the last series is finite by (3.1), since (3.1) holds for every ε > 0. Hence the proof of Theorem 3.1 is completed. □

The next theorem treats the case αp = 1.

Theorem 3.2 Let l be a function slowly varying at infinity, 1 ≤ p < 2. Assume that Σ_{i=−∞}^∞ |a_i|^θ < ∞, where θ ∈ (0, 1) if p = 1 and θ = 1 if 1 < p < 2. Suppose that {X_n = Σ_{i=−∞}^∞ a_i Y_{i+n}, n ≥ 1} is a moving average process generated by a sequence {Y_i, −∞ < i < ∞} of ρ⁻-mixing random variables which is stochastically dominated by a random variable Y. If EY_i = 0 and E|Y|^p l(|Y|^p) < ∞, then for any ε > 0

Σ_{n=1}^∞ n^{−1−1/p} l(n) E{ max_{1≤k≤n} |Σ_{i=1}^k X_i| − εn^{1/p} }₊ < ∞.    (3.10)

Proof Let g(n) = n^{−1−1/p} l(n), and define the truncations Y_j^{(1)}, Y_j^{(2)} as in the proof of Theorem 3.1. Similarly to the proof of (3.3), we have

Σ_{n=1}^∞ g(n) E{ max_{1≤k≤n} |Σ_{i=1}^k X_i| − εn^{1/p} }₊
≤ C Σ_{n=1}^∞ g(n) ∫_{n^{1/p}}^∞ P{ max_{1≤k≤n} | Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+k} Y_j^{(2)} | > εx/2 } dx
  + C Σ_{n=1}^∞ g(n) ∫_{n^{1/p}}^∞ P{ max_{1≤k≤n} | Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+k} (Y_j^{(1)} − EY_j^{(1)}) | > εx/4 } dx
=: J_1 + J_2.    (3.11)

For J_1, by Markov's inequality, the C_r inequality, Lemma 2.3, and Lemma 2.1, one gets

J_1 ≤ C Σ_{n=1}^∞ g(n) ∫_{n^{1/p}}^∞ x^{−θ} E max_{1≤k≤n} | Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+k} Y_j^{(2)} |^θ dx
≤ C Σ_{n=1}^∞ n g(n) ∫_{n^{1/p}}^∞ x^{−θ} E|Y|^θ I{|Y| > x} dx
= C Σ_{n=1}^∞ n g(n) Σ_{m=n}^∞ ∫_{m^{1/p}}^{(m+1)^{1/p}} x^{−θ} E|Y|^θ I{|Y| > x} dx
≤ C Σ_{n=1}^∞ n g(n) Σ_{m=n}^∞ m^{(1−θ)/p−1} E|Y|^θ I{|Y| > m^{1/p}}
= C Σ_{m=1}^∞ m^{(1−θ)/p−1} E|Y|^θ I{|Y| > m^{1/p}} Σ_{n=1}^m n g(n)
≤ C Σ_{m=1}^∞ m^{−θ/p} l(m) E|Y|^θ I{|Y| > m^{1/p}}
= C Σ_{m=1}^∞ m^{−θ/p} l(m) Σ_{k=m}^∞ E|Y|^θ I{k^{1/p} < |Y| ≤ (k+1)^{1/p}}
= C Σ_{k=1}^∞ E|Y|^θ I{k^{1/p} < |Y| ≤ (k+1)^{1/p}} Σ_{m=1}^k m^{−θ/p} l(m)
≤ C Σ_{k=1}^∞ k^{1−θ/p} l(k) E|Y|^θ I{k^{1/p} < |Y| ≤ (k+1)^{1/p}}
≤ C E|Y|^p l(|Y|^p) < ∞.    (3.12)

For J_2, similarly to the proof of I_2, take r = 2; by Lemma 2.2, Lemma 2.3, and Lemma 2.1, we conclude

J_2 ≤ C Σ_{n=1}^∞ g(n) ∫_{n^{1/p}}^∞ x^{−2} E max_{1≤k≤n} | Σ_{i=−∞}^∞ a_i Σ_{j=i+1}^{i+k} (Y_j^{(1)} − EY_j^{(1)}) |² dx
≤ C Σ_{n=1}^∞ n g(n) ∫_{n^{1/p}}^∞ x^{−2} [ E|Y|² I{|Y| ≤ x} + x² P(|Y| > x) ] dx
= C Σ_{n=1}^∞ n g(n) Σ_{m=n}^∞ ∫_{m^{1/p}}^{(m+1)^{1/p}} x^{−2} [ E|Y|² I{|Y| ≤ x} + x² P(|Y| > x) ] dx
≤ C Σ_{n=1}^∞ n g(n) Σ_{m=n}^∞ [ m^{−1−1/p} E|Y|² I{|Y| ≤ (m+1)^{1/p}} + m^{1/p−1} P(|Y| > m^{1/p}) ]
= C Σ_{m=1}^∞ [ m^{−1−1/p} E|Y|² I{|Y| ≤ (m+1)^{1/p}} + m^{1/p−1} P(|Y| > m^{1/p}) ] Σ_{n=1}^m n g(n)
≤ C Σ_{m=1}^∞ [ m^{−2/p} l(m) E|Y|² I{|Y| ≤ (m+1)^{1/p}} + l(m) P(|Y| > m^{1/p}) ]
≤ C E|Y|^p l(|Y|^p) < ∞.    (3.13)

Hence, from (3.11)-(3.13), (3.10) holds. □

For the complete convergence and the strong law of large numbers, we have the following corollary, which follows immediately from the above theorems.

Corollary 3.3 Under the assumptions of Theorem 3.1, for any ε > 0 we have

Σ_{n=1}^∞ n^{αp−2} l(n) P{ max_{1≤k≤n} |Σ_{i=1}^k X_i| > εn^α } < ∞.    (3.14)

Under the assumptions of Theorem 3.2, for any ε > 0 we have

Σ_{n=1}^∞ n^{−1} l(n) P{ max_{1≤k≤n} |Σ_{i=1}^k X_i| > εn^{1/p} } < ∞;    (3.15)

in particular, the assumptions EY_i = 0 and E|Y|^p < ∞ imply the following Marcinkiewicz-Zygmund strong law of large numbers:

lim_{n→∞} n^{−1/p} Σ_{i=1}^n X_i = 0 a.s.    (3.16)
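The law (3.16) can be illustrated by simulation. The sketch below assumes iid Gaussian innovations (a special case covered by the NA / ρ⁻-mixing setting); the value p = 1.5, the coefficient window, and the checkpoint sample sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
p = 1.5                                  # E|Y|^p < inf holds for Gaussian Y
offsets = np.arange(-5, 6)
a = 0.5 ** np.abs(offsets)               # absolutely summable coefficients

N = 200_000
Y = rng.standard_normal(N + len(offsets))
# a is symmetric, so convolution equals the moving-average sum over the window
X = np.convolve(Y, a, mode="valid")[:N]
S = np.cumsum(X)                         # partial sums S_n
ns = np.array([10**3, 10**4, 10**5, N])
scaled = np.abs(S[ns - 1]) / ns ** (1 / p)   # n^{-1/p} |S_n|, drifting toward 0
```

Since S_n grows like n^{1/2} while the normalization is n^{1/p} with 1/p > 1/2, the scaled values shrink as n grows, consistent with (3.16).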

Remark 3.4 Corollary 3.3 provides complete convergence for the maximum of partial sums, which extends the corresponding results of Budsaba et al. [22, 23] and Theorem 1 of Baek et al. [1] under fewer restrictions. Since the class of ρ⁻-mixing random variables includes NA and ρ*-mixing random variables, our results also hold in the NA and ρ*-mixing settings; therefore Theorem 3.1 improves Theorem A of Li and Zhang [11] under fewer restrictions, and our results also partly extend and generalize Theorem B of Chen et al. [20] with q = 1.

Remark 3.5 Obviously, the assumption that {Y_i, −∞ < i < ∞} is stochastically dominated by a random variable Y is weaker than the assumption that the random variables {Y_i, −∞ < i < ∞} are identically distributed; therefore the above results also hold for identically distributed random variables.

Remark 3.6 Let a_0 = 1 and a_i = 0 for i ≠ 0; then S_n = Σ_{k=1}^n X_k = Σ_{k=1}^n Y_k. Hence the above results hold when {X_k, k ≥ 1} is itself a sequence of ρ⁻-mixing random variables which is stochastically dominated by a random variable Y.

Competing interests

The author declares that he has no competing interests.

Acknowledgements

This work was supported by National Natural Science Foundation of China (Grant No. 11101180) and the Science and Technology Development Program of Jilin Province (Grant No. 20130522096JH).

Received: 17 April 2015 Accepted: 23 July 2015 Published online: 08 August 2015

References

1. Baek, JI, Kim, TS, Liang, HY: On the convergence of moving average processes under dependent conditions. Aust. N. Z. J. Stat. 45, 331-342 (2003)

2. Burton, RM, Dehling, H: Large deviations for some weakly dependent random processes. Stat. Probab. Lett. 9, 397-401 (1990)

3. Ibragimov, IA: Some limit theorems for stationary processes. Theory Probab. Appl. 7, 349-382 (1962)

4. Rackauskas, A, Suquet, C: Functional central limit theorems for self-normalized partial sums of linear processes. Lith. Math. J. 51(2), 251-259 (2011)

5. Chen, PY, Hu, TC, Volodin, A: Limiting behaviour of moving average processes under φ-mixing assumption. Stat. Probab. Lett. 79, 105-111 (2009)

6. Guo, ML: On complete moment convergence of weighted sums for arrays of row-wise negatively associated random variables. Stochastics 86(3), 415-428 (2014)

7. Kim, TS, Ko, MH: Complete moment convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 78(7), 839-846 (2008)

8. Kim, TS, Ko, MH, Choi, YK: Complete moment convergence of moving average processes with dependent innovations. J. Korean Math. Soc. 45(2), 355-365 (2008)

9. Ko, MH, Kim, TS, Ryu, DH: On the complete moment convergence of moving average processes generated by ρ*-mixing sequences. Commun. Korean Math. Soc. 23(4), 597-606 (2008)

10. Li, DL, Rao, MB, Wang, XC: Complete convergence of moving average processes. Stat. Probab. Lett. 14, 111-114 (1992)

11. Li, YX, Zhang, LX: Complete moment convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 70, 191-197 (2004)

12. Qiu, DH, Liu, XD, Chen, PY: Complete moment convergence for maximal partial sums under NOD setup. J. Inequal. Appl. 2015, 58 (2015)

13. Wang, XJ, Hu, SH: Complete convergence and complete moment convergence for martingale difference sequence. Acta Math. Sin. Engl. Ser. 30(1), 119-132 (2014)

14. Yang, WZ, Hu, SH: Complete moment convergence of pairwise NQD random variables. Stochastics 87(2), 199-208 (2015)

15. Zhang, LX: Complete convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 30, 165-170 (1996)

16. Zhen, X, Zhang, LL, Lei, YJ, Chen, ZG: Complete moment convergence for weighted sums of negatively superadditive dependent random variables. J. Inequal. Appl. 2015, 117 (2015)

17. Zhou, XC: Complete moment convergence of moving average processes under φ-mixing assumptions. Stat. Probab. Lett. 80, 285-292 (2010)

18. Zhou, XC, Lin, JG: Complete moment convergence of moving average processes under ρ-mixing assumption. Math. Slovaca 61(6), 979-992 (2011)

19. Shen, AT, Wang, XH, Li, XQ, Wang, XJ: On the rate of complete convergence for weighted sums of arrays of rowwise φ-mixing random variables. Commun. Stat., Theory Methods 43, 2714-2725 (2014)

20. Chen, PY, Hu, TC, Volodin, A: Limiting behaviour of moving average processes under negative association assumption. Theory Probab. Math. Stat. 77, 154-166 (2007)

21. Wang, JF, Lu, FB: Inequalities of maximum of partial sums and weak convergence for a class of weak dependent random variables. Acta Math. Sin. 22, 693-700 (2006)

22. Budsaba, K, Chen, PY, Volodin, A: Limiting behavior of moving average processes based on a sequence of ρ⁻-mixing random variables. Thail. Stat. 5, 69-80 (2007)

23. Budsaba, K, Chen, PY, Volodin, A: Limiting behavior of moving average processes based on a sequence of ρ⁻-mixing and NA random variables. Lobachevskii J. Math. 26, 17-25 (2007)

24. Tan, XL, Zhang, Y, Zhang, Y: An almost sure central limit theorem of products of partial sums for ρ⁻-mixing sequences. J. Inequal. Appl. 2012, 51 (2012). doi:10.1186/1029-242X-2012-51

25. Wang, XJ, Li, XQ, Yang, WZ, Hu, SH: On complete convergence for arrays of rowwise weakly dependent random variables. Appl. Math. Lett. 25, 1916-1920 (2012)
