
Ko, Journal of Inequalities and Applications (2015) 2015:225
DOI 10.1186/s13660-015-0745-x

Journal of Inequalities and Applications (a SpringerOpen journal)

RESEARCH

Open Access

Complete moment convergence of moving average process generated by a class of random variables

Mi-Hwa Ko*


Correspondence: songhack@wonkwang.ac.kr. Division of Mathematics and Informational Statistics, Wonkwang University, Jeonbuk, 570-749, Korea

Abstract

In this paper, we establish the complete moment convergence of a moving average process generated by the class of random variables satisfying a Rosenthal-type maximal inequality and a weak mean dominating condition with a mean dominating variable.

MSC: 60F15

Keywords: complete moment convergence; moving average process; Rosenthal-type maximal inequality; weak mean domination; slowly varying


1 Introduction

Let $\{Y_i, -\infty < i < \infty\}$ be a doubly infinite sequence of random variables with zero means and finite variances and $\{a_i, -\infty < i < \infty\}$ an absolutely summable sequence of real numbers. Define a moving average process $\{X_n, n \ge 1\}$ by

$$X_n = \sum_{i=-\infty}^{\infty} a_i Y_{i+n}, \quad n \ge 1. \tag{1.1}$$
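As a quick illustrative sketch (not from the paper), the process (1.1) can be simulated by truncating the doubly infinite sum to a finite window. The geometric coefficients $a_i = \rho^{|i|}$ and the standard normal innovations below are assumptions chosen only so that $\sum_i |a_i| < \infty$ holds:

```python
import random

def moving_average(n, window=50, rho=0.5, seed=0):
    """Simulate X_1, ..., X_n of the process (1.1) with coefficients
    a_i = rho**abs(i) (absolutely summable) and i.i.d. N(0, 1) innovations
    Y_j, truncating the doubly infinite sum to |i| <= window."""
    rng = random.Random(seed)
    # innovations Y_j needed to form X_1, ..., X_n with the truncated window
    y = {j: rng.gauss(0.0, 1.0) for j in range(1 - window, n + window + 1)}
    # X_k = sum over i of a_i * Y_{i+k}
    return [sum(rho ** abs(i) * y[i + k] for i in range(-window, window + 1))
            for k in range(1, n + 1)]

xs = moving_average(1000)
```

The normalized partial sums $n^{-\alpha}\sum_{i=1}^{n} X_i$ of such a simulated path are the quantities whose convergence rates Theorem 3.1 quantifies.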

The concept of complete moment convergence is as follows. Let $\{Y_n, n \ge 1\}$ be a sequence of random variables and $a_n > 0$, $b_n > 0$. If $\sum_{n=1}^{\infty} a_n E\{b_n^{-1}|Y_n| - \epsilon\}^{+} < \infty$ for all $\epsilon > 0$, then we say that $\{Y_n, n \ge 1\}$ satisfies complete moment convergence. It is well known that complete moment convergence implies complete convergence.
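The plus-part expectation in this definition is usually handled through the tail-integral identity $E(X - \epsilon)^{+} = \int_{\epsilon}^{\infty} P(X > t)\,dt$, which is used repeatedly in Section 3. A small numerical check of the identity for an Exp(1) variable (an assumed example, where both sides equal $e^{-\epsilon}$):

```python
import math

# For X ~ Exp(1): P(X > t) = e^{-t}, so the tail integral over [eps, inf)
# equals e^{-eps}, which is also E(X - eps)^+ by memorylessness.
def plus_moment_exp(eps, upper=60.0, steps=200_000):
    """Trapezoid-rule approximation of the tail integral of P(X > t) = e^{-t}."""
    h = (upper - eps) / steps
    s = 0.5 * (math.exp(-eps) + math.exp(-upper))
    s += sum(math.exp(-(eps + i * h)) for i in range(1, steps))
    return s * h

assert abs(plus_moment_exp(0.3) - math.exp(-0.3)) < 1e-6
```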

Chow [1] first established the following complete moment convergence result for a sequence of i.i.d. random variables by generalizing the result of Baum and Katz [2].

Theorem 1.1 Suppose that $\{Y_n, n \ge 1\}$ is a sequence of i.i.d. random variables with $EY_1 = 0$. For $1 \le p < 2$ and $r \ge p$, if $E\{|Y_1|^r + |Y_1|\log(1 + |Y_1|)\} < \infty$, then

$$\sum_{n=1}^{\infty} n^{r/p-2-1/p} E\left(\left|\sum_{i=1}^{n} Y_i\right| - \epsilon n^{1/p}\right)^{+} < \infty \quad \text{for any } \epsilon > 0.$$

Recently, under various dependence assumptions, many authors have studied the complete moment convergence of moving average processes extensively; see, for example, Li and Zhang [3] for NA random variables, Zhou [4] for φ-mixing random variables, and Zhou and Lin [5] for ρ-mixing random variables.

© 2015 Ko. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

We recall that a sequence $\{Y_n, n \ge 1\}$ of random variables satisfies a weak mean dominating condition with a mean dominating random variable $Y$ if there is some positive constant $C$ such that

$$\frac{1}{n}\sum_{i=1}^{n} P(|Y_i| > x) \le C P(|Y| > x)$$

for all $x > 0$ and all $n \ge 1$ (see Kuczmaszewska [6]).

One of the most interesting inequalities in probability theory and mathematical statistics is the Rosenthal-type maximal inequality: for a sequence $\{Y_i, 1 \le i \le n\}$ of i.i.d. random variables with $E|Y_1|^q < \infty$ for some $q > 2$, there exists a positive constant $C_q$ depending only on $q$ such that

$$E\max_{1\le k\le n}\left|\sum_{i=1}^{k} Y_i\right|^q \le C_q\left\{\sum_{i=1}^{n} E|Y_i|^q + \left(\sum_{i=1}^{n} EY_i^2\right)^{q/2}\right\}.$$

The above inequality has been obtained for dependent random variables by many authors; see, for example, Peligrad [7] for strictly stationary ρ-mixing sequences, Peligrad and Gut [8] for ρ*-mixing sequences, Stoica [9] for martingale difference sequences, and so forth.

In this paper we will establish the complete moment convergence for a moving average process generated by the class of random variables satisfying a Rosenthal-type maximal inequality and a weak mean dominating condition.

2 Some lemmas

The following lemmas will be useful to prove the main results.

Recall that a real-valued function $h$, positive and measurable on $[0, \infty)$, is said to be slowly varying at infinity if for each $\lambda > 0$

$$\lim_{x\to\infty} \frac{h(\lambda x)}{h(x)} = 1.$$

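For instance, $h(x) = \log(1 + x)$ is slowly varying, while a power $x^t$ with $t \ne 0$ is not (its ratio tends to $\lambda^t$). A quick numerical check of the defining limit for $h(x) = \log(1 + x)$, whose deviation from 1 is $|\log \lambda|/\log x + o(1)$:

```python
import math

# h(x) = log(1 + x) is slowly varying: h(lam * x) / h(x) -> 1 as x -> infinity
# for every fixed lam > 0, and the deviation from 1 shrinks as x grows.
def ratio(lam, x):
    return math.log(1 + lam * x) / math.log(1 + x)

for lam in (0.5, 2.0, 10.0):
    # the deviation at x = 1e12 is less than half the deviation at x = 1e4
    assert abs(ratio(lam, 1e12) - 1.0) < abs(ratio(lam, 1e4) - 1.0) / 2
```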
Lemma 2.1 (Zhou [4]) If $h$ is a slowly varying function at infinity and $m$ a positive integer, then

(1) $\sum_{n=1}^{m} n^t h(n) \le C m^{t+1} h(m)$ for $t > -1$,

(2) $\sum_{n=m}^{\infty} n^t h(n) \le C m^{t+1} h(m)$ for $t < -1$.
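Lemma 2.1 can be illustrated numerically with the slowly varying function $h(n) = \log(n + 1)$; the constant 3 below is an empirical choice for this particular $h$, not the $C$ of the lemma, and the infinite tail in part (2) is truncated:

```python
import math

h = lambda n: math.log(n + 1)  # a slowly varying function

def head_sum(m, t):
    """sum_{n=1}^{m} n^t h(n), the quantity bounded in part (1)."""
    return sum(n ** t * h(n) for n in range(1, m + 1))

def tail_sum(m, t, N=200_000):
    """Truncated stand-in for sum_{n=m}^{infinity} n^t h(n) in part (2)."""
    return sum(n ** t * h(n) for n in range(m, N + 1))

for m in (100, 1000, 10_000):
    assert head_sum(m, -0.5) <= 3 * m ** 0.5 * h(m)   # case t = -0.5 > -1
for m in (100, 1000):
    assert tail_sum(m, -2.0) <= 3 * h(m) / m          # case t = -2 < -1
```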

Lemma 2.2 (Gut [10]) Let $\{X_n, n \ge 1\}$ be a sequence of random variables satisfying a weak mean dominating condition with a mean dominating random variable $X$, i.e., there exists some positive constant $C$ such that

$$\frac{1}{n}\sum_{i=1}^{n} P(|X_i| > x) \le C P(|X| > x)$$

for all $x > 0$ and all $n \ge 1$.

Let $r > 0$ and, for some $A > 0$, set

$$X_i' = X_i I(|X_i| \le A), \qquad X_i'' = X_i I(|X_i| > A), \qquad X_i^* = X_i I(|X_i| \le A) - A I(X_i < -A) + A I(X_i > A),$$
$$X' = X I(|X| \le A), \qquad X'' = X I(|X| > A), \qquad X^* = X I(|X| \le A) - A I(X < -A) + A I(X > A).$$

Then, for some $C > 0$:

(1) if $E|X|^r < \infty$, then $n^{-1}\sum_{i=1}^{n} E|X_i|^r \le C E|X|^r$;

(2) $n^{-1}\sum_{i=1}^{n} E|X_i'|^r \le C(E|X'|^r + A^r P(|X| > A))$ for any $A > 0$;

(3) $n^{-1}\sum_{i=1}^{n} E|X_i''|^r \le C E|X''|^r$ for any $A > 0$;

(4) $n^{-1}\sum_{i=1}^{n} E|X_i^*|^r \le C E|X^*|^r$ for any $A > 0$.
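The three truncations in Lemma 2.2 can be sketched in code (an illustration only); the checks confirm that $X'$ and $X''$ split $X$, and that the clipped value $X^*$ is dominated by both $|X|$ and $A$:

```python
def truncations(x, A):
    """The three truncations of Lemma 2.2 for a value x at level A > 0:
    X' keeps small values, X'' keeps large values, X* clips at +-A."""
    x_small = x if abs(x) <= A else 0.0   # X'  = X I(|X| <= A)
    x_large = x if abs(x) > A else 0.0    # X'' = X I(|X| >  A)
    x_clip = max(-A, min(x, A))           # X*  = X I(|X| <= A) - A I(X < -A) + A I(X > A)
    return x_small, x_large, x_clip

A = 2.0
for x in (-5.0, -2.0, -0.5, 0.0, 1.7, 2.0, 9.0):
    small, large, clip = truncations(x, A)
    assert small + large == x             # the two pieces reassemble X
    assert abs(clip) <= min(abs(x), A)    # X* is dominated by |X| and by A
```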

3 Main result

Theorem 3.1 Let $h$ be a function slowly varying at infinity, $p \ge 1$, $\alpha > 1/2$ and $\alpha p > 1$. Assume that $\{a_i, -\infty < i < \infty\}$ is an absolutely summable sequence of real numbers and that $\{Y_i, -\infty < i < \infty\}$ is a sequence of mean zero random variables satisfying a weak mean dominating condition with a mean dominating random variable $Y$, i.e., there exists some positive constant $C$ such that

$$\frac{1}{n}\sum_{j=i+1}^{i+n} P(|Y_j| > x) \le C P(|Y| > x) \quad \text{for all } x > 0, -\infty < i < \infty,$$

and all $n \ge 1$, and that $E|Y|^p h(|Y|^{1/\alpha}) < \infty$.

Suppose that $\{X_n, n \ge 1\}$ is the moving average process defined by (1.1).

Assume that for any $q > 2$ there exists a positive constant $C_q$ depending only on $q$ such that

$$E\max_{1\le k\le n}\left|\sum_{j=i+1}^{i+k}(Y_{xj} - EY_{xj})\right|^q \le C_q\left\{\sum_{j=i+1}^{i+n} E|Y_{xj}|^q + \left(\sum_{j=i+1}^{i+n} EY_{xj}^2\right)^{q/2}\right\}, \tag{3.1}$$

where $Y_{xj} = -x I(Y_j < -x) + Y_j I(|Y_j| \le x) + x I(Y_j > x)$ for all $x > 0$. Then for all $\epsilon > 0$

$$\sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} h(n) E\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k} X_i\right| - \epsilon n^{\alpha}\right)^{+} < \infty, \tag{3.2}$$

$$\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) E\left(\sup_{k\ge n}\left|k^{-\alpha}\sum_{i=1}^{k} X_i\right| - \epsilon\right)^{+} < \infty. \tag{3.3}$$

Proof of (3.2) Let $\tilde{Y}_{xj} = Y_j - Y_{xj}$ and $l(n) = n^{\alpha p-2-\alpha} h(n)$. Recall that $\sum_{k=1}^{m} X_k = \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+m} Y_j$ by (1.1). If $\alpha > 1$, by the assumption $\sum_{i=-\infty}^{\infty} |a_i| < \infty$ and Lemma 2.2 we have, for $x \ge n^{\alpha}$,

$$x^{-1}\left|E\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} Y_{xj}\right| \le C x^{-1} n\{E|Y| I[|Y| \le x] + x P(|Y| > x)\} \le C n^{1-\alpha} \to 0 \quad \text{as } n \to \infty. \tag{3.4i}$$

If $1/2 < \alpha \le 1$, then $\alpha p > 1$ implies $p > 1$. By the assumption $EY_i = 0$ for all $-\infty < i < \infty$ and Lemma 2.2 we obtain, for $x \ge n^{\alpha}$,

$$x^{-1}\left|E\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} Y_{xj}\right| = x^{-1}\left|E\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} \tilde{Y}_{xj}\right| \le C x^{-1}\sum_{i=-\infty}^{\infty} |a_i| \sum_{j=i+1}^{i+n} E|Y_j| I[|Y_j| > x]$$
$$\le C x^{-1} n E|Y| I[|Y| > x] \le C x^{1/\alpha - 1} E|Y| I[|Y| > x] \le C E|Y|^p I[|Y| > x] \to 0 \quad \text{as } x \to \infty. \tag{3.4ii}$$

It follows from (3.4i) and (3.4ii) that, for $x \ge n^{\alpha}$ large enough,

$$x^{-1}\max_{1\le k\le n}\left|E\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_{xj}\right| < \frac{\epsilon}{4}, \tag{3.5}$$

which yields

$$\sum_{n=1}^{\infty} l(n) E\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k} X_i\right| - \epsilon n^{\alpha}\right)^{+} = \sum_{n=1}^{\infty} l(n) \int_{\epsilon n^{\alpha}}^{\infty} P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k} X_i\right| > x\right) dx \quad (\text{letting } x = \epsilon x')$$
$$= \epsilon \sum_{n=1}^{\infty} l(n) \int_{n^{\alpha}}^{\infty} P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k} X_i\right| > \epsilon x'\right) dx'$$
$$\le C \sum_{n=1}^{\infty} l(n) \int_{n^{\alpha}}^{\infty} P\left(\max_{1\le k\le n}\left|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} \tilde{Y}_{xj}\right| > \frac{\epsilon x}{2}\right) dx + C \sum_{n=1}^{\infty} l(n) \int_{n^{\alpha}}^{\infty} P\left(\max_{1\le k\le n}\left|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} (Y_{xj} - EY_{xj})\right| > \frac{\epsilon x}{4}\right) dx$$
$$=: I_1 + I_2. \tag{3.6}$$

Now we show that $I_1 < \infty$. It is clear that $|\tilde{Y}_{xj}| \le |Y_j| I[|Y_j| > x]$. Hence for $I_1$, by Markov's inequality and Lemma 2.2, we have

$$I_1 \le C \sum_{n=1}^{\infty} l(n) \int_{n^{\alpha}}^{\infty} x^{-1} E\max_{1\le k\le n}\left|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} \tilde{Y}_{xj}\right| dx \le C \sum_{n=1}^{\infty} l(n) \int_{n^{\alpha}}^{\infty} x^{-1} \sum_{i=-\infty}^{\infty} |a_i| \sum_{j=i+1}^{i+n} E|\tilde{Y}_{xj}| \, dx$$
$$\le C \sum_{n=1}^{\infty} n l(n) \int_{n^{\alpha}}^{\infty} x^{-1} E|Y| I[|Y| > x] \, dx = C \sum_{n=1}^{\infty} n l(n) \sum_{m=n}^{\infty} \int_{m^{\alpha}}^{(m+1)^{\alpha}} x^{-1} E|Y| I[|Y| > x] \, dx$$
$$\le C \sum_{n=1}^{\infty} n l(n) \sum_{m=n}^{\infty} m^{-1} E|Y| I[|Y| > m^{\alpha}] = C \sum_{m=1}^{\infty} m^{-1} E|Y| I[|Y| > m^{\alpha}] \sum_{n=1}^{m} n^{\alpha p-1-\alpha} h(n). \tag{3.7}$$

If $p > 1$, note that $\alpha p - 1 - \alpha > -1$. By Lemma 2.1 and (3.7) we obtain

$$I_1 \le C \sum_{m=1}^{\infty} m^{\alpha p-1-\alpha} h(m) E|Y| I[|Y| > m^{\alpha}] = C \sum_{m=1}^{\infty} m^{\alpha p-1-\alpha} h(m) \sum_{k=m}^{\infty} E|Y| I[k^{\alpha} \le |Y| < (k+1)^{\alpha}]$$
$$= C \sum_{k=1}^{\infty} E|Y| I[k^{\alpha} \le |Y| < (k+1)^{\alpha}] \sum_{m=1}^{k} m^{\alpha p-1-\alpha} h(m) \le C \sum_{k=1}^{\infty} k^{\alpha p-\alpha} h(k) E|Y| I[k^{\alpha} \le |Y| < (k+1)^{\alpha}]$$
$$\le C E|Y|^p h(|Y|^{1/\alpha}) < \infty. \tag{3.8}$$

If $p = 1$, by (3.7) we also obtain, for any $\delta > 0$,

$$I_1 \le C \sum_{m=1}^{\infty} m^{-1} E|Y| I[|Y| > m^{\alpha}] \sum_{n=1}^{m} n^{-1} h(n) \le C \sum_{m=1}^{\infty} m^{-1} E|Y| I[|Y| > m^{\alpha}] \sum_{n=1}^{m} n^{-1+\alpha\delta} h(n)$$
$$\le C \sum_{m=1}^{\infty} m^{\alpha\delta-1} h(m) E|Y| I[|Y| > m^{\alpha}] \le C E|Y|^{1+\delta} h(|Y|^{1/\alpha}) < \infty. \tag{3.9}$$

So, by (3.8) and (3.9), we get

$$I_1 < \infty \quad \text{for } p \ge 1. \tag{3.10}$$

For $I_2$, by Markov's inequality, Hölder's inequality, and (3.1), we get for any $q > 2$

$$I_2 \le C \sum_{n=1}^{\infty} l(n) \int_{n^{\alpha}}^{\infty} x^{-q} E\max_{1\le k\le n}\left|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} (Y_{xj} - EY_{xj})\right|^q dx$$
$$\le C \sum_{n=1}^{\infty} l(n) \int_{n^{\alpha}}^{\infty} x^{-q}\left(\sum_{i=-\infty}^{\infty} |a_i|\right)^{q-1} \sum_{i=-\infty}^{\infty} |a_i| \, E\max_{1\le k\le n}\left|\sum_{j=i+1}^{i+k} (Y_{xj} - EY_{xj})\right|^q dx$$
$$\le C \sum_{n=1}^{\infty} l(n) \int_{n^{\alpha}}^{\infty} x^{-q} \sum_{i=-\infty}^{\infty} |a_i| \sum_{j=i+1}^{i+n} E|Y_{xj} - EY_{xj}|^q \, dx + C \sum_{n=1}^{\infty} l(n) \int_{n^{\alpha}}^{\infty} x^{-q} \sum_{i=-\infty}^{\infty} |a_i| \left(\sum_{j=i+1}^{i+n} E|Y_{xj} - EY_{xj}|^2\right)^{q/2} dx$$
$$=: I_{21} + I_{22}. \tag{3.11}$$

For $I_{21}$, we consider the following two cases.

If $p > 1$, take $q > \max\{2, p\}$. Then, by the assumption $\sum_{i=-\infty}^{\infty} |a_i| < \infty$, the $c_r$ inequality, and Lemmas 2.1 and 2.2, we get

$$I_{21} \le C \sum_{n=1}^{\infty} n l(n) \int_{n^{\alpha}}^{\infty} x^{-q}\{E|Y|^q I[|Y| \le x] + x^q P(|Y| > x)\}\, dx$$
$$\le C \sum_{n=1}^{\infty} n l(n) \sum_{m=n}^{\infty} \int_{m^{\alpha}}^{(m+1)^{\alpha}} \{x^{-q} E|Y|^q I[|Y| \le x] + P(|Y| > x)\}\, dx$$
$$\le C \sum_{n=1}^{\infty} n l(n) \sum_{m=n}^{\infty} \{m^{\alpha(1-q)-1} E|Y|^q I[|Y| \le (m+1)^{\alpha}] + m^{\alpha-1} P(|Y| > m^{\alpha})\}$$
$$= C \sum_{m=1}^{\infty} \{m^{\alpha(1-q)-1} E|Y|^q I[|Y| \le (m+1)^{\alpha}] + m^{\alpha-1} P(|Y| > m^{\alpha})\} \sum_{n=1}^{m} n l(n)$$
$$\le C \sum_{m=1}^{\infty} m^{\alpha(p-q)-1} h(m) \sum_{k=1}^{m} E|Y|^q I[k^{\alpha} < |Y| \le (k+1)^{\alpha}] + C \sum_{m=1}^{\infty} m^{\alpha p-1} h(m) \sum_{k=m}^{\infty} E I[k^{\alpha} < |Y| \le (k+1)^{\alpha}]$$
$$= C \sum_{k=1}^{\infty} E|Y|^q I[k^{\alpha} < |Y| \le (k+1)^{\alpha}] \sum_{m=k}^{\infty} m^{\alpha(p-q)-1} h(m) + C \sum_{k=1}^{\infty} E I[k^{\alpha} < |Y| \le (k+1)^{\alpha}] \sum_{m=1}^{k} m^{\alpha p-1} h(m)$$
$$\le C \sum_{k=1}^{\infty} k^{\alpha(p-q)} h(k) E|Y|^q I[k^{\alpha} < |Y| \le (k+1)^{\alpha}] + C \sum_{k=1}^{\infty} k^{\alpha p} h(k) E I[k^{\alpha} < |Y| \le (k+1)^{\alpha}]$$
$$\le C E|Y|^p h(|Y|^{1/\alpha}) < \infty. \tag{3.12}$$

For $I_{21}$, if $p = 1$, take $q > \max\{1+\delta, 2\}$; by the same argument as above one gets, for any $\delta > 0$,

$$I_{21} \le C \sum_{m=1}^{\infty} \{m^{\alpha(1-q)-1} E|Y|^q I[|Y| \le (m+1)^{\alpha}] + m^{\alpha-1} P(|Y| > m^{\alpha})\} \sum_{n=1}^{m} n l(n)$$
$$= C \sum_{m=1}^{\infty} \{m^{\alpha(1-q)-1} E|Y|^q I[|Y| \le (m+1)^{\alpha}] + m^{\alpha-1} P(|Y| > m^{\alpha})\} \sum_{n=1}^{m} n^{-1} h(n)$$
$$\le C \sum_{m=1}^{\infty} \{m^{\alpha(1-q)-1} E|Y|^q I[|Y| \le (m+1)^{\alpha}] + m^{\alpha-1} P(|Y| > m^{\alpha})\} \sum_{n=1}^{m} n^{-1+\alpha\delta} h(n)$$
$$\le C \sum_{m=1}^{\infty} \{m^{\alpha(1-q+\delta)-1} h(m) E|Y|^q I[|Y| \le (m+1)^{\alpha}] + m^{\alpha(1+\delta)-1} h(m) E I[|Y| > m^{\alpha}]\}$$
$$\le C E|Y|^{1+\delta} h(|Y|^{1/\alpha}) < \infty. \tag{3.13}$$

It follows from (3.12) and (3.13) that, for $p \ge 1$,

$$I_{21} < \infty. \tag{3.14}$$

It remains to show that $I_{22} < \infty$. We consider the following two cases.

If $1 \le p < 2$, take $q > 2$ and note that $\alpha p + \frac{q}{2} - \frac{\alpha pq}{2} - 1 = (\alpha p - 1)(1 - \frac{q}{2}) < 0$. Then, by the $c_r$ inequality and Lemma 2.2, we obtain

$$I_{22} \le C \sum_{n=1}^{\infty} n^{q/2} l(n) \int_{n^{\alpha}}^{\infty} x^{-q}\{(E|Y|^2 I[|Y| \le x])^{q/2} + x^q (P(|Y| > x))^{q/2}\}\, dx$$
$$\le C \sum_{n=1}^{\infty} n^{q/2} l(n) \sum_{m=n}^{\infty} \int_{m^{\alpha}}^{(m+1)^{\alpha}} \{x^{-q}(E|Y|^2 I[|Y| \le x])^{q/2} + (P(|Y| > x))^{q/2}\}\, dx$$
$$\le C \sum_{n=1}^{\infty} n^{q/2} l(n) \sum_{m=n}^{\infty} \{m^{\alpha(1-q)-1}(E|Y|^2 I[|Y| \le (m+1)^{\alpha}])^{q/2} + m^{\alpha-1}(P(|Y| > m^{\alpha}))^{q/2}\}$$
$$= C \sum_{m=1}^{\infty} \{m^{\alpha(1-q)-1}(E|Y|^2 I[|Y| \le (m+1)^{\alpha}])^{q/2} + m^{\alpha-1}(P(|Y| > m^{\alpha}))^{q/2}\} \sum_{n=1}^{m} n^{q/2} l(n)$$
$$\le C \sum_{m=1}^{\infty} m^{\alpha(p-q)+q/2-2} h(m)(E|Y|^2 I[|Y| \le (m+1)^{\alpha}])^{q/2} + C \sum_{m=1}^{\infty} m^{\alpha p+q/2-2} h(m)(P(|Y| > m^{\alpha}))^{q/2}$$
$$\le C \sum_{m=1}^{\infty} m^{\alpha p+q/2-\alpha pq/2-2} h(m)(E|Y|^p)^{q/2} < \infty. \tag{3.15}$$

If $p \ge 2$, take $q > \frac{2(\alpha p-1)}{2\alpha-1}$ ($> 2$), which yields $\alpha(p-q) + \frac{q}{2} - 2 < -1$. Then we get

$$I_{22} \le C \sum_{m=1}^{\infty} \{m^{\alpha(1-q)-1}(E|Y|^2 I[|Y| \le (m+1)^{\alpha}])^{q/2} + m^{\alpha-1}(P(|Y| > m^{\alpha}))^{q/2}\} \sum_{n=1}^{m} n^{q/2} l(n)$$
$$\le C \sum_{m=1}^{\infty} m^{\alpha(p-q)+q/2-2} h(m)(E|Y|^2 I[|Y| \le (m+1)^{\alpha}])^{q/2} + C \sum_{m=1}^{\infty} m^{\alpha p+q/2-2} h(m)(P(|Y| > m^{\alpha}))^{q/2}$$
$$\le C \sum_{m=1}^{\infty} m^{\alpha(p-q)+q/2-2} h(m)(E|Y|^2)^{q/2} < \infty. \tag{3.16}$$

Hence, by (3.15) and (3.16), we get

$$I_{22} < \infty \quad \text{for } p \ge 1. \tag{3.17}$$

Moreover, by (3.14) and (3.17), we also get

$$I_2 < \infty \quad \text{for } p \ge 1. \tag{3.18}$$

The proof of (3.2) is completed by (3.6), (3.10), and (3.18). □

Proof of (3.3) By Lemma 2.1 and (3.2), we have

$$\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) E\left(\sup_{k\ge n}\left|k^{-\alpha}\sum_{i=1}^{k} X_i\right| - \epsilon\right)^{+} = \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) \int_0^{\infty} P\left(\sup_{k\ge n}\left|k^{-\alpha}\sum_{i=1}^{k} X_i\right| > \epsilon + x\right) dx$$
$$= \sum_{k=1}^{\infty} \sum_{n=2^{k-1}}^{2^k-1} n^{\alpha p-2} h(n) \int_0^{\infty} P\left(\sup_{j\ge n}\left|j^{-\alpha}\sum_{i=1}^{j} X_i\right| > \epsilon + x\right) dx$$
$$\le \sum_{k=1}^{\infty} \left(\sum_{n=2^{k-1}}^{2^k-1} n^{\alpha p-2} h(n)\right) \int_0^{\infty} P\left(\sup_{j\ge 2^{k-1}}\left|j^{-\alpha}\sum_{i=1}^{j} X_i\right| > \epsilon + x\right) dx$$
$$\le C \sum_{k=1}^{\infty} 2^{k(\alpha p-1)} h(2^k) \int_0^{\infty} P\left(\sup_{j\ge 2^{k-1}}\left|j^{-\alpha}\sum_{i=1}^{j} X_i\right| > \epsilon + x\right) dx$$
$$\le C \sum_{k=1}^{\infty} 2^{k(\alpha p-1)} h(2^k) \sum_{m=k}^{\infty} \int_0^{\infty} P\left(\max_{2^{m-1}\le j<2^m}\left|j^{-\alpha}\sum_{i=1}^{j} X_i\right| > \epsilon + x\right) dx$$
$$= C \sum_{m=1}^{\infty} \left(\sum_{k=1}^{m} 2^{k(\alpha p-1)} h(2^k)\right) \int_0^{\infty} P\left(\max_{2^{m-1}\le j<2^m}\left|j^{-\alpha}\sum_{i=1}^{j} X_i\right| > \epsilon + x\right) dx$$
$$\le C \sum_{m=1}^{\infty} 2^{m(\alpha p-1)} h(2^m) \int_0^{\infty} P\left(\max_{2^{m-1}\le j<2^m}\left|\sum_{i=1}^{j} X_i\right| > (\epsilon + x) 2^{(m-1)\alpha}\right) dx \quad (\text{letting } y = 2^{(m-1)\alpha} x)$$
$$\le C \sum_{m=1}^{\infty} 2^{m(\alpha p-1-\alpha)} h(2^m) \int_0^{\infty} P\left(\max_{1\le j\le 2^m}\left|\sum_{i=1}^{j} X_i\right| > \epsilon 2^{(m-1)\alpha} + y\right) dy$$
$$\le C \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} h(n) \int_0^{\infty} P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j} X_i\right| > \epsilon' n^{\alpha} + y\right) dy$$
$$= C \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} h(n) E\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j} X_i\right| - \epsilon' n^{\alpha}\right)^{+} < \infty,$$

where $\epsilon' = \epsilon 2^{-\alpha}$, by (3.2). Hence the proof of (3.3) is completed. □
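As a side check (not part of the proof), the exponent algebra behind the two choices of $q$ in the $I_{22}$ estimates can be verified numerically: the identity $\alpha p + \frac{q}{2} - \frac{\alpha pq}{2} - 1 = (\alpha p - 1)(1 - \frac{q}{2})$, and the equivalence $\alpha(p-q) + \frac{q}{2} - 2 < -1 \iff q > \frac{2(\alpha p - 1)}{2\alpha - 1}$, which holds since $\alpha > 1/2$:

```python
import random

# Check the exponent identity and the threshold for q over random parameter
# values in the ranges of Theorem 3.1 (alpha > 1/2, p >= 1, q > 2).
random.seed(1)
for _ in range(1000):
    a = random.uniform(0.51, 5.0)   # alpha
    p = random.uniform(1.0, 5.0)
    q = random.uniform(2.0, 10.0)
    # identity used in the case 1 <= p < 2
    assert abs((a * p + q / 2 - a * p * q / 2 - 1) - (a * p - 1) * (1 - q / 2)) < 1e-9
    # threshold used in the case p >= 2
    assert (a * (p - q) + q / 2 - 2 < -1) == (q > 2 * (a * p - 1) / (2 * a - 1))
```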

Remark There are many sequences of dependent random variables satisfying (3.1) for all $q > 2$.

Examples include sequences of NA random variables (see Shao [11]), ρ*-mixing random variables (see Utev and Peligrad [12]), φ-mixing random variables (see Zhou [4]), and ρ-mixing random variables (see Zhou and Lin [5]).

Corollary 3.2 Under the assumptions of Theorem 3.1, for any $\epsilon > 0$,

$$\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\left(\max_{1\le i\le n}\left|\sum_{j=1}^{i} X_j\right| > \epsilon n^{\alpha}\right) < \infty. \tag{3.19}$$

Proof As in Remark 1.2 of Li and Zhang [3], we can obtain (3.19). □

Competing interests

The author declares that they have no competing interests.

Acknowledgements

This paper was supported by Wonkwang University in 2015.

Received: 13 April 2015 Accepted: 29 June 2015 Published online: 17 July 2015

References

1. Chow, YS: On the rate of moment complete convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 16, 177-201 (1988)

2. Baum, LE, Katz, M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 120(1), 108-123 (1965)

3. Li, YX, Zhang, LX: Complete moment convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 70, 191-197 (2004)

4. Zhou, XC: Complete moment convergence of moving average processes under φ-mixing assumption. Stat. Probab. Lett. 80, 285-292 (2010)

5. Zhou, XC, Lin, JG: Complete moment convergence of moving average processes under ρ-mixing assumption. Math. Slovaca 61(6), 979-992 (2011)

6. Kuczmaszewska, A: On complete convergence in Marcinkiewicz-Zygmund type SLLN for negatively associated random variables. Acta Math. Hung. 128, 116-130 (2010)

7. Peligrad, M: Convergence rates of the strong law for stationary mixing sequences. Z. Wahrscheinlichkeitstheor. Verw. Geb. 70, 307-314 (1985)

8. Peligrad, M, Gut, A: Almost-sure results for a class of dependent random variables. J. Theor. Probab. 12, 87-104 (1999)

9. Stoica, G: A note on the rate of convergence in the strong law of large numbers for martingales. J. Math. Anal. Appl. 381, 910-913 (2011)

10. Gut, A: Complete convergence for arrays. Period. Math. Hung. 25, 51-75 (1992)

11. Shao, QM: A comparison theorem on moment inequalities between negatively associated and independent random variables. J. Theor. Probab. 13, 343-356 (2000)

12. Utev, S, Peligrad, M: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 16, 101-115 (2003)