
Yao and Lin Journal of Inequalities and Applications (2015) 2015:264 DOI 10.1186/s13660-015-0786-1

The moment of maximum normed randomly weighted sums of martingale differences

Mei Yao1,2,3* and Lu Lin1,3

Correspondence: ymwalzn@163.com 1Shandong University Qilu Securities Institute for Financial Studies, Shandong University, Jinan, China

2School of Mathematics, Hefei University of Technology, Hefei, China

Full list of author information is available at the end of the article

Abstract

By using some inequalities and properties of martingale differences, we investigate the moment of maximum normed randomly weighted sums of martingale differences under some weak conditions. A sufficient condition for the finiteness of this moment with the maximum norm is presented in this paper.

MSC: 60F15; 60F25

Keywords: randomly weighted; maximum normed; martingale differences

1 Introduction

Let {X_n, n ≥ 1} be a sequence of independent and identically distributed random variables with EX_1 = 0. Denote S_n = ∑_{i=1}^n X_i, n ≥ 1. For 0 < r < 2 and p > 0, it is well known that

E|X_1|^r < ∞, if p < r,
E[|X_1|^r log(1 + |X_1|)] < ∞, if p = r,    (1.1)
E|X_1|^p < ∞, if p > r,

E( sup_{n≥1} |S_n| / n^{1/r} )^p < ∞,    (1.2)

E( sup_{n≥1} max_{1≤k≤n} |S_k| / n^{1/r} )^p < ∞    (1.3)

are all equivalent. Marcinkiewicz and Zygmund [1] obtain (1.1) ⇒ (1.3) for the case p > r = 1, Burkholder [2] gets (1.3) ⇒ (1.1) for the case p = r = 1, Gut [3] proves that (1.1)-(1.3) are equivalent in the case p > r, and Choi and Sung [4] show that (1.1)-(1.3) are equivalent in the case p < r. For 0 < r < 2 and p > 0, Chen and Gan [5] prove that (1.1)-(1.3) are equivalent in a dependent case, namely for ρ-mixing random variables.

By using the method of domination by a nonnegative random variable, we investigate the randomly weighted sums of martingale differences under some weak conditions. A sufficient condition for (1.2) and (1.3) is presented. To a certain extent, we generalize the result of Chen and Gan [5] for ρ-mixing random variables to the case of randomly weighted sums of martingale differences. For the details, please see our main result, Theorem 2.1 in Section 2.

© 2015 Yao and Lin. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Recall that the sequence {X_n, n ≥ 1} is stochastically dominated by a nonnegative random variable X if

sup_{n≥1} P(|X_n| > t) ≤ CP(X > t) for some positive constant C and all t ≥ 0

(see Adler and Rosalsky [6] and Adler et al. [7]). A bound on tail probabilities for quadratic forms in independent random variables is obtained by using the following condition: there exist C > 0 and γ > 0 such that, for all n ≥ 1 and all x ≥ 0,

P(|X_n| > x) ≤ C ∫_x^∞ e^{-γt} dt.

For details, see Hanson and Wright [8] and Wright [9].
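As a quick illustration of this definition (ours, not from the paper), the domination condition can be checked in closed form for a simple hypothetical family: exponential variables X_n with rates λ_n ≥ 1 are stochastically dominated by X ~ Exp(1) with C = 1, since P(|X_n| > t) = e^{-λ_n t} ≤ e^{-t} = P(X > t) for all t ≥ 0. A minimal sketch:

```python
import math

# Illustrative sketch (not from the paper): the family X_n ~ Exp(lambda_n)
# with lambda_n >= 1 is stochastically dominated by X ~ Exp(1) with C = 1,
# since P(|X_n| > t) = exp(-lambda_n * t) <= exp(-t) = P(X > t).

def tail_exp(rate, t):
    """Exact tail probability P(V > t) for V ~ Exp(rate), t >= 0."""
    return math.exp(-rate * t)

rates = [1.0, 1.5, 2.0, 5.0]  # hypothetical rates lambda_n >= 1
C = 1.0
for t in [0.1, 0.5, 1.0, 3.0, 10.0]:
    sup_tail = max(tail_exp(r, t) for r in rates)  # sup_n P(|X_n| > t)
    assert sup_tail <= C * tail_exp(1.0, t) + 1e-12
print("domination condition holds on the test grid")
```

The same exact-tail comparison works for any family whose tails are available in closed form; here C = 1 because every rate is at least 1.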

Meanwhile, the definition of martingale differences can be found in many books, such as Stout [10], Hall and Heyde [11], Shiryaev [12], Gut [13], and so on. There are many results on martingale differences. For example, Ghosal and Chandra [14] gave the complete convergence of martingale arrays; Stoica [15, 16] investigated Baum-Katz-Nagaev-type results for martingale differences; Wang et al. [17] also studied the complete moment convergence for martingale differences; Yang et al. [18] obtained the complete convergence for the moving average process of martingale differences; Yang et al. [19] investigated the complete moment convergence for randomly weighted sums of martingale differences, etc.

On the other hand, randomly weighted sums have been an attractive research topic in the literature of applied probability. For example, Thanh and Yin [20] studied the almost sure and complete convergence of randomly weighted sums of independent random elements in Banach spaces; Thanh et al. [21] investigated the convergence analysis of double-indexed and randomly weighted sums of mixing processes and gave its application to state observers of linear time-invariant systems; Kevei and Mason [22] and Hormann and Swan [23] studied the asymptotic properties of randomly weighted sums and self-normalized sums; Cabrera et al. [24] and Shen et al. [25] investigated the conditional convergence for randomly weighted sums; Gao and Wang [26] and Tang and Yuan [27] investigated randomly weighted sums of random variables and gave applications to ruin theory and capital allocation; Chen [28] obtained some asymptotic results for randomly weighted sums of dependent random variables with dominated variation, and so on.

Throughout the paper, I(A) is the indicator function of the set A, and C, C_1, C_2, ... denote positive constants not depending on n. The following lemmas are the basic techniques used to prove our results.

Lemma 1.1 (cf. Hall and Heyde (Theorem 2.11 in [11])) If {X_i, F_i, 1 ≤ i ≤ n} is a martingale difference and p > 0, then there exists a constant C depending only on p such that

E( max_{1≤i≤n} |∑_{j=1}^i X_j|^p ) ≤ C{ E( ∑_{i=1}^n X_i² )^{p/2} + E( max_{1≤i≤n} |X_i|^p ) },    n ≥ 1.

Lemma 1.2 (cf. Adler and Rosalsky (Lemma 1 in [6]) and Adler et al. (Lemma 3 in [7])) Let {X_n, n ≥ 1} be a sequence of random variables which is stochastically dominated by a nonnegative random variable X. Then, for any a > 0 and b > 0, the following two statements hold:

E[|X_n|^a I(|X_n| ≤ b)] ≤ C_1{ E[X^a I(X ≤ b)] + b^a P(X > b) },
E[|X_n|^a I(|X_n| > b)] ≤ C_2 E[X^a I(X > b)],

where C_1 and C_2 are positive constants not depending on n. Consequently, for all n ≥ 1, one has E|X_n|^a ≤ C_3 EX^a, where C_3 is a positive constant not depending on n.
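For intuition, the first inequality of Lemma 1.2 can be verified exactly on a small discrete example (our hypothetical illustration, not part of the proof): if |X_n| has the same law as the nonnegative X, the domination constant is C = 1, and the truncated-moment bound holds even with C_1 = 1.

```python
# Illustrative check of Lemma 1.2 (hypothetical discrete example, C = C1 = 1):
# X_n takes -2 w.p. 0.3 and 1 w.p. 0.7; X takes 2 w.p. 0.3 and 1 w.p. 0.7,
# so P(|X_n| > t) = P(X > t) for every t and X dominates X_n with C = 1.

xn = [(-2.0, 0.3), (1.0, 0.7)]  # (value, probability) pairs for X_n
x = [(2.0, 0.3), (1.0, 0.7)]    # dominating nonnegative X

def trunc_moment(dist, a, b):
    """E[|V|^a I(|V| <= b)] for a discrete distribution."""
    return sum(p * abs(v) ** a for v, p in dist if abs(v) <= b)

def tail_prob(dist, b):
    """P(V > b) for a discrete distribution."""
    return sum(p for v, p in dist if v > b)

a, b = 2.0, 1.5
lhs = trunc_moment(xn, a, b)                            # E[|X_n|^a I(|X_n| <= b)]
rhs = trunc_moment(x, a, b) + b ** a * tail_prob(x, b)  # E[X^a I(X <= b)] + b^a P(X > b)
assert lhs <= rhs
print(lhs, rhs)
```

Here lhs = 0.7 and rhs = 0.7 + 1.5² · 0.3 = 1.375, so the inequality holds with room to spare; the b^a P(X > b) term is exactly what absorbs the truncated mass of the large value.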

2 The main result and its proof

Theorem 2.1 Let 0 < r < 2, 0 < p < 2, and let {X_n, F_n, n ≥ 1} be a martingale difference sequence which is stochastically dominated by a nonnegative random variable X such that

for p < r:  C_1 EX < ∞, if 0 < r < 1;  C_2 E[X log(1 + X)] < ∞, if r = 1;  C_3 EX^r < ∞, if r > 1;
for p = r:  C_4 EX < ∞, if 0 < r < 1;  C_5 E[X log²(1 + X)] < ∞, if r = 1;  C_6 E[X^r log(1 + X)] < ∞, if r > 1;    (2.1)
for p > r:  C_7 EX < ∞, if 0 < p < 1;  C_8 E[X log(1 + X)] < ∞, if p = 1;  C_9 EX^p < ∞, if p > 1.

Assume that {A_n, n ≥ 1} is a sequence of independent random variables, which is also independent of the sequence {X_n, n ≥ 1}. In addition, it is assumed that

∑_{i=1}^n EA_i² = O(n).    (2.2)

Let S_n = ∑_{i=1}^n A_i X_i, n ≥ 1. Then one has the result (1.3), which implies the result (1.2).

Remark 2.1 In Theorem 2.1, {A_n X_n, F_n, n ≥ 1} may fail to be a martingale difference, since A_n is not required to be measurable with respect to F_{n-1}. We use the property of independence and the method of martingales to study the maximum normed moments (1.2) and (1.3) and give a sufficient condition (2.1) for them.
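The quantity controlled by Theorem 2.1 is easy to explore numerically. The sketch below is purely illustrative: the Rademacher differences, the uniform weights on [0, 2], the choice r = p = 1, and the truncation level n_max are all assumptions of the simulation, not choices made in the paper. It estimates a truncated version of the expectation in (1.3) by Monte Carlo.

```python
import random

# Numerical illustration of the quantity in (1.3) (not a proof device):
# X_i are Rademacher signs (a bounded martingale-difference sequence), and
# the weights A_i are independent Uniform[0, 2], so sum E A_i^2 = O(n) holds.
# We estimate E( sup_n max_{1<=k<=n} |S_k| / n^{1/r} )^p with the sup
# truncated at n_max; all distributional choices here are hypothetical.

def estimate_moment(r=1.0, p=1.0, n_max=1000, n_paths=200, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s, running_max, sup_val = 0.0, 0.0, 0.0
        for n in range(1, n_max + 1):
            a = rng.uniform(0.0, 2.0)                 # weight A_n
            x = 1.0 if rng.random() < 0.5 else -1.0   # martingale difference X_n
            s += a * x                                # S_n = sum_{i<=n} A_i X_i
            running_max = max(running_max, abs(s))    # max_{1<=k<=n} |S_k|
            sup_val = max(sup_val, running_max / n ** (1.0 / r))
        total += sup_val ** p
    return total / n_paths

est = estimate_moment()
assert est >= 0.0 and est < float("inf")
print("truncated empirical moment:", est)
```

Rerunning with larger n_max leaves the estimate essentially stable, which is consistent with (1.3) being finite for this bounded, well-behaved choice of weights and differences.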

Proof It can be argued that

E( sup_{n≥1} max_{1≤k≤n} |S_k| / n^{1/r} )^p
= ∫_0^∞ P( sup_{n≥1} max_{1≤k≤n} |S_k| / n^{1/r} > t^{1/p} ) dt
≤ 2^{p/r} + ∑_{k=1}^∞ ∫_{2^{p/r}}^∞ P( max_{1≤n≤2^k} |S_n| > 2^{(k-1)/r} t^{1/p} ) dt    (let s = 2^{(k-1)p/r} t)
= 2^{p/r} + 2^{p/r} ∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ P( max_{1≤n≤2^k} |S_n| > s^{1/p} ) ds.    (2.3)

Let G_0 = {∅, Ω}, G_n = σ(X_1, ..., X_n), n ≥ 1, and X_{si} = X_i I(|X_i| ≤ s^{1/p}), where s^{1/p} > 0. It can be argued that

A_i X_i = A_i X_i I(|X_i| > s^{1/p}) + [A_i X_{si} - E(A_i X_{si} | G_{i-1})] + E(A_i X_{si} | G_{i-1}),    1 ≤ i ≤ n.

Consequently,

∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ P( max_{1≤n≤2^k} |S_n| > s^{1/p} ) ds
≤ ∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ P( max_{1≤n≤2^k} |∑_{i=1}^n A_i X_i I(|X_i| > s^{1/p})| > s^{1/p}/2 ) ds
+ ∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ P( max_{1≤n≤2^k} |∑_{i=1}^n [A_i X_{si} - E(A_i X_{si} | G_{i-1})]| > s^{1/p}/4 ) ds
+ ∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ P( max_{1≤n≤2^k} |∑_{i=1}^n E(A_i X_{si} | G_{i-1})| > s^{1/p}/4 ) ds
=: I_1 + I_2 + I_3.    (2.4)

Combining Hölder's inequality with (2.2), we have

∑_{i=1}^n E|A_i| ≤ ( ∑_{i=1}^n EA_i² )^{1/2} ( ∑_{i=1}^n 1 )^{1-1/2} = O(n).    (2.5)

In view of the fact that {A_n, n ≥ 1} is independent of the sequence {X_n, n ≥ 1}, by Markov's inequality, (2.5), and Lemma 1.2, we get

I_1 ≤ 2 ∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ s^{-1/p} E( max_{1≤n≤2^k} |∑_{i=1}^n A_i X_i I(|X_i| > s^{1/p})| ) ds
≤ 2 ∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ s^{-1/p} ∑_{i=1}^{2^k} E|A_i| E[|X_i| I(|X_i| > s^{1/p})] ds
≤ C_1 ∑_{k=1}^∞ 2^{k-kp/r} ∑_{m=k}^∞ ∫_{2^{mp/r}}^{2^{(m+1)p/r}} s^{-1/p} E[X I(X > s^{1/p})] ds
≤ C_2 ∑_{k=1}^∞ 2^{k-kp/r} ∑_{m=k}^∞ 2^{mp/r-m/r} E[X I(X > 2^{m/r})]
= C_2 ∑_{m=1}^∞ 2^{mp/r-m/r} E[X I(X > 2^{m/r})] ∑_{k=1}^m 2^{k-kp/r}
≤ C_3 ∑_{m=1}^∞ 2^{m-m/r} E[X I(X > 2^{m/r})], if p < r,
  C_4 ∑_{m=1}^∞ m 2^{m-m/r} E[X I(X > 2^{m/r})], if p = r,
  C_5 ∑_{m=1}^∞ 2^{mp/r-m/r} E[X I(X > 2^{m/r})], if p > r.    (2.6)

For the case p < r, if 0 < r < 1, then

∑_{m=1}^∞ 2^{m-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ 2^{m-m/r} ∑_{k=m}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
= ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ∑_{m=1}^k 2^{m(1-1/r)}
≤ C_1 ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ≤ C_1 EX.

If r = 1, then

∑_{m=1}^∞ 2^{m-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ E[X I(X > 2^m)] = ∑_{m=1}^∞ ∑_{k=m}^∞ E[X I(2^k < X ≤ 2^{k+1})]
= ∑_{k=1}^∞ E[X I(2^k < X ≤ 2^{k+1})] ∑_{m=1}^k 1 = ∑_{k=1}^∞ k E[X I(2^k < X ≤ 2^{k+1})]
≤ C_2 ∑_{k=1}^∞ E[X log(1 + X) I(2^k < X ≤ 2^{k+1})]
≤ C_2 E[X log(1 + X)].

Otherwise, for r > 1, one has

∑_{m=1}^∞ 2^{m-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ 2^{m-m/r} ∑_{k=m}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
= ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ∑_{m=1}^k 2^{m-m/r}
≤ C_3 ∑_{k=1}^∞ 2^{k-k/r} E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
≤ C_3 ∑_{k=1}^∞ E[X^r I(2^{k/r} < X ≤ 2^{(k+1)/r})] ≤ C_3 EX^r.

Similarly, for the case p = r, if 0 < r < 1, then

∑_{m=1}^∞ m 2^{m-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ m 2^{m-m/r} ∑_{k=m}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
= ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ∑_{m=1}^k m 2^{m(1-1/r)}
≤ C_4 ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ≤ C_4 EX.

If r = 1, then

∑_{m=1}^∞ m 2^{m-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ m ∑_{k=m}^∞ E[X I(2^k < X ≤ 2^{k+1})]
= ∑_{k=1}^∞ E[X I(2^k < X ≤ 2^{k+1})] ∑_{m=1}^k m ≤ C_5 ∑_{k=1}^∞ k² E[X I(2^k < X ≤ 2^{k+1})]
≤ C_5 ∑_{k=1}^∞ E[X log²(1 + X) I(2^k < X ≤ 2^{k+1})] ≤ C_5 E[X log²(1 + X)].

Otherwise, for r > 1, it follows that

∑_{m=1}^∞ m 2^{m-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ m 2^{m-m/r} ∑_{k=m}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
≤ ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] k ∑_{m=1}^k 2^{m-m/r}
≤ C_6 ∑_{k=1}^∞ k 2^{k-k/r} E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ≤ C_7 E[X^r log(1 + X)].

On the other hand, for the case p > r, if 0 < p < 1, then

∑_{m=1}^∞ 2^{mp/r-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ 2^{m(p-1)/r} ∑_{k=m}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
= ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ∑_{m=1}^k 2^{m(p-1)/r}
≤ C_8 ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ≤ C_8 EX.

If p = 1, then

∑_{m=1}^∞ 2^{mp/r-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ ∑_{k=m}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
= ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ∑_{m=1}^k 1
= ∑_{k=1}^∞ k E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
≤ C_9 E[X log(1 + X)].

For p > 1, one has

∑_{m=1}^∞ 2^{mp/r-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ 2^{m(p-1)/r} ∑_{k=m}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
= ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ∑_{m=1}^k 2^{m(p-1)/r}
≤ C_10 ∑_{k=1}^∞ 2^{k(p-1)/r} E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
≤ C_10 ∑_{k=1}^∞ E[X^p I(2^{k/r} < X ≤ 2^{(k+1)/r})] ≤ C_10 EX^p.

Consequently, in view of (2.6), the conditions of Theorem 2.1, and the inequalities above, we have

I_1 ≤ C_1 ∑_{m=1}^∞ 2^{m-m/r} E[X I(X > 2^{m/r})], if p < r,
I_1 ≤ C_2 ∑_{m=1}^∞ m 2^{m-m/r} E[X I(X > 2^{m/r})], if p = r,
I_1 ≤ C_3 ∑_{m=1}^∞ 2^{mp/r-m/r} E[X I(X > 2^{m/r})], if p > r,

and therefore

for p < r: I_1 ≤ C_4 EX < ∞, if 0 < r < 1; C_5 E[X log(1 + X)] < ∞, if r = 1; C_6 EX^r < ∞, if r > 1;
for p = r: I_1 ≤ C_7 EX < ∞, if 0 < r < 1; C_8 E[X log²(1 + X)] < ∞, if r = 1; C_9 E[X^r log(1 + X)] < ∞, if r > 1;    (2.7)
for p > r: I_1 ≤ C_10 EX < ∞, if 0 < p < 1; C_11 E[X log(1 + X)] < ∞, if p = 1; C_12 EX^p < ∞, if p > 1.

It can be checked that, for fixed real numbers a_1, ..., a_n, {a_i X_{si} - E(a_i X_{si} | G_{i-1}), G_i, 1 ≤ i ≤ n} is also a martingale difference. So one has, by Lemma 1.1,

E( max_{1≤n≤2^k} |∑_{i=1}^n [A_i X_{si} - E(A_i X_{si} | G_{i-1})]|² )
= E[ E( max_{1≤n≤2^k} |∑_{i=1}^n [a_i X_{si} - E(a_i X_{si} | G_{i-1})]|² ) | A_1 = a_1, ..., A_{2^k} = a_{2^k} ]
≤ C E[ ∑_{i=1}^{2^k} E(a_i² X_{si}²) | A_1 = a_1, ..., A_{2^k} = a_{2^k} ]
= C ∑_{i=1}^{2^k} EA_i² EX_{si}²,    (2.8)

by using the fact that {A_1, ..., A_n} is independent of {X_{s1}, ..., X_{sn}}.

Consequently, by Markov's inequality, (2.2), (2.8), and Lemma 1.2, one can check that

I_2 ≤ C_1 ∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ s^{-2/p} E( max_{1≤n≤2^k} |∑_{i=1}^n [A_i X_{si} - E(A_i X_{si} | G_{i-1})]|² ) ds
≤ C_2 ∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ s^{-2/p} ∑_{i=1}^{2^k} EA_i² EX_{si}² ds
≤ C_3 ∑_{k=1}^∞ 2^{-kp/r+k} ∫_{2^{kp/r}}^∞ s^{-2/p} E[X² I(X ≤ s^{1/p})] ds
+ C_4 ∑_{k=1}^∞ 2^{-kp/r+k} ∫_{2^{kp/r}}^∞ P(X > s^{1/p}) ds
=: C_3 I_21 + C_4 I_22.    (2.9)

For I_21, it follows that

I_21 = ∑_{k=1}^∞ 2^{-kp/r+k} ∑_{m=k}^∞ ∫_{2^{mp/r}}^{2^{(m+1)p/r}} s^{-2/p} E[X² I(X ≤ s^{1/p})] ds
≤ C ∑_{k=1}^∞ 2^{-kp/r+k} ∑_{m=k}^∞ 2^{mp/r-2m/r} E[X² I(X ≤ 2^{(m+1)/r})]
= C ∑_{m=1}^∞ 2^{m(p-2)/r} E[X² I(X ≤ 2^{(m+1)/r})] ∑_{k=1}^m 2^{k(1-p/r)}.

If p < r, one has, by r < 2,

∑_{m=1}^∞ 2^{m(p-2)/r} E[X² I(X ≤ 2^{(m+1)/r})] ∑_{k=1}^m 2^{k(1-p/r)}
≤ C_1 ∑_{m=1}^∞ 2^{m(r-2)/r} E[X² I(X ≤ 2^{(m+1)/r})]
= C_1 ∑_{m=1}^∞ 2^{m(r-2)/r} E[X² I(X ≤ 2^{1/r})]
+ C_1 ∑_{m=1}^∞ 2^{m(r-2)/r} ∑_{i=1}^m E[X² I(2^{i/r} < X ≤ 2^{(i+1)/r})]
≤ C_2 + C_1 ∑_{i=1}^∞ E[X² I(2^{i/r} < X ≤ 2^{(i+1)/r})] ∑_{m=i}^∞ 2^{m(r-2)/r}
≤ C_2 + C_3 2^{(2-r)/r} ∑_{i=1}^∞ 2^{(i+1)(r-2)/r} E[X² I(2^{i/r} < X ≤ 2^{(i+1)/r})]
≤ C_4 + C_5 EX^r.

For the case p = r, by r < 2, one has

∑_{m=1}^∞ 2^{m(p-2)/r} E[X² I(X ≤ 2^{(m+1)/r})] ∑_{k=1}^m 2^{k(1-p/r)}
≤ C_1 ∑_{m=1}^∞ m 2^{m(r-2)/r} E[X² I(X ≤ 2^{(m+1)/r})]
= C_1 ∑_{m=1}^∞ m 2^{m(r-2)/r} E[X² I(X ≤ 2^{1/r})]
+ C_1 ∑_{m=1}^∞ m 2^{m(r-2)/r} ∑_{i=1}^m E[X² I(2^{i/r} < X ≤ 2^{(i+1)/r})]
≤ C_2 + C_1 ∑_{i=1}^∞ E[X² I(2^{i/r} < X ≤ 2^{(i+1)/r})] ∑_{m=i}^∞ m 2^{m(r-2)/r}
≤ C_2 + C_3 2^{(2-r)/r} ∑_{i=1}^∞ i 2^{(i+1)(r-2)/r} E[X² I(2^{i/r} < X ≤ 2^{(i+1)/r})]
≤ C_4 + C_5 E[X^r log(1 + X)].

For the case p > r, it can be checked, by p < 2, that

∑_{m=1}^∞ 2^{m(p-2)/r} E[X² I(X ≤ 2^{(m+1)/r})] ∑_{k=1}^m 2^{k(1-p/r)} ≤ C EX^p.

Consequently, it follows that

I_21 ≤ C_1 ∑_{m=1}^∞ 2^{m(r-2)/r} E[X² I(X ≤ 2^{(m+1)/r})], if p < r,
I_21 ≤ C_2 ∑_{m=1}^∞ m 2^{m(r-2)/r} E[X² I(X ≤ 2^{(m+1)/r})], if p = r,
I_21 ≤ C_3 ∑_{m=1}^∞ 2^{m(p-2)/r} E[X² I(X ≤ 2^{(m+1)/r})], if p > r,

and hence

I_21 ≤ C_4 + C_5 EX^r < ∞, if p < r; C_6 + C_7 E[X^r log(1 + X)] < ∞, if p = r; C_8 EX^p < ∞, if p > r.    (2.10)

On the other hand, similar to the proofs of (2.6) and (2.7), we obtain

I_22 = ∑_{k=1}^∞ 2^{k-kp/r} ∫_{2^{kp/r}}^∞ P(X > s^{1/p}) ds
≤ C_1 ∑_{k=1}^∞ 2^{k-kp/r} ∫_{2^{kp/r}}^∞ s^{-1/p} E[X I(X > s^{1/p})] ds
≤ C_2 ∑_{m=1}^∞ 2^{mp/r-m/r} E[X I(X > 2^{m/r})] ∑_{k=1}^m 2^{k-kp/r}
≤ C_3 ∑_{m=1}^∞ 2^{m-m/r} E[X I(X > 2^{m/r})], if p < r;
  C_4 ∑_{m=1}^∞ m 2^{m-m/r} E[X I(X > 2^{m/r})], if p = r;
  C_5 ∑_{m=1}^∞ 2^{mp/r-m/r} E[X I(X > 2^{m/r})], if p > r.    (2.11)

For the case p < r, if 0 < r < 1, then

∑_{m=1}^∞ 2^{m-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ 2^{m-m/r} ∑_{k=m}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
= ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ∑_{m=1}^k 2^{m(1-1/r)}
≤ C_1 ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ≤ C_1 EX.

If r = 1, then

∑_{m=1}^∞ 2^{m-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ E[X I(X > 2^m)] = ∑_{m=1}^∞ ∑_{k=m}^∞ E[X I(2^k < X ≤ 2^{k+1})]
= ∑_{k=1}^∞ E[X I(2^k < X ≤ 2^{k+1})] ∑_{m=1}^k 1 = ∑_{k=1}^∞ k E[X I(2^k < X ≤ 2^{k+1})]
≤ C_2 ∑_{k=1}^∞ E[X log(1 + X) I(2^k < X ≤ 2^{k+1})] ≤ C_2 E[X log(1 + X)].

Otherwise, for r > 1, we have

∑_{m=1}^∞ 2^{m-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ 2^{m-m/r} ∑_{k=m}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
= ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ∑_{m=1}^k 2^{m-m/r}
≤ C_3 ∑_{k=1}^∞ 2^{k-k/r} E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
≤ C_3 ∑_{k=1}^∞ E[X^r I(2^{k/r} < X ≤ 2^{(k+1)/r})] ≤ C_3 EX^r.

Similarly, for the case p = r, if 0 < r < 1, then

∑_{m=1}^∞ m 2^{m-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ m 2^{m-m/r} ∑_{k=m}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
= ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ∑_{m=1}^k m 2^{m(1-1/r)}
≤ C_4 ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ≤ C_4 EX.

For r = 1, it follows that

∑_{m=1}^∞ m 2^{m-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ m ∑_{k=m}^∞ E[X I(2^k < X ≤ 2^{k+1})]
= ∑_{k=1}^∞ E[X I(2^k < X ≤ 2^{k+1})] ∑_{m=1}^k m ≤ C_5 ∑_{k=1}^∞ k² E[X I(2^k < X ≤ 2^{k+1})]
≤ C_5 ∑_{k=1}^∞ E[X log²(1 + X) I(2^k < X ≤ 2^{k+1})] ≤ C_5 E[X log²(1 + X)].

Otherwise, for r > 1, it follows that

∑_{m=1}^∞ m 2^{m-m/r} E[X I(X > 2^{m/r})]
= ∑_{m=1}^∞ m 2^{m-m/r} ∑_{k=m}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})]
≤ ∑_{k=1}^∞ E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] k ∑_{m=1}^k 2^{m-m/r}
≤ C_6 ∑_{k=1}^∞ k 2^{k-k/r} E[X I(2^{k/r} < X ≤ 2^{(k+1)/r})] ≤ C_7 E[X^r log(1 + X)].

Therefore, by (2.7) for the case p > r, (2.11), and the inequalities above, we obtain

I_22 ≤ C_1 ∑_{m=1}^∞ 2^{m-m/r} E[X I(X > 2^{m/r})], if p < r,
I_22 ≤ C_2 ∑_{m=1}^∞ m 2^{m-m/r} E[X I(X > 2^{m/r})], if p = r,
I_22 ≤ C_3 ∑_{m=1}^∞ 2^{mp/r-m/r} E[X I(X > 2^{m/r})], if p > r,

and consequently

for p < r: I_22 ≤ C_4 EX < ∞, if 0 < r < 1; C_5 E[X log(1 + X)] < ∞, if r = 1; C_6 EX^r < ∞, if r > 1;
for p = r: I_22 ≤ C_7 EX < ∞, if 0 < r < 1; C_8 E[X log²(1 + X)] < ∞, if r = 1; C_9 E[X^r log(1 + X)] < ∞, if r > 1;    (2.12)
for p > r: I_22 ≤ C_10 EX < ∞, if 0 < p < 1; C_11 E[X log(1 + X)] < ∞, if p = 1; C_12 EX^p < ∞, if p > 1.

Obviously, it can be seen that {X_n, G_n, n ≥ 1} is also a martingale difference, since {X_n, F_n, n ≥ 1} is a martingale difference. Combining this with the fact that {A_n, n ≥ 1} is independent of {X_n, n ≥ 1}, we have

E(A_i X_i | G_{i-1}) = E[E(A_i X_i | G_i) | G_{i-1}] = E[X_i E(A_i | G_i) | G_{i-1}] = EA_i · E(X_i | G_{i-1}) = 0, a.s., 1 ≤ i ≤ n.

In view of the proofs of (2.6) and (2.7) and the identity above, we obtain, by Markov's inequality and (2.5),

I_3 ≤ 4 ∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ s^{-1/p} E( max_{1≤n≤2^k} |∑_{i=1}^n E(A_i X_{si} | G_{i-1})| ) ds
= 4 ∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ s^{-1/p} E( max_{1≤n≤2^k} |∑_{i=1}^n E(A_i X_i I(|X_i| ≤ s^{1/p}) | G_{i-1})| ) ds
= 4 ∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ s^{-1/p} E( max_{1≤n≤2^k} |∑_{i=1}^n E(A_i X_i I(|X_i| > s^{1/p}) | G_{i-1})| ) ds    (since E(A_i X_i | G_{i-1}) = 0)
≤ 4 ∑_{k=1}^∞ 2^{-kp/r} ∫_{2^{kp/r}}^∞ s^{-1/p} ∑_{i=1}^{2^k} E|A_i| E[|X_i| I(|X_i| > s^{1/p})] ds
≤ C_1 ∑_{k=1}^∞ 2^{k-kp/r} ∫_{2^{kp/r}}^∞ s^{-1/p} E[X I(X > s^{1/p})] ds,

and hence

for p < r: I_3 ≤ C_1 EX < ∞, if 0 < r < 1; C_2 E[X log(1 + X)] < ∞, if r = 1; C_3 EX^r < ∞, if r > 1;
for p = r: I_3 ≤ C_4 EX < ∞, if 0 < r < 1; C_5 E[X log²(1 + X)] < ∞, if r = 1; C_6 E[X^r log(1 + X)] < ∞, if r > 1;    (2.13)
for p > r: I_3 ≤ C_7 EX < ∞, if 0 < p < 1; C_8 E[X log(1 + X)] < ∞, if p = 1; C_9 EX^p < ∞, if p > 1.

Consequently, in view of (2.1), (2.3), (2.4), (2.7)-(2.10), (2.12), and (2.13), one has (1.3). By (1.3), it is easy to get (1.2). □

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors read and approved the manuscript.

Author details

1Shandong University Qilu Securities Institute for Financial Studies, Shandong University, Jinan, China. 2School of Mathematics, Hefei University of Technology, Hefei, China. 3School of Mathematics, Shandong University, Jinan, China.

Acknowledgements

The authors are deeply grateful to the editor and two anonymous referees, whose insightful comments and suggestions have contributed substantially to the improvement of this paper. This work is supported by the National Natural Science Foundation of China (11171188), the National Social Science Fund of China (14ATJ005), the Humanities and Social Science Planning Foundation of the Ministry of Education of China (14YJCZH155), and the Fundamental Scientific Research Funds for the Central Universities (2015HGZX0018).

Received: 12 April 2015 Accepted: 13 August 2015 Published online: 28 August 2015

References

1. Marcinkiewicz, J, Zygmund, A: Sur les fonctions indépendantes. Fundam. Math. 29, 60-90 (1937)

2. Burkholder, DL: Successive conditional expectations of an integrable function. Ann. Math. Stat. 33(3), 887-893 (1962)

3. Gut, A: Moments of the maximum of normed partial sums of random variables with multidimensional indices. Z. Wahrscheinlichkeitstheor. Verw. Geb. 64, 205-220 (1979)

4. Choi, BD, Sung, SH: On moment conditions for the supremum of normed sums. Stoch. Process. Appl. 26, 99-106 (1987)

5. Chen, PY, Gan, SX: On moments of the maximum of normed partial sums of ρ-mixing random variables. Stat. Probab. Lett. 78(10), 1215-1221 (2008)

6. Adler, A, Rosalsky, A: Some general strong laws for weighted sums of stochastically dominated random variables. Stoch. Anal. Appl. 5(1), 1-16(1987)

7. Adler, A, Rosalsky, A, Taylor, RL: Strong laws of large numbers for weighted sums of random elements in normed linear spaces. Int. J. Math. Math. Sci. 12(3), 507-530 (1989)

8. Hanson, DL, Wright, FT: A bound on tail probabilities for quadratic forms in independent random variables. Ann. Math. Stat. 42(3), 1079-1083 (1971)

9. Wright, FT: A bound on tail probabilities for quadratic forms in independent random variables whose distributions are not necessarily symmetric. Ann. Probab. 1 (6), 1068-1070 (1973)

10. Stout, WF: Almost Sure Convergence. Academic Press, New York (1974)

11. Hall, P, Heyde, CC: Martingale Limit Theory and Its Application. Academic Press, New York (1980)

12. Shiryaev, AN: Probability, 2nd edn. Springer, New York (1996)

13. Gut, A: Probability: A Graduate Course. Springer, Berlin (2005)

14. Ghosal, S, Chandra, TK: Complete convergence of martingale arrays. J. Theor. Probab. 11 (3), 621-631 (1998)

15. Stoica, G: Baum-Katz-Nagaev type results for martingales. J. Math. Anal. Appl. 336(2), 1489-1492 (2007)

16. Stoica, G: A note on the rate of convergence in the strong law of large numbers for martingales. J. Math. Anal. Appl. 381(2), 910-913 (2011)

17. Wang, XJ, Hu, SH, Yang, WZ, Wang, XH: Convergence rates in the strong law of large numbers for martingale difference sequences. Abstr. Appl. Anal. 2012, Article ID 572493 (2012)

18. Yang, WZ, Hu, SH, Wang, XJ: Complete convergence for moving average process of martingale differences. Discrete Dyn. Nat. Soc. 2012, Article ID 128492 (2012)

19. Yang, WZ, Wang, YW, Wang, XH, Hu, SH: Complete moment convergence for randomly weighted sums of martingale differences. J. Inequal. Appl. 2013,396 (2013)

20. Thanh, LV, Yin, G: Almost sure and complete convergence of randomly weighted sums of independent random elements in Banach spaces. Taiwan. J. Math. 15(4), 1759-1781 (2011)

21. Thanh, LV, Yin, G, Wang, LY: State observers with random sampling times and convergence analysis of double-indexed and randomly weighted sums of mixing processes. SIAM J. Control Optim. 49(1), 106-124 (2011)

22. Kevei, P, Mason, DM: The asymptotic distribution of randomly weighted sums and self-normalized sums. Electron. J. Probab. 17,46 (2012)

23. Hormann, S, Swan, Y: A note on the normal approximation error for randomly weighted self-normalized sums. Period. Math. Hung. 67(2), 143-154 (2013)

24. Cabrera, MO, Rosalsky, A, Volodin, A: Some theorems on conditional mean convergence and conditional almost sure convergence for randomly weighted sums of dependent random variables. Test 21(2), 369-385 (2012)

25. Shen, AT, Wu, RC, Chen, Y, Zhou, Y: Conditional convergence for randomly weighted sums of random variables based on conditional residual h-integrability. J. Inequal. Appl. 2013, 122 (2013)

26. Gao, QW, Wang, YB: Randomly weighted sums with dominated varying-tailed increments and application to risk theory. J. Korean Stat. Soc. 39(3), 305-314 (2010)

27. Tang, QH, Yuan, ZY: Randomly weighted sums of subexponential random variables with application to capital allocation. Extremes 17(3), 467-493 (2014)

28. Chen, DY: Randomly weighted sums of dependent random variables with dominated variation. J. Math. Anal. Appl. 420(2), 1617-1633 (2014)