
Journal of Inequalities and Applications (a SpringerOpen journal)

RESEARCH  Open Access

Precise asymptotics in the law of the iterated logarithm for R/S statistic

Tian-Xiao Pang¹, Zheng-Yan Lin¹ and Kyo-Shin Hwang¹,²

*Correspondence: hwang0412@naver.com

¹Department of Mathematics, Zhejiang University, Yuquan Campus, Hangzhou, 310027, P.R. China
²School of General Education, Yeungnam University, Gyeongsan, Gyeongbuk 712-749, Korea

Abstract

Let {X, X_n, n ≥ 1} be a sequence of i.i.d. random variables which is in the domain of attraction of the normal law with zero mean and possibly infinite variance, and let Q(n) = R(n)/S(n) be the rescaled range statistic, where

R(n) = max_{1≤k≤n} {∑_{j=1}^k (X_j − X̄_n)} − min_{1≤k≤n} {∑_{j=1}^k (X_j − X̄_n)},

S²(n) = ∑_{j=1}^n (X_j − X̄_n)²/n and X̄_n = ∑_{j=1}^n X_j/n. In this paper, two precise asymptotics related to convergence in probability for the Q(n) statistic are established under some mild conditions. Moreover, a precise asymptotic related to almost sure convergence for the Q(n) statistic is also considered under some mild conditions.

MSC: 60F15; 60G50

Keywords: domain of attraction of the normal law; law of the iterated logarithm; precise asymptotics; R/S statistic


1 Introduction and main results

Let {X, X_n, n ≥ 1} be a sequence of i.i.d. random variables and set S_n = ∑_{j=1}^n X_j for n ≥ 1, log x = ln(x ∨ e) and log log x = log(log x). Hsu and Robbins [1] and Erdős [2] established the well-known complete convergence result: for any ε > 0, ∑_{n=1}^∞ P(|S_n| > εn) < ∞ if and only if EX = 0 and EX² < ∞. Baum and Katz [3] extended this result and proved that, for 1 ≤ p < 2, ε > 0 and r ≥ p, ∑_{n=1}^∞ n^{r−2} P(|S_n| > εn^{1/p}) < ∞ holds if and only if EX = 0 and E|X|^{rp} < ∞. Since then, many authors have considered various extensions of the results of Hsu-Robbins-Erdős and Baum-Katz. Some of them studied the precise asymptotics of the infinite sums as ε → 0 (cf. Heyde [4], Chen [5] and Spătaru [6]). We note that the above results do not hold for p = 2, because P(|S_n| > εn^{1/2}) → P(|N(0,1)| > ε/√(EX²)) by the central limit theorem when EX = 0, where N(0,1) denotes a standard normal random variable, and this limit does not depend on n. However, if n^{1/2} is replaced by some other function of n, results on precise asymptotics may still hold. For example, replacing n^{1/2} by √(n log log n), Gut and Spătaru [7] established the following results, called precise asymptotics in the law of the iterated logarithm.

Theorem A  Suppose {X, X_n, n ≥ 1} is a sequence of i.i.d. random variables with EX = 0, EX² = σ² and EX²(log log |X|)^{1+δ} < ∞ for some δ > 0, and let a_n = O(√n/(log log n)^γ) for some γ > 1/2. Then

lim_{ε↘1} √(ε² − 1) ∑_{n=1}^∞ (1/n) P(|S_n| ≥ εσ√(2n log log n) + a_n) = 1.

©2014 Pang et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Theorem B  Suppose {X, X_n, n ≥ 1} is a sequence of i.i.d. random variables with EX = 0 and EX² = σ² < ∞. Then

lim_{ε↘0} ε² ∑_{n=1}^∞ (1/(n log n)) P(|S_n| ≥ εσ√(n log log n)) = 1.

Recently, by applying a strong approximation method different from that of Gut and Spătaru, Zhang [8] gave sufficient and necessary conditions for this kind of result to hold. One of his results is stated as follows.

Theorem C  Let a > −1 and b > −1/2, and let a_n(ε) be a function of ε such that

a_n(ε) log log n → τ  as n → ∞ and ε ↘ √(a + 1).

Suppose that

EX = 0,  EX² = σ² < ∞  and  EX²(log |X|)^a (log log |X|)^{b−1} < ∞,  (1.1)

EX² I{|X| > t} = o((log log t)^{−1})  as t → ∞.  (1.2)

Then

lim_{ε↘√(a+1)} (ε² − (a + 1))^{b+1/2} ∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(M_n ≥ (ε + a_n(ε))√(2σ²n log log n)) = (2/√(π(a + 1))) exp(−2τ√(a + 1)) Γ(b + 1/2),  (1.3)

lim_{ε↘√(a+1)} (ε² − (a + 1))^{b+1/2} ∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(|S_n| ≥ (ε + a_n(ε))√(2σ²n log log n)) = (1/√(π(a + 1))) exp(−2τ√(a + 1)) Γ(b + 1/2).  (1.4)

Here M_n = max_{k≤n} |S_k|, and here and in what follows Γ(·) is the gamma function. Conversely, if either (1.3) or (1.4) holds for a > −1, b > −1/2 and some 0 ≤ τ < ∞, then (1.1) holds and

lim inf_{t→∞} (log log t) EX² I{|X| > t} = 0.

It is worth mentioning that the precise asymptotics in a Chung-type law of the iterated logarithm, law of logarithm and Chung-type law of logarithm were also considered by Zhang [9], Zhang and Lin [10] and Zhang [11], respectively.

The above-mentioned results are all related to partial sums. This paper is devoted to the study of some precise asymptotics for the rescaled range statistic (or the R/S statistic), defined by Q(n) = R(n)/S(n), where

R(n) = max_{1≤k≤n} {∑_{j=1}^k (X_j − X̄_n)} − min_{1≤k≤n} {∑_{j=1}^k (X_j − X̄_n)},
S²(n) = ∑_{j=1}^n (X_j − X̄_n)²/n,  X̄_n = ∑_{j=1}^n X_j/n.  (1.5)
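As an illustration (ours, not part of the original paper), the statistic in (1.5) is straightforward to compute; the following Python sketch evaluates Q(n) for a sample, with `rs_statistic` a hypothetical helper name.

```python
import numpy as np

def rs_statistic(x):
    """Rescaled range statistic Q(n) = R(n)/S(n) from (1.5).

    R(n) is the range of the centred partial sums and S(n) is the
    sample standard deviation with divisor n.
    """
    x = np.asarray(x, dtype=float)
    d = np.cumsum(x - x.mean())                # partial sums of X_j - mean
    r = d.max() - d.min()                      # R(n)
    s = np.sqrt(np.mean((x - x.mean()) ** 2))  # S(n)
    return r / s

# Under the i.i.d. hypothesis, Q(n)/sqrt(n) converges in law to
# Y = sup B(t) - inf B(t), B a standard Brownian bridge.
q = rs_statistic(np.random.default_rng(0).standard_normal(10_000))
```

For long-range dependent data, by contrast, Q(n) grows like n^H with Hurst exponent H > 1/2, which was the basis of Hurst's original application.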

This statistic, introduced by Hurst [12] when he studied hydrology data of the Nile river and reservoir design, plays an important role in testing statistical dependence of a sequence of random variables and has been used in many practical subjects such as hydrology, geophysics and economics. Because of the importance of this statistic, various limit theorems for the R/S statistic have been studied. Among them, Feller [13] established the limit distribution of R(n)/√n in the i.i.d. case, Mandelbrot [14] studied weak convergence of Q(n) in a more general setting, while Lin [15-17] and Lin and Lee [18] established the law of the iterated logarithm for Q(n) under various assumptions. Among Lin's results, we notice that Lin [15] proved that

lim sup_{n→∞} √(2/(n log log n)) Q(n) = 1  a.s.  (1.6)

holds whenever {X, X_n, n ≥ 1} is a sequence of i.i.d. random variables which is in the domain of attraction of the normal law with zero mean.

Recently, applying a method similar to the one employed by Gut and Spătaru [7], Wu and Wen [19] established the following result on precise asymptotics in the law of the iterated logarithm for the R/S statistic.

Theorem D  Suppose {X, X_n, n ≥ 1} is a sequence of i.i.d. random variables with EX = 0, EX² < ∞. Then for b > −1,

lim_{ε↘0} ε^{2(b+1)} ∑_{n=1}^∞ ((log log n)^b/(n log n)) P(Q(n) ≥ ε√(2n log log n)) = EY^{2(b+1)}/(2^{b+1}(b + 1)).  (1.7)

Here and in what follows, we denote Y = sup_{0≤t≤1} B(t) − inf_{0≤t≤1} B(t), where B(t) is a standard Brownian bridge.

It is natural to ask whether there is a similar result for the R/S statistic when ε tends to a nonzero constant. In the present paper, a partial positive answer will be given under some mild conditions with the help of the strong approximation method, and, since the R/S statistic is defined in a self-normalized form, we will not require finiteness of the second moment of {X, X_n, n ≥ 1}. Moreover, a result stronger than Wu and Wen's is established in this paper, based on which a precise asymptotic related to a.s. convergence for the Q(n) statistic is derived under some mild conditions. Throughout the paper, C denotes a positive constant whose value may differ from place to place. The following are our main results.

Theorem 1.1  Suppose {X, X_n, n ≥ 1} is a sequence of i.i.d. random variables which is in the domain of attraction of the normal law with EX = 0, and the truncated second moment l(x) = EX²I{|X| ≤ x} satisfies l(x) ≤ c₁ exp(c₂(log x)^β) for some c₁ > 0, c₂ > 0 and 0 < β < 1. Let −1 < a < 0, b > −2 and let a_n(ε) be a function of ε such that

a_n(ε) log log n → τ  as n → ∞ and ε ↘ √(a + 1)/2.  (1.8)

Then we have

lim_{ε↘√(a+1)/2} (4ε² − (a + 1))^{b+2} ∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(Q(n) ≥ (ε + a_n(ε))√(2n log log n)) = 4(a + 1) Γ(b + 2) exp(−4τ√(a + 1)).  (1.9)

Theorem 1.2  Suppose {X, X_n, n ≥ 1} is a sequence of i.i.d. random variables which is in the domain of attraction of the normal law with EX = 0, and the truncated second moment l(x) = EX²I{|X| ≤ x} satisfies l(x) ≤ c₁ exp(c₂(log x)^β) for some c₁ > 0, c₂ > 0 and 0 < β < 1. Then (1.7) holds for every b > −1.

Theorem 1.3  Suppose {X, X_n, n ≥ 1} is a sequence of i.i.d. random variables which is in the domain of attraction of the normal law with EX = 0, and l(x) satisfies l(x) ≤ c₁ exp(c₂(log x)^β) for some c₁ > 0, c₂ > 0 and 0 < β < 1. Then for any b > −1, we have

lim_{ε↘0} ε^{2(b+1)} ∑_{n=1}^∞ ((log log n)^b/(n log n)) I{Q(n) ≥ ε√(2n log log n)} = EY^{2(b+1)}/(2^{b+1}(b + 1))  a.s.

Remark 1.1  Note that X belonging to the domain of attraction of the normal law is equivalent to l(x) being a slowly varying function at ∞. We note also that l(x) ≤ c₁ exp(c₂(log x)^β) is a weak enough assumption, satisfied by a large class of slowly varying functions such as (log log x)^α and (log x)^α for some 0 < α < ∞.
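To make Remark 1.1 concrete, consider the symmetric law with density f(t) = |t|⁻³ for |t| ≥ 1 (our own illustrative example, not from the paper): then EX² = ∞, yet l(x) = EX² I{|X| ≤ x} = 2 log x is slowly varying, so X is in the domain of attraction of the normal law, and l(x) obviously satisfies l(x) ≤ c₁ exp(c₂(log x)^β). A quick Monte Carlo check in Python:

```python
import numpy as np

# P(|X| > s) = s^(-2) for s >= 1, so |X| can be simulated as U^(-1/2)
# with U uniform on (0, 1); the sign is chosen symmetrically.
rng = np.random.default_rng(1)
n = 1_000_000
sign = np.where(rng.random(n) < 0.5, -1.0, 1.0)
x = sign / np.sqrt(rng.random(n))

# Empirical truncated second moment, to be compared with l(t) = 2 log t.
l_emp = {t: float(np.mean(np.where(np.abs(x) <= t, x * x, 0.0)))
         for t in (10.0, 100.0, 1000.0)}
```

The empirical values grow like 2 log t, i.e. only logarithmically in the truncation level, as a slowly varying l must.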

Remark 1.2  When EX² = σ² < ∞, the truncated second moment l(x) automatically satisfies the condition l(x) ≤ c₁ exp(c₂(log x)^β) for some c₁ > 0, c₂ > 0 and 0 < β < 1. Hence, Theorems 1.1-1.3 not only hold for random variables with finite second moments, but also for a class of random variables with infinite second moments. In particular, Theorem 1.2 includes Theorem D as a special case.

Remark 1.3  From Theorem C, one can see that finiteness of the second moment does not guarantee results on precise asymptotics in the LIL for partial sums when a > 0. Moreover, it is clear that the R/S statistic is more complicated than partial sums. Hence, it seems that it is not possible, or at least not easy, to prove (1.9) for a > 0 under the conditions of Theorem 1.1 alone. However, if we impose stronger moment conditions, similar to (1.1) and (1.2), on {X, X_n, n ≥ 1}, it should be possible to prove (1.9) for a > 0 by following the ideas in Zhang [8].

Remark 1.4  Checking the proof of Theorem 1.1, one can find that

lim_{ε↘√(a+1)} (ε² − (a + 1))^{b+2} ∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(Q(n) ≥ (ε + a_n(ε))√(2n log log n)/2) = 4(a + 1) Γ(b + 2) exp(−2τ'√(a + 1))

holds if a_n(ε) log log n → τ' as n → ∞ and ε ↘ √(a + 1), which perhaps looks more natural in view of (1.6).

The remainder of this paper is organized as follows. In Section 2, Theorem 1.1 is proved when {X, X_n, n ≥ 1} is a sequence of normal variables with zero mean. In Section 3, a truncation method and the strong approximation method are employed to approximate the probability related to the R(n) statistic. In Section 4, Theorems 1.1 and 1.2 are proved, while in Section 5 the proof of Theorem 1.3 is given, based on some preliminaries.

2 Normal case

In this section, Theorem 1.1 is proved in the case that {X, X_n, n ≥ 1} is a sequence of normal random variables with zero mean. To do so, we first recall that B(t) is a standard Brownian bridge and Y = sup_{0≤t≤1} B(t) − inf_{0≤t≤1} B(t). The distribution of Y plays an important role in our first result and, fortunately, it has been given by Kennedy [20]:

P(Y ≤ x) = 1 − 2 ∑_{n=1}^∞ (4x²n² − 1) exp(−2x²n²).  (2.1)
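Kennedy's series (2.1) is easy to evaluate numerically. The sketch below (ours, not from the paper) also checks it against the tail asymptotic P(Y > x) ~ 8x² exp(−2x²) used in the proof of Proposition 2.1; `bridge_range_cdf` is a hypothetical helper name.

```python
import math

def bridge_range_cdf(x, terms=200):
    """P(Y <= x) for the range Y of a Brownian bridge, via (2.1)."""
    if x <= 0:
        return 0.0
    s = sum((4.0 * x * x * n * n - 1.0) * math.exp(-2.0 * x * x * n * n)
            for n in range(1, terms + 1))
    return 1.0 - 2.0 * s

# The tail is already close to 8 x^2 exp(-2 x^2) at x = 3.
tail = 1.0 - bridge_range_cdf(3.0)
approx = 8.0 * 3.0 ** 2 * math.exp(-2.0 * 3.0 ** 2)
```

At x = 3 the exact tail is already roughly 97% of the asymptotic value, and the agreement improves rapidly as x grows.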

Now, the main results in this section are stated as follows.

Proposition 2.1  Let a > −1, b > −2 and let a_n(ε) be a function of ε such that

a_n(ε) log log n → τ  as n → ∞ and ε ↘ √(a + 1)/2.  (2.2)

Then we have

lim_{ε↘√(a+1)/2} (4ε² − (a + 1))^{b+2} ∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(Y ≥ (ε + a_n(ε))√(2 log log n)) = 4(a + 1) Γ(b + 2) exp(−4τ√(a + 1)).

Proof  Firstly, it follows easily from (2.1) that

P(Y > x) ~ 8x² exp(−2x²)  as x → +∞.

Then, by condition (2.2), one has

P(Y ≥ (ε + a_n(ε))√(2 log log n)) ~ 16(ε + a_n(ε))² log log n · exp(−4(ε + a_n(ε))² log log n) ~ 16ε² log log n · exp(−4ε² log log n) exp(−8εa_n(ε) log log n)

as n → ∞, uniformly in ε ∈ (√(a+1)/2, √(a+1)/2 + δ) for some δ > 0. Hence, for the above-mentioned δ > 0 and any 0 < θ < 1, there exists an integer n₀ such that, for all n ≥ n₀ and ε ∈ (√(a+1)/2, √(a+1)/2 + δ),

4(a + 1) log log n · exp(−4ε² log log n) exp(−4τ√(a + 1) − θ) ≤ P(Y ≥ (ε + a_n(ε))√(2 log log n)) ≤ 4(a + 1) log log n · exp(−4ε² log log n) exp(−4τ√(a + 1) + θ).

Obviously, it suffices to show

lim_{ε↘√(a+1)/2} (4ε² − (a + 1))^{b+2} ∑_{n=1}^∞ ((log n)^a (log log n)^{b+1}/n) exp(−4ε² log log n) = Γ(b + 2)  (2.3)

for proving Proposition 2.1, by the arbitrariness of θ. To this end, noting that the limit in (2.3) does not depend on any finite number of terms of the infinite series, we have

lim_{ε↘√(a+1)/2} (4ε² − (a + 1))^{b+2} ∑_{n=1}^∞ ((log n)^a (log log n)^{b+1}/n) exp(−4ε² log log n)
= lim_{ε↘√(a+1)/2} (4ε² − (a + 1))^{b+2} ∫_{e^e}^∞ (log x)^{a−4ε²} (log log x)^{b+1} x^{−1} dx
= lim_{ε↘√(a+1)/2} (4ε² − (a + 1))^{b+2} ∫_1^∞ exp(y(a + 1 − 4ε²)) y^{b+1} dy  (by letting y = log log x)
= lim_{ε↘√(a+1)/2} ∫_{4ε²−(a+1)}^∞ e^{−u} u^{b+1} du  (by letting u = y(4ε² − (a + 1)))
= Γ(b + 2).

The proposition is proved now. □
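The final substitution above can be checked numerically: as ε ↘ √(a+1)/2, the lower limit 4ε² − (a+1) of the last integral tends to 0 and the integral increases to Γ(b+2). A short sketch (ours, using SciPy's quad for the quadrature):

```python
import math
from scipy.integrate import quad

def tail_integral(b, c):
    """Evaluate the integral of exp(-u) * u**(b + 1) over (c, infinity)."""
    val, _err = quad(lambda u: math.exp(-u) * u ** (b + 1), c, math.inf)
    return val

b = -1.5  # any b > -2 is allowed; here Gamma(b + 2) = Gamma(0.5) = sqrt(pi)
vals = [tail_integral(b, c) for c in (1.0, 0.1, 1e-6)]
```

As the lower limit c shrinks, `vals` increases monotonically toward math.gamma(b + 2) ≈ 1.7725.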

Proposition 2.2  For any b > −1,

lim_{ε↘0} ε^{2(b+1)} ∑_{n=1}^∞ ((log log n)^b/(n log n)) P(Y ≥ ε√(2 log log n)) = EY^{2(b+1)}/(2^{b+1}(b + 1)).

Proof  The proof can be found in Wu and Wen [19]. □
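The constant EY^{2(b+1)} appearing here and in Theorem D can be computed from Kennedy's formula (2.1) via EY^{2(b+1)} = ∫₀^∞ 2(b+1) x^{2b+1} P(Y > x) dx. The sketch below is ours; term-by-term integration of (2.1) suggests the closed value EY² = π²/6 for b = 0, which the quadrature confirms.

```python
import math
from scipy.integrate import quad

def bridge_range_tail(x, terms=500):
    """P(Y > x) from the series (2.1)."""
    if x <= 0.02:
        return 1.0  # the series converges slowly here and the tail is ~1
    s = sum((4.0 * x * x * n * n - 1.0) * math.exp(-2.0 * x * x * n * n)
            for n in range(1, terms + 1))
    return min(1.0, 2.0 * s)

# EY^2 = integral of 2 x P(Y > x) over (0, infinity): the b = 0 case.
ey2, _err = quad(lambda x: 2.0 * x * bridge_range_tail(x), 0.0, 10.0, limit=200)
```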

3 Truncation and approximation

In this section, we use the truncation method and the strong approximation method to show that the probability related to R(n), with suitable normalization, can be approximated by that for Y. To do this, we first give some notation. Put c = inf{x ≥ 1 : l(x) > 0} and

η_n = inf{s : s ≥ c + 1, l(s)/s² ≤ (log log n)⁴/n}.  (3.1)

For each n and 1 ≤ i ≤ n, we let

X'_{ni} = X_i I{|X_i| ≤ η_n},  X*_{ni} = X'_{ni} − EX'_{ni},
S'_{ni} = ∑_{j=1}^i X'_{nj},  S*_{ni} = ∑_{j=1}^i X*_{nj},  X̄*_n = S*_{nn}/n,  D_n² = ∑_{j=1}^n Var(X'_{nj}).  (3.2)

It follows easily that

D_n² ~ nEX'²_{n1} ~ n l(η_n) ~ η_n²(log log n)⁴.

Furthermore, we denote by R*(n) the truncated R statistic, defined by the first expression of (1.5) with every X_i replaced by X*_{ni}, i = 1,…,n. In addition, for any 0 < λ < 1, all j ≥ k and k large enough, following the lines of the proof of (2.4) in Pang, Zhang and Wang [21], we easily have

1/(l(η_k)(log k)^λ(log log k)²) ≤ C ∑_{j=k}^∞ 1/(j l(η_j) log j (log log j)²),  (3.3)

despite a slight difference between the definitions of η_n in Pang, Zhang and Wang [21] and in this paper. Next, we give the main result of this section.

Proposition 3.1  For any a < 0, b ∈ R and 1/2 < p < 2, there exists a sequence of positive numbers {p_n, n ≥ 1} such that, for any x > 0,

P(Y ≥ x + 2/(log log n)^p) − p_n ≤ P(R(n) ≥ xD_n) ≤ P(Y ≥ x − 2/(log log n)^p) + p_n,

where p_n ≥ 0 satisfies

∑_{n=1}^∞ ((log n)^a (log log n)^b/n) p_n < ∞.  (3.4)

The following lemmas will be useful in the proof of this proposition.

Lemma 3.1  For any sequence of independent random variables {ξ_n, n ≥ 1} with zero mean and finite variance, there exists a sequence of independent normal variables {Y_n, n ≥ 1} with EY_n = 0 and EY_n² = Eξ_n² such that, for all q > 2 and y > 0,

P(max_{k≤n} |∑_{i=1}^k ξ_i − ∑_{i=1}^k Y_i| ≥ y) ≤ (Aq)^q y^{−q} ∑_{i=1}^n E|ξ_i|^q,

whenever E|ξ_i|^q < ∞, i = 1,…,n. Here A is a universal constant.

Proof  See Sakhanenko [22, 23]. □

Lemma 3.2  Let {W(t); t ≥ 0} be a standard Wiener process. For any ε > 0 there exists a constant C = C(ε) > 0 such that

P(sup_{0≤s≤1−h} sup_{0<t≤h} |W(s + t) − W(s)| ≥ x√h) ≤ C h^{−1} exp(−x²/(2 + ε))

for every positive x and 0 < h < 1.

Proof  It is Lemma 1.1.1 of Csörgő and Révész [24]. □

Lemma 3.3  For any a < 0, b ∈ R and 1/2 < p < 2, there exists a sequence of positive numbers {q_n, n ≥ 1} such that, for any x > 0,

P(Y ≥ x + 1/(log log n)^p) − q_n ≤ P(R*(n) ≥ xD_n) ≤ P(Y ≥ x − 1/(log log n)^p) + q_n,  (3.5)

where q_n ≥ 0 satisfies

∑_{n=1}^∞ ((log n)^a (log log n)^b/n) q_n < ∞.  (3.6)

Proof  Let q_n = P(|R*(n)/D_n − Y| ≥ 1/(log log n)^p); then (3.5) obviously holds, and it remains to verify (3.6). For each n, let {W_n(t), t ≥ 0} be a standard Wiener process; then {W_n(tD_n²)/D_n, t ≥ 0} =_d {W_n(t), t ≥ 0} and

q_n ≤ 2P(sup_{0≤s≤1} |(1/D_n) ∑_{j≤ns} (X*_{nj} − X̄*_n) − (W_n(s) − sW_n(1))| ≥ 1/(2(log log n)^p))
≤ 2P(max_{1≤k≤n} |∑_{j=1}^k (X*_{nj} − X̄*_n) − (W_n(kD_n²/n) − (k/n)D_nW_n(1))| ≥ D_n/(4(log log n)^p))
+ 2P(sup_{0≤s≤1} |(W_n(⌊ns⌋D_n²/n) − (⌊ns⌋/n)D_nW_n(1)) − (W_n(sD_n²) − sD_nW_n(1))| ≥ D_n/(4(log log n)^p))
:= I_n + II_n.

We consider I_n first. Clearly,

I_n ≤ 2P(max_{1≤k≤n} |S*_{nk} − W_n(kD_n²/n)| ≥ D_n/(8(log log n)^p)) + 2P(max_{1≤k≤n} (k/n)|S*_{nn} − D_nW_n(1)| ≥ D_n/(8(log log n)^p))
≤ 4P(max_{1≤k≤n} |S*_{nk} − W_n(kD_n²/n)| ≥ D_n/(8(log log n)^p)).

It follows from Lemma 3.1 and (3.3) that, for q > 2 large enough,

∑_{n=1}^∞ ((log n)^a (log log n)^b/n) I_n
≤ C ∑_{n=1}^∞ ((log n)^a (log log n)^b/n) ((log log n)^p/D_n)^q ∑_{j=1}^n E|X*_{nj}|^q
≤ C ∑_{n=1}^∞ ((log n)^a (log log n)^{b+pq}/(n l(η_n))^{q/2}) E|X|^q I{|X| ≤ η_n}
≤ C ∑_{n=1}^∞ ((log n)^a (log log n)^{b+pq}/(n l(η_n))^{q/2}) ∑_{k=1}^n E|X|^q I{η_{k−1} < |X| ≤ η_k}
≤ C ∑_{k=1}^∞ E|X|^q I{η_{k−1} < |X| ≤ η_k} ∑_{n=k}^∞ (log n)^a (log log n)^{b+pq} (n l(η_n))^{−q/2}
≤ C ∑_{k=1}^∞ η_k^{q−2} EX² I{η_{k−1} < |X| ≤ η_k} · ((log k)^a (log log k)^{b+pq})/(k^{q/2−1} (l(η_k))^{q/2})
≤ C ∑_{k=1}^∞ ((log k)^a (log log k)^{b+pq−2q+4}/l(η_k)) EX² I{η_{k−1} < |X| ≤ η_k}
≤ C ∑_{k=1}^∞ EX² I{η_{k−1} < |X| ≤ η_k} ∑_{j=k}^∞ 1/(j l(η_j) log j (log log j)²)
≤ C ∑_{j=1}^∞ (1/(j l(η_j) log j (log log j)²)) ∑_{k=1}^j EX² I{η_{k−1} < |X| ≤ η_k}
≤ C ∑_{j=1}^∞ 1/(j log j (log log j)²) < ∞.  (3.7)

Next, we deal with II_n. Clearly, one has

II_n ≤ 2P(sup_{0≤s≤1} |W_n(⌊ns⌋D_n²/n) − W_n(sD_n²)| ≥ D_n/(8(log log n)^p)) + 2P(sup_{0≤s≤1} |(⌊ns⌋/n) − s| · D_n|W_n(1)| ≥ D_n/(8(log log n)^p)) := II_n(1) + II_n(2).

It follows from Lemma 3.2 that

II_n(1) = 2P(sup_{0≤s≤1} |W_n(⌊ns⌋/n) − W_n(s)| ≥ 1/(8(log log n)^p)) ≤ Cn exp(−n/(192(log log n)^{2p})),

which obviously leads to

∑_{n=1}^∞ ((log n)^a (log log n)^b/n) II_n(1) < ∞.  (3.10)

On the other hand,

II_n(2) = 2P(sup_{0≤s≤1} |(⌊ns⌋/n) − s| · |W_n(1)| ≥ 1/(8(log log n)^p)) ≤ 2P(|W_n(1)| ≥ n/(8(log log n)^p)) ≤ (C(log log n)^p/n) exp(−n²/(128(log log n)^{2p})),

which also obviously leads to

∑_{n=1}^∞ ((log n)^a (log log n)^b/n) II_n(2) < ∞.  (3.11)

Equations (3.7)-(3.11) yield (3.6). The lemma is proved now. □

Lemma 3.4  For any a < 0 and b ∈ R, one has

∑_{n=1}^∞ (log n)^a (log log n)^b P(|X| > η_n) < ∞.

Proof  It follows from (3.3) that

∑_{n=1}^∞ (log n)^a (log log n)^b P(|X| > η_n)
≤ C ∑_{n=1}^∞ (log n)^a (log log n)^b ∑_{k=n}^∞ P(η_k < |X| ≤ η_{k+1})
≤ C ∑_{k=1}^∞ (1/η_k²) EX² I{η_k < |X| ≤ η_{k+1}} ∑_{n=1}^k (log n)^a (log log n)^b
≤ C ∑_{k=1}^∞ ((log k)^a (log log k)^{b+4}/l(η_k)) EX² I{η_k < |X| ≤ η_{k+1}}
≤ C ∑_{k=1}^∞ EX² I{η_k < |X| ≤ η_{k+1}} ∑_{j=k}^∞ 1/(j l(η_j) log j (log log j)²)
≤ C ∑_{j=1}^∞ 1/(j log j (log log j)²) < ∞. □

Lemma 3.5  Let X be a random variable. Then the following statements are equivalent:

(a) X is in the domain of attraction of the normal law;
(b) x²P(|X| > x) = o(l(x));
(c) xE(|X| I{|X| > x}) = o(l(x));
(d) E(|X|^ν I{|X| ≤ x}) = o(x^{ν−2} l(x)) for ν > 2.

Proof  It is Lemma 1 in Csörgő, Szyszkowicz and Wang [25]. □

Lemma 3.6  For any a < 0 and b ∈ R, one has, with δ(n) = 1/(log log n · log log log n),

∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(|S²(n) − l(η_n)| ≥ δ(n) l(η_n)) < ∞.

Proof  It is easy to see that, for large n,

P(|S²(n) − l(η_n)| ≥ δ(n) l(η_n))
≤ P(|(1/n)∑_{i=1}^n X_i² − l(η_n)| ≥ δ(n) l(η_n)/2) + P(X̄_n² ≥ δ(n) l(η_n)/2)
≤ P(∑_{i=1}^n X'²_{ni} ≥ (1 + δ(n)/2) n l(η_n)) + P(∑_{i=1}^n X'²_{ni} ≤ (1 − δ(n)/2) n l(η_n)) + 2nP(|X| > η_n) + P(|∑_{i=1}^n X*_{ni}| ≥ n√(δ(n) l(η_n))/2),  (3.12)

where we have used

|E∑_{i=1}^n X'_{ni}| ≤ nE|X| I{|X| > η_n} = o(n l(η_n)/η_n) = o(n√(δ(n) l(η_n)))

by Lemma 3.5. Applying Lemma 3.4, we only need to show

∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(∑_{i=1}^n X'²_{ni} ≥ (1 + δ(n)/2) n l(η_n)) < ∞,
∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(∑_{i=1}^n X'²_{ni} ≤ (1 − δ(n)/2) n l(η_n)) < ∞,  (3.13)
∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(|∑_{i=1}^n X*_{ni}| ≥ n√(δ(n) l(η_n))/2) < ∞

for proving Lemma 3.6. Consider the first part of (3.13) first. By employing Lemma 3.5 and Bernstein's inequality (cf. Lin and Bai [26]), we have, for any fixed ν > 1,

∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(∑_{i=1}^n X'²_{ni} ≥ (1 + δ(n)/2) n l(η_n))
= ∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(∑_{i=1}^n (X'²_{ni} − EX'²_{ni}) ≥ δ(n) n l(η_n)/2)
≤ C ∑_{n=1}^∞ ((log n)^a (log log n)^b/n) exp(−(δ²(n) n² l²(η_n)/4)/(2(nEX⁴I{|X| ≤ η_n} + δ(n) n l(η_n) η_n²/2)))
≤ C ∑_{n=1}^∞ ((log n)^a (log log n)^b/n) exp(−ν log log n) < ∞,  (3.14)

where we have used n l(η_n)/η_n² = (log log n)⁴, EX⁴I{|X| ≤ η_n} = o(η_n² l(η_n)) and the definition of δ(n). The second part of (3.13) can be proved by similar arguments. Now, let us consider the third part of (3.13). It follows from Markov's inequality that

∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(|∑_{i=1}^n X*_{ni}| ≥ n√(δ(n) l(η_n))/2)
≤ C ∑_{n=1}^∞ ((log n)^a (log log n)^b/n) · (n l(η_n))/(n² δ(n) l(η_n))
≤ C ∑_{n=1}^∞ (log n)^a (log log n)^{b+2}/n² < ∞.

The proof is completed now. □

Lemma 3.7  Define Δ_n = |R*(n) − R(n)|. Then for any a < 0 and b ∈ R, one has

∑_{n=1}^∞ ((log n)^a (log log n)^b/n) P(Δ_n ≥ D_n/(log log n)²) < ∞.

Proof  Firstly, notice that the R(n) statistic has the equivalent expression

R(n) = max_{1≤i<j≤n} |S_j − S_i − ((j − i)/n) S_n|,  (3.15)

and so does R*(n) with X_i replaced by X*_{ni} in (3.15), i = 1,…,n. That is,

R*(n) = max_{1≤i<j≤n} |S'_{nj} − S'_{ni} − ((j − i)/n) S'_{nn} − (ES'_{nj} − ES'_{ni} − ((j − i)/n) ES'_{nn})|.

Let β_n = 2nE|X| I{|X| > η_n}. Then

max_{1≤i<j≤n} |ES'_{nj} − ES'_{ni} − ((j − i)/n) ES'_{nn}| ≤ β_n.

Setting

L = {n : β_n ≤ D_n/(log log n)²},

it is easily seen that, for n ∈ L,

I{Δ_n ≥ D_n/(log log n)²} ≤ ∑_{j=1}^n I{X_j ≠ X'_{nj}},

since D_n ~ η_n(log log n)². Hence, it follows from Lemma 3.4 that

∑_{n∈L} ((log n)^a (log log n)^b/n) P(Δ_n ≥ D_n/(log log n)²) ≤ ∑_{n=1}^∞ (log n)^a (log log n)^b P(|X| > η_n) < ∞.

When n ∉ L, applying (3.3) yields

∑_{n∉L} ((log n)^a (log log n)^b/n) P(Δ_n ≥ D_n/(log log n)²)
≤ ∑_{n∉L} ((log n)^a (log log n)^b/n) · β_n(log log n)²/D_n
≤ C ∑_{n=1}^∞ ((log n)^a (log log n)^{b+4}/√(n l(η_n))) E|X| I{|X| > η_n}
≤ C ∑_{n=1}^∞ ((log n)^a (log log n)^{b+4}/√(n l(η_n))) ∑_{k=n}^∞ E|X| I{η_k < |X| ≤ η_{k+1}}
≤ C ∑_{k=1}^∞ (√k (log k)^a (log log k)^{b+4}/√(l(η_k))) · (1/η_k) EX² I{η_k < |X| ≤ η_{k+1}}
≤ C ∑_{k=1}^∞ ((log k)^a (log log k)^{b+6}/l(η_k)) EX² I{η_k < |X| ≤ η_{k+1}}
≤ C ∑_{j=1}^∞ 1/(j log j (log log j)²) < ∞. □

Now, we turn to the proof of Proposition 3.1.

Proof of Proposition 3.1  Applying Lemma 3.3, one easily has

P(R(n) ≥ xD_n)
≤ P(R(n) ≥ xD_n, Δ_n < D_n/(log log n)²) + P(Δ_n ≥ D_n/(log log n)²)
≤ P(R*(n) ≥ xD_n − D_n/(log log n)²) + P(Δ_n ≥ D_n/(log log n)²)
≤ P(Y ≥ x − 1/(log log n)² − 1/(log log n)^p) + q_n + P(Δ_n ≥ D_n/(log log n)²)
≤ P(Y ≥ x − 2/(log log n)^p) + q_n + P(Δ_n ≥ D_n/(log log n)²).

Also, one has

P(R(n) ≥ xD_n)
≥ P(R*(n) ≥ xD_n + D_n/(log log n)²) − P(Δ_n ≥ D_n/(log log n)²)
≥ P(Y ≥ x + 1/(log log n)² + 1/(log log n)^p) − q_n − P(Δ_n ≥ D_n/(log log n)²)
≥ P(Y ≥ x + 2/(log log n)^p) − q_n − P(Δ_n ≥ D_n/(log log n)²),

where the last steps use p < 2. Letting p_n = q_n + P(Δ_n ≥ D_n/(log log n)²) completes the proof by Lemmas 3.3 and 3.7. □

4 Proofs of Theorems 1.1 and 1.2

Proof of Theorem 1.1  For any 0 < δ < √(a + 1)/4 and √(a + 1)/2 − δ < ε < √(a + 1)/2 + δ, we have

P(Y ≥ (ε + a''_n(ε))√(2 log log n)) − p_n − P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
= P(Y ≥ (ε + a'_n(ε))√(2 log log n) + 2/(log log n)^p) − p_n − P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
≤ P(R(n) ≥ (ε + a'_n(ε))√(2 log log n) D_n) − P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
= P(R(n) ≥ (ε + a_n(ε))√(2(1 + δ(n)) n l(η_n) log log n)) − P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
≤ P(Q(n) ≥ (ε + a_n(ε))√(2n log log n))
≤ P(R(n) ≥ (ε + a_n(ε))√(2(1 − δ(n)) n l(η_n) log log n)) + P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
= P(R(n) ≥ (ε + a'''_n(ε))√(2 log log n) D_n) + P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
≤ P(Y ≥ (ε + a'''_n(ε))√(2 log log n) − 2/(log log n)^p) + p_n + P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
= P(Y ≥ (ε + a''''_n(ε))√(2 log log n)) + p_n + P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n)),  (4.1)

where

a'_n(ε) = (√(n l(η_n))/D_n)(ε + a_n(ε))√(1 + δ(n)) − ε,
a''_n(ε) = a'_n(ε) + √2/(log log n)^{p+1/2},
a'''_n(ε) = (√(n l(η_n))/D_n)(ε + a_n(ε))√(1 − δ(n)) − ε,
a''''_n(ε) = a'''_n(ε) − √2/(log log n)^{p+1/2}.

Noting that n l(η_n) ≥ D_n² ~ n l(η_n), one easily has

((√(n l(η_n))/D_n)(ε + a_n(ε))√(1 ± δ(n)) − ε) log log n = (√(n l(η_n))/D_n)√(1 ± δ(n)) a_n(ε) log log n + ((√(n l(η_n))/D_n)√(1 ± δ(n)) − 1) ε log log n  (4.2)

and, for large n,

|((√(n l(η_n))/D_n)√(1 ± δ(n)) − 1) ε log log n| ≤ |(√(1 ± 2δ(n)) − 1) ε log log n| ≤ 2εδ(n) log log n = 2ε/log log log n,  (4.3)

which tends to zero as n → ∞ and ε ↘ √(a + 1)/2. Hence, we have

a''_n(ε) log log n → τ  and  a''''_n(ε) log log n → τ  as n → ∞ and ε ↘ √(a + 1)/2,

since p > 1/2 and a_n(ε) satisfies (1.8). Now, it follows from Proposition 2.1, (3.4) and Lemma 3.6 that Theorem 1.1 is true. □

Proof of Theorem 1.2  For any 0 < γ < 1, applying arguments similar to those used in (4.1), we have, for large n,

P(Y ≥ ε'√(2(1 + γ) log log n)) − p_n − P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
= P(Y ≥ ε√(2(1 + γ) log log n) + 2/(log log n)^p) − p_n − P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
≤ P(R(n) ≥ ε√(2(1 + γ) log log n) D_n) − P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
≤ P(R(n) ≥ ε√(2(1 + δ(n)) n l(η_n) log log n)) − P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
≤ P(Q(n) ≥ ε√(2n log log n))
≤ P(R(n) ≥ ε√(2(1 − δ(n)) n l(η_n) log log n)) + P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
≤ P(R(n) ≥ ε√(2(1 − γ) log log n) D_n) + P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
≤ P(Y ≥ ε√(2(1 − γ) log log n) − 2/(log log n)^p) + p_n + P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n))
= P(Y ≥ ε''√(2(1 − γ) log log n)) + p_n + P(|S²(n) − l(η_n)| ≥ δ(n)l(η_n)),

where

ε' = ε + √2/(√(1 + γ)(log log n)^{p+1/2}) → ε,  ε'' = ε − √2/(√(1 − γ)(log log n)^{p+1/2}) → ε

as n → ∞. Hence, Proposition 2.2, (3.4) and Lemma 3.6 guarantee that

(1 + γ)^{−(b+1)} EY^{2(b+1)}/(2^{b+1}(b + 1))
≤ lim inf_{ε↘0} ε^{2(b+1)} ∑_{n=1}^∞ ((log log n)^b/(n log n)) P(Q(n) ≥ ε√(2n log log n))
≤ lim sup_{ε↘0} ε^{2(b+1)} ∑_{n=1}^∞ ((log log n)^b/(n log n)) P(Q(n) ≥ ε√(2n log log n))
≤ (1 − γ)^{−(b+1)} EY^{2(b+1)}/(2^{b+1}(b + 1)).

Letting γ → 0 completes the proof. □

5 Proof of Theorem 1.3

In this section, we first modify the definition in (3.1) as follows:

η̃_n = inf{s : s ≥ c + 1, l(s)/s² ≤ log log n/n}.  (5.1)

Then one easily has n l(η̃_n) ~ η̃_n² log log n. Moreover, we define, for each n and 1 ≤ i ≤ n,

X̃'_{ni} = X_i I{|X_i| ≤ η̃_n},  X̃*_{ni} = X̃'_{ni} − EX̃'_{ni},  S̃'_{ni} = ∑_{j=1}^i X̃'_{nj},  S̃*_{ni} = ∑_{j=1}^i X̃*_{nj},  D̃_n² = Var(S̃*_{nn}).

Secondly, we give two notations related to the truncated R(n) statistic. That is,

R̃(n) := max_{1≤k≤n} {∑_{j=1}^k (X̃'_{nj} − (1/n)∑_{i=1}^n X̃'_{ni})} − min_{1≤k≤n} {∑_{j=1}^k (X̃'_{nj} − (1/n)∑_{i=1}^n X̃'_{ni})},

R̃*(n) := max_{1≤k≤n} {∑_{j=1}^k (X̃*_{nj} − (1/n)∑_{i=1}^n X̃*_{ni})} − min_{1≤k≤n} {∑_{j=1}^k (X̃*_{nj} − (1/n)∑_{i=1}^n X̃*_{ni})}.

Then two lemmas which play key roles in the proof of Theorem 1.3 will be given, after which we will finish the proof of Theorem 1.3.

Lemma 5.1  Suppose {X, X_n, n ≥ 1} is a sequence of i.i.d. random variables which is in the domain of attraction of the normal law with EX = 0, and l(x) satisfies l(x) ≤ c₁ exp(c₂(log x)^β) for some c₁ > 0, c₂ > 0 and 0 < β < 1. Then, for any b ∈ R and 1/2 < p < 2, there exists a sequence of positive numbers {q'_n, n ≥ 1} such that, for any x > 0,

P(Y ≥ x + 1/(log log n)^p) − q'_n ≤ P(R̃*(n) ≥ xD̃_n) ≤ P(Y ≥ x − 1/(log log n)^p) + q'_n,

where q'_n ≥ 0 satisfies

∑_{n=1}^∞ ((log log n)^b/(n log n)) q'_n < ∞.

Proof  The essential difference between this lemma and Lemma 3.3 is that different truncation levels are imposed on the random variables {X_n, n ≥ 1} in the two lemmas. However, by checking the proof of Lemma 3.3 carefully, one can find that it is not sensitive to the powers of log log n. Hence, the proof can be finished by arguments similar to those used for Lemma 3.3. We omit the details here. □

Lemma 5.2  Suppose {X, X_n, n ≥ 1} is a sequence of i.i.d. random variables which is in the domain of attraction of the normal law with EX = 0, and let f(·) be a real function such that sup_{x∈R} |f(x)| ≤ C and sup_{x∈R} |f′(x)| ≤ C. Then for any b ∈ R, 0 < ε < 1/4 and l > m ≥ 1, we have

Var(∑_{n=m}^l ((log log n)^b/(n log n)) f(R̃*(n)/φ(n, ε))) = O((log log m)^{2b−1/2}/(ε log m)),
Var(∑_{n=m}^l ((log log n)^b/(n log n)) f(∑_{i=1}^n X̃'²_{ni}/((1 ± γ/2) n l(η̃_n)))) = O((log log m)^{2b}/log m),
Var(∑_{n=m}^l ((log log n)^b/(n log n)) f(S̃*_{nn}/(n√(γ l(η̃_n)/2)))) = O((log log m)^{2b}/(√m (log m)²)),  (5.2)
Var(∑_{n=m}^l ((log log n)^b/(n log n)) ∑_{i=1}^n I{|X_i| > η̃_n}) = O((log log m)^{2b+1}/log m),

where φ(n, ε) = ε√(2n l(η̃_n) log log n).

Proof  Firstly, we consider the first part of (5.2). For j > i, let R̃*(i + 1, j) denote the statistic defined as R̃*(j) but based on X̃*_{j,i+1}, …, X̃*_{jj} only. Since R̃*(i) is independent of R̃*(i + 1, j), it follows that

Cov(f(R̃*(i)/φ(i, ε)), f(R̃*(j)/φ(j, ε)))
= Cov(f(R̃*(i)/φ(i, ε)), f(R̃*(j)/φ(j, ε)) − f(R̃*(i + 1, j)/φ(j, ε)))
≤ CE|R̃*(j) − R̃*(i + 1, j)|/φ(j, ε)
≤ CE|∑_{t=1}^i X̃*_{jt}|/(ε√(2j l(η̃_j) log log j))
≤ C√(i l(η̃_j))/(ε√(j l(η̃_j) log log j))
= O((1/ε)√(i/(j log log j))).  (5.3)

Hence, for any 0 < ε < 1/4 and l > m ≥ 1, we have

Var(∑_{n=m}^l ((log log n)^b/(n log n)) f(R̃*(n)/φ(n, ε)))
≤ C ∑_{n=m}^l (log log n)^{2b}/(n²(log n)²) + 2 ∑_{j=m+1}^l ∑_{i=m}^{j−1} ((log log i)^b/(i log i))((log log j)^b/(j log j)) · O((1/ε)√(i/(j log log j)))
≤ C(log log m)^{2b}/(m(log m)²) + O(1) · (1/ε) ∑_{j=m+1}^l (log log j)^{2b−1/2}/(j(log j)²)
= O((log log m)^{2b−1/2}/(ε log m)).

Consider the second part of (5.2). Arguments similar to those used in (5.3) easily lead to

Cov(f(∑_{k=1}^i X̃'²_{ik}/((1 ± γ/2) i l(η̃_i))), f(∑_{k=1}^j X̃'²_{jk}/((1 ± γ/2) j l(η̃_j)))) ≤ C i/j.

It follows that

Var(∑_{n=m}^l ((log log n)^b/(n log n)) f(∑_{i=1}^n X̃'²_{ni}/((1 ± γ/2) n l(η̃_n))))
≤ C(log log m)^{2b}/(m(log m)²) + 2 ∑_{j=m+1}^l ∑_{i=m}^{j−1} ((log log i)^b/(i log i))((log log j)^b/(j log j)) · C i/j
= O((log log m)^{2b}/log m).

Consider the third part of (5.2). The same arguments also lead to

Cov(f(S̃*_{ii}/(i√(γ l(η̃_i)/2))), f(S̃*_{jj}/(j√(γ l(η̃_j)/2)))) = O(√i/j),

which implies that

Var(∑_{n=m}^l ((log log n)^b/(n log n)) f(S̃*_{nn}/(n√(γ l(η̃_n)/2))))
≤ C(log log m)^{2b}/(m(log m)²) + 2 ∑_{j=m+1}^l ∑_{i=m}^{j−1} ((log log i)^b/(i log i))((log log j)^b/(j log j)) · O(√i/j)
= O((log log m)^{2b}/(√m(log m)²)).

Finally, we turn to the fourth part of (5.2). By employing Lemma 3.5, one has

Var(∑_{n=m}^l ((log log n)^b/(n log n)) ∑_{i=1}^n I{|X_i| > η̃_n})
≤ C ∑_{n=m}^l ((log log n)^{2b}/(n²(log n)²)) · nP(|X| > η̃_n) + 2 ∑_{j=m+1}^l ∑_{i=m}^{j−1} ((log log i)^b/(i log i))((log log j)^b/(j log j)) Cov(∑_{k=1}^i I{|X_k| > η̃_i}, ∑_{k=1}^j I{|X_k| > η̃_j})
≤ o(1) · ∑_{n=m}^l (log log n)^{2b+1}/(n²(log n)²) + 2 ∑_{j=m+1}^l ∑_{i=m}^{j−1} ((log log i)^b/(i log i))((log log j)^b/(j log j)) · iP(|X| > η̃_j)
= O((log log m)^{2b+1}/(m(log m)²)) + C ∑_{j=m+1}^l (log log j)^{2b+1}/(j(log j)²)
= O((log log m)^{2b+1}/log m).

The proof is completed. □

Proof of Theorem 1.3  At the beginning of the proof, we first give an upper bound and a lower bound for the indicator function of the R/S statistic. For any x ≥ λ√(n log log n) with λ > 0, 0 < γ < 1/2 and large n, one has the following fact:

I{R(n)/S(n) ≥ x}
≤ I{R(n)/S(n) ≥ x, |S²(n) − l(η̃_n)| ≤ γ l(η̃_n)} + I{|S²(n) − l(η̃_n)| > γ l(η̃_n)}
≤ I{R(n)/√((1 − γ) l(η̃_n)) ≥ x} + I{∑_{i=1}^n X̃'²_{ni} ≥ (1 + γ/2) n l(η̃_n)} + I{∑_{i=1}^n X̃'²_{ni} ≤ (1 − γ/2) n l(η̃_n)} + I{∑_{i=1}^n I{|X_i| > η̃_n} ≥ 1} + I{|S̃'_{nn}| ≥ n√(γ l(η̃_n)/2)}
≤ I{R̃*(n)/√((1 − 2γ) l(η̃_n)) ≥ x} + 3 ∑_{i=1}^n I{|X_i| > η̃_n} + I{∑_{i=1}^n X̃'²_{ni} ≥ (1 + γ/2) n l(η̃_n)} + I{∑_{i=1}^n X̃'²_{ni} ≤ (1 − γ/2) n l(η̃_n)} + I{|S̃'_{nn}| ≥ n√(γ l(η̃_n)/2)},  (5.4)

since one easily has

|R̃(n) − R̃*(n)| ≤ 2nE|X| I{|X| > η̃_n} = o(√(n l(η̃_n) log log n)).

Also, for any x ≥ λ√(n log log n) with λ > 0, 0 < γ < 1/2 and large n, one has

I{R(n)/S(n) ≥ x} ≥ I{R̃*(n)/√((1 + 2γ) l(η̃_n)) ≥ x} − 3 ∑_{i=1}^n I{|X_i| > η̃_n} − I{∑_{i=1}^n X̃'²_{ni} ≥ (1 + γ/2) n l(η̃_n)} − I{∑_{i=1}^n X̃'²_{ni} ≤ (1 − γ/2) n l(η̃_n)} − I{|S̃'_{nn}| ≥ n√(γ l(η̃_n)/2)}.

Denote K(ε) = exp(exp(1/(ε²M))) for any 0 < ε < 1/4 and fixed M > 0. Let {f_i(·), i = 1,…,5} be real functions such that sup_x |f_i(x)| < ∞ and sup_x |f′_i(x)| < ∞ for i = 1,…,5, and

I{|x| ≥ √(1 − 2γ)} ≤ f₁(x) ≤ I{|x| ≥ 1 − 2γ},
I{|x| ≥ 1 + γ/2} ≤ f₂(x) ≤ I{|x| ≥ 1 + γ/4},
I{|x| ≤ 1 − γ/2} ≤ f₃(x) ≤ I{|x| ≤ 1 − γ/4},  (5.5)
I{|x| ≥ 1} ≤ f₄(x) ≤ I{|x| ≥ 1/2},
I{|x| ≥ √γ} ≤ f₅(x) ≤ I{|x| ≥ √γ/2}.

Define ε_k = 1/k, k ≥ M. Then it follows from Lemma 5.2 that

Var(∑_{n>K(ε_k)} ((log log n)^b/(n log n)) f₁(R̃*(n)/φ(n, ε_k))) = O(k(k²/M)^{2b−1/2}/exp(k²/M)),

which together with the Borel-Cantelli lemma easily yields

∑_{n>K(ε_k)} ((log log n)^b/(n log n)) (f₁(R̃*(n)/φ(n, ε_k)) − Ef₁(R̃*(n)/φ(n, ε_k))) → 0  a.s.  (5.6)

as k → ∞. Similar arguments also yield

∑_{n>K(ε_k)} ((log log n)^b/(n log n)) (f_i(∑_{j=1}^n X̃'²_{nj}/((1 ± γ/2) n l(η̃_n))) − Ef_i(∑_{j=1}^n X̃'²_{nj}/((1 ± γ/2) n l(η̃_n)))) → 0  a.s. (i = 2, 3),
∑_{n>K(ε_k)} ((log log n)^b/(n log n)) (f₅(S̃'_{nn}/(n√(γ l(η̃_n)/2))) − Ef₅(S̃'_{nn}/(n√(γ l(η̃_n)/2)))) → 0  a.s.,  (5.7)
∑_{n>K(ε_k)} ((log log n)^b/(n log n)) (∑_{i=1}^n I{|X_i| > η̃_n} − nP(|X| > η̃_n)) → 0  a.s.

as k → ∞. Denote ψ(n, ε) = ε√(2n log log n). Using inequality (5.4), one has

lim sup_{ε↘0} ε^{2(b+1)} ∑_{n>K(ε)} ((log log n)^b/(n log n)) I{Q(n) ≥ ε√(2n log log n)}
≤ lim sup_{k→∞} ε_{k−1}^{2(b+1)} ∑_{n>K(ε_{k−1})} ((log log n)^b/(n log n)) I{Q(n) ≥ ψ(n, ε_k)}
≤ lim sup_{k→∞} ε_{k−1}^{2(b+1)} ∑_{n>K(ε_{k−1})} ((log log n)^b/(n log n)) (I{R̃*(n)/√((1 − 2γ) l(η̃_n)) ≥ ψ(n, ε_k)} + I{∑_{i=1}^n X̃'²_{ni} ≥ (1 + γ/2) n l(η̃_n)} + I{∑_{i=1}^n X̃'²_{ni} ≤ (1 − γ/2) n l(η̃_n)} + 3 ∑_{i=1}^n I{|X_i| > η̃_n} + I{|S̃'_{nn}| ≥ n√(γ l(η̃_n)/2)})
:= III + IV + V + VI + VII.  (5.8)

We are going to treat the above terms, respectively. In view of (5.5), (5.6), Lemma 5.1 and Proposition 2.2, one has

III ≤ lim sup_{k→∞} ε_{k−1}^{2(b+1)} ∑_{n>K(ε_{k−1})} ((log log n)^b/(n log n)) f₁(R̃*(n)/φ(n, ε_k))
≤ lim sup_{k→∞} ε_{k−1}^{2(b+1)} ∑_{n>K(ε_{k−1})} ((log log n)^b/(n log n)) Ef₁(R̃*(n)/φ(n, ε_k))
≤ lim sup_{k→∞} ε_{k−1}^{2(b+1)} ∑_{n>K(ε_{k−1})} ((log log n)^b/(n log n)) P(R̃*(n) ≥ (1 − 2γ) φ(n, ε_k))
≤ lim sup_{k→∞} ε_{k−1}^{2(b+1)} ∑_{n=1}^∞ ((log log n)^b/(n log n)) P(Y ≥ (1 − 2γ) ε_k √(2 log log n) − 1/(log log n)^p)
≤ EY^{2(b+1)}/(2^{b+1}(b + 1)(1 − 2γ)^{2(b+1)})  a.s.,  (5.9)

where in the last step we used Proposition 2.2 together with

ε_k − 1/(√2(1 − 2γ)(log log n)^{p+1/2}) ~ ε_k  as n → ∞.

Applying (5.5), (5.7) and Bernstein's inequality, one has, for any ν' > 1,

0 ≤ IV ≤ lim sup_{k→∞} ε_{k−1}^{2(b+1)} ∑_{n>K(ε_{k−1})} ((log log n)^b/(n log n)) P(∑_{i=1}^n X̃'²_{ni} ≥ (1 + γ/4) n l(η̃_n))
≤ lim sup_{k→∞} ε_{k−1}^{2(b+1)} C ∑_{n=1}^∞ 1/(n log n (log log n)^{1+ν'−b})
= 0  a.s.  (5.10)

Similarly, one can prove

V = 0  a.s.  (5.11)

For the fourth part of (5.8), by arguments similar to those used in (5.9) and Lemma 3.4, we have

0 ≤ VI ≤ 3 lim sup_{k→∞} ε_{k−1}^{2(b+1)} ∑_{n=1}^∞ ((log log n)^b/log n) P(|X| > η̃_n) = 0  a.s.,

and the details are omitted here. As for the fifth part of (5.8), one can easily show that, for any fixed γ > 0,

0 ≤ VII ≤ lim sup_{k→∞} ε_{k−1}^{2(b+1)} ∑_{n=1}^∞ ((log log n)^b/(n log n)) P(|S̃'_{nn}| ≥ n√(γ l(η̃_n)/2))
≤ C lim sup_{k→∞} ε_{k−1}^{2(b+1)} ∑_{n=1}^∞ ((log log n)^b/(n log n)) · (n l(η̃_n))/(n² l(η̃_n))
= 0  a.s.  (5.12)

Hence, it follows from (5.8)-(5.12) that

$$\limsup_{\varepsilon\searrow 0}\varepsilon^{2(b+1)}\sum_{n>K(\varepsilon)}\frac{(\log\log n)^b}{n\log n}I\big\{Q(n)\ge\varepsilon\sqrt{2n\log\log n}\big\}\le\frac{EY^{2(b+1)}}{2^{b+1}(b+1)(1-2\gamma)^{2(b+1)}}\quad\text{a.s.}\qquad(5.13)$$

On the other hand,

$$
\begin{aligned}
&\limsup_{\varepsilon\searrow 0}\varepsilon^{2(b+1)}\sum_{n\le K(\varepsilon)}\frac{(\log\log n)^b}{n\log n}I\big\{Q(n)\ge\varepsilon\sqrt{2n\log\log n}\big\}\\
&\quad\le\limsup_{\varepsilon\searrow 0}\varepsilon^{2(b+1)}\sum_{n\le K(\varepsilon)}\frac{(\log\log n)^b}{n\log n}\le\frac{1}{M^{b+1}}.\qquad(5.14)
\end{aligned}
$$

By (5.13), (5.14) and the arbitrariness of $M$ and $\gamma$, one has

$$\limsup_{\varepsilon\searrow 0}\varepsilon^{2(b+1)}\sum_{n\ge 1}\frac{(\log\log n)^b}{n\log n}I\big\{Q(n)\ge\varepsilon\sqrt{2n\log\log n}\big\}\le\frac{EY^{2(b+1)}}{2^{b+1}(b+1)}\quad\text{a.s.}$$

Similarly, one has

$$\liminf_{\varepsilon\searrow 0}\varepsilon^{2(b+1)}\sum_{n\ge 1}\frac{(\log\log n)^b}{n\log n}I\big\{Q(n)\ge\varepsilon\sqrt{2n\log\log n}\big\}\ge\frac{EY^{2(b+1)}}{2^{b+1}(b+1)}\quad\text{a.s.}$$

The proof is completed now. □
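As a purely illustrative aside, not part of the original derivation, the quantities appearing in the theorem are straightforward to compute numerically. The following Python sketch (function names are ours) evaluates the rescaled range statistic $Q(n)=R(n)/S(n)$ from the definitions in Section 1, together with the LIL threshold $\varepsilon\sqrt{2n\log\log n}$, using the paper's convention $\log x=\ln(x\vee e)$:

```python
import math


def rs_statistic(x):
    """Rescaled range statistic Q(n) = R(n)/S(n) for a sample x_1, ..., x_n.

    R(n) = max_{1<=k<=n} sum_{j<=k}(x_j - mean) - min_{1<=k<=n} sum_{j<=k}(x_j - mean),
    S(n)^2 = sum_{j<=n} (x_j - mean)^2 / n.
    """
    n = len(x)
    mean = sum(x) / n
    cum, partials = 0.0, []
    for v in x:
        cum += v - mean          # centered partial sums
        partials.append(cum)
    r = max(partials) - min(partials)
    s = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    return r / s


def _log(x):
    # the paper's convention: log x = ln(x v e), so log log n >= 1 for all n
    return math.log(max(x, math.e))


def lil_normalization(n, eps):
    """The threshold eps * sqrt(2 n log log n) from the theorem."""
    return eps * math.sqrt(2.0 * n * _log(_log(n)))
```

For example, for the sample `[1, 2, 3, 4]` the centered partial sums are -1.5, -2, -1.5, 0, so $R(4)=2$, $S^2(4)=1.25$ and $Q(4)=2/\sqrt{1.25}\approx 1.7889$.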

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Acknowledgements

Supported by the National Natural Science Foundation of China (No. J1210038, No. 11171303 and No. 61272300) and the Fundamental Research Funds for the Central Universities.

Received: 9 May 2013 Accepted: 7 March 2014 Published: 31 Mar 2014

References

1. Hsu, PL, Robbins, H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33, 25-31 (1947)

2. Erdős, P: On a theorem of Hsu and Robbins. Ann. Math. Stat. 20, 286-291 (1949)

3. Baum, LE, Katz, M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 120, 108-123 (1965)

4. Heyde, CC: A supplement to the strong law of large numbers. J. Appl. Probab. 12, 173-175 (1975)

5. Chen, R: A remark on the tail probability of a distribution. J. Multivar. Anal. 8, 328-333 (1978)

6. Spataru, A: Precise asymptotics in Spitzer's law of large numbers. J. Theor. Probab. 12, 811-819 (1999)

7. Gut, A, Spataru, A: Precise asymptotics in the law of the iterated logarithm. Ann. Probab. 28,1870-1883 (2000)

8. Zhang, LX: Precise rates in the law of the iterated logarithm. (Unpublished manuscript). (2006). Available at http://arxiv.org/abs/math.PR/0610519

9. Zhang, LX: Precise asymptotics in Chung's law of the iterated logarithm. Acta Math. Sin. Engl. Ser. 24(4), 631-646 (2008)

10. Zhang, LX, Lin, ZY: Precise rates in the law of the logarithm under minimal conditions. Chinese J. Appl. Probab. Statist. 22(3), 311-320 (2006)

11. Zhang, LX: On the rates of the other law of the logarithm. (Unpublished manuscript). (2006). Available at http://arxiv.org/abs/math.PR/0610521

12. Hurst, HE: Long-term storage capacity of reservoirs. Trans. Am. Soc. Civ. Eng. 116, 770-808 (1951)

13. Feller, W: The asymptotic distribution of the range of sums of independent random variables. Ann. Math. Stat. 22, 427-432 (1951)

14. Mandelbrot, BB: Limit theorems on the self-normalized range for weakly and strongly dependent processes. Z. Wahrscheinlichkeitstheor. Verw. Geb. 31, 271-285 (1975)

15. Lin, ZY: The law of the iterated logarithm for the rescaled R/S statistics without the second moment. Comput. Math. Appl. 47(8-9), 1389-1396 (2004)

16. Lin, ZY: The law of iterated logarithm for R/S statistics. Acta Math. Sci. 25(2), 326-330 (2005)

17. Lin, ZY: Strong laws of R/S statistics with a long-range memory sample. Stat. Sin. 15(3), 819-829 (2005)

18. Lin, ZY, Lee, SC: The law of iterated logarithm of rescaled range statistics for AR(1) model. Acta Math. Sin. Engl. Ser. 22(2), 535-544 (2006)

19. Wu, HM, Wen, JW: Precise rates in the law of the iterated logarithm for R/S statistics. Appl. Math. J. Chin. Univ. Ser. B 21(4), 461-466 (2006)

20. Kennedy, DP: The distribution of the maximum Brownian excursion. J. Appl. Probab. 13(2), 371-376 (1976)

21. Pang, TX, Zhang, LX, Wang, JF: Precise asymptotics in the self-normalized law of the iterated logarithm. J. Math. Anal. Appl. 340(2), 1249-1262 (2008)

22. Sakhanenko, AI: On estimates of the rate of convergence in the invariance principle. In: Borovkov, AA (ed.) Advances in Probab. Theory: Limit Theorems and Related Problems, pp. 124-135. Springer, New York (1984)

23. Sakhanenko, AI: Convergence rate in the invariance principle for non-identically distributed variables with exponential moments. In: Borovkov, AA (ed.) Advances in Probab. Theory: Limit Theorems and Related Problems, pp. 2-73. Springer, New York (1985)

24. Csörgő, M, Révész, P: Strong Approximations in Probability and Statistics. Academic Press, New York (1981)

25. Csörgő, M, Szyszkowicz, B, Wang, QY: Donsker's theorem for self-normalized partial sums processes. Ann. Probab. 31, 1228-1240 (2003)

26. Lin, ZY, Bai, ZD: Probability Inequalities. Science Press, Beijing; Springer, New York (2010)

10.1186/1029-242X-2014-137

Cite this article as: Pang et al.: Precise asymptotics in the law of the iterated logarithm for R/S statistic. Journal of Inequalities and Applications 2014, 2014:137
