Hindawi Publishing Corporation, Journal of Probability and Statistics, Volume 2011, Article ID 181409, 13 pages, doi:10.1155/2011/181409

Research Article

A Limit Theorem for Random Products of Trimmed Sums of i.i.d. Random Variables

Fa-mei Zheng

School of Mathematical Science, Huaiyin Normal University, Huaian 223300, China Correspondence should be addressed to Fa-mei Zheng, 16032@hytc.edu.cn Received 13 May 2011; Revised 25 July 2011; Accepted 11 August 2011 Academic Editor: Man Lai Tang

Copyright © 2011 Fa-mei Zheng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Let $\{X, X_i;\ i \ge 1\}$ be a sequence of independent and identically distributed positive random variables with a continuous distribution function $F$, and suppose that $F$ has a medium tail. Denote $S_n = \sum_{i=1}^n X_i$, $S_n(a) = \sum_{i=1}^n X_i I(M_n - a < X_i \le M_n)$ and $V_n^2 = \sum_{i=1}^n (X_i - \bar{X})^2$, where $M_n = \max_{1 \le i \le n} X_i$, $\bar{X} = (1/n)\sum_{i=1}^n X_i$, and $a > 0$ is a fixed constant. Under some suitable conditions, we show that

$$\Big(\prod_{k=1}^{[nt]} \frac{T_k(a)}{\mu k}\Big)^{\mu/V_n} \xrightarrow{d} \exp\Big\{\int_0^t \frac{W(x)}{x}\,dx\Big\} \quad \text{in } D[0,1], \text{ as } n \to \infty,$$

where $T_k(a) = S_k - S_k(a)$ is the trimmed sum, $\mu = EX$, and $\{W(t);\ t \ge 0\}$ is a standard Wiener process.

1. Introduction

Let $\{X_n;\ n \ge 1\}$ be a sequence of random variables and define the partial sum $S_n = \sum_{i=1}^n X_i$ and $V_n^2 = \sum_{i=1}^n (X_i - \bar{X})^2$ for $n \ge 1$, where $\bar{X} = (1/n)\sum_{i=1}^n X_i$. In past years, the asymptotic behavior of products of various random variables has been widely studied. Arnold and Villasenor [1] considered sums of records and obtained the following form of the central limit theorem (CLT) for independent and identically distributed (i.i.d.) exponential random variables with mean equal to one:

$$\frac{\sum_{k=1}^n \log S_k - n\log n + n}{\sqrt{2n}} \xrightarrow{d} N, \quad \text{as } n \to \infty. \quad (1.1)$$

Here and in the sequel, N is a standard normal random variable, and -4 (4, 4) stands for convergence in distribution (in probability, almost surely). Observe that, via the Stirling formula, the relation (1.1) can be equivalently stated as

$$\Big(\prod_{k=1}^n \frac{S_k}{k}\Big)^{1/\sqrt{n}} \xrightarrow{d} e^{\sqrt{2}N}, \quad \text{as } n \to \infty. \quad (1.2)$$

In particular, Rempała and Wesołowski [2] removed the condition that the distribution is exponential and showed that the asymptotic behavior of products of partial sums holds for any sequence of i.i.d. positive random variables. Namely, they proved the following theorem.

Theorem A. Let $\{X_n;\ n \ge 1\}$ be a sequence of i.i.d. positive square integrable random variables with $EX_1 = \mu > 0$, $\mathrm{Var}\,X_1 = \sigma^2 > 0$, and coefficient of variation $\gamma = \sigma/\mu$. Then one has

$$\Big(\frac{\prod_{k=1}^n S_k}{n!\,\mu^n}\Big)^{1/(\gamma\sqrt{n})} \xrightarrow{d} e^{\sqrt{2}N}, \quad \text{as } n \to \infty. \quad (1.3)$$
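Theorem A lends itself to a quick Monte Carlo check. The sketch below is our illustration only (the Exp(1) choice and all sample sizes are ours, not from the paper); it computes the logarithm of the normalized product, $(1/(\gamma\sqrt{n}))\sum_{k=1}^n \log(S_k/(k\mu))$, whose limit law is $N(0,2)$, matching the limit $e^{\sqrt{2}N}$ for the product itself.

```python
import numpy as np

def product_clt_statistic(x, mu, sigma):
    """log of (prod_k S_k/(k*mu))^(1/(gamma*sqrt(n))) for one sample x."""
    n = len(x)
    s = np.cumsum(x)                        # partial sums S_1, ..., S_n
    k = np.arange(1, n + 1)
    gamma = sigma / mu                      # coefficient of variation
    return np.sum(np.log(s / (k * mu))) / (gamma * np.sqrt(n))

rng = np.random.default_rng(0)
n, reps = 2000, 2000
# Exp(1): mu = sigma = 1, the Arnold-Villasenor setting of (1.1)-(1.2)
stats = np.array([product_clt_statistic(rng.exponential(size=n), 1.0, 1.0)
                  for _ in range(reps)])
print(stats.mean(), stats.var())            # limit law is N(0, 2)
```

With these (arbitrary) choices the empirical variance should sit close to 2, the variance computed in Remark 2.3.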

Recently, the above result was extended by Qi [3], who showed that whenever $\{X_n;\ n \ge 1\}$ is in the domain of attraction of a stable law $L$ with index $\alpha \in (1,2]$, there exists a numerical sequence $A_n$ (for $\alpha = 2$, it can be taken as $\sigma\sqrt{n}$) such that

$$\Big(\frac{\prod_{k=1}^n S_k}{n!\,\mu^n}\Big)^{\mu/A_n} \xrightarrow{d} e^{(\Gamma(\alpha+1))^{1/\alpha} L}, \quad (1.4)$$

as $n \to \infty$, where $\Gamma(\alpha+1) = \int_0^\infty x^\alpha e^{-x}\,dx$. Furthermore, Zhang and Huang [4] extended Theorem A to the invariance principle.

In this paper, we aim to study the weak invariance principle for self-normalized products of trimmed sums of i.i.d. sequences. Before stating our main results, we introduce some necessary notation. Let $\{X, X_n;\ n \ge 1\}$ be a sequence of i.i.d. random variables with a continuous distribution function $F$. Assume that the right extremity of $F$ satisfies

$$x^* = \sup\{x : F(x) < 1\} = \infty, \quad (1.5)$$

and the limiting tail quotient

$$\lim_{x \to \infty} \frac{\bar{F}(x+a)}{\bar{F}(x)} \quad (1.6)$$

exists for each $a > 0$, where $\bar{F}(x) = 1 - F(x)$. Then the above limit equals $e^{-ca}$ for some $c \in [0, \infty]$, and $F$ (or $X$) is said to have a thick tail if $c = 0$, a medium tail if $0 < c < \infty$, and a thin tail if $c = \infty$. Denote $M_n = \max_{1 \le i \le n} X_i$. For a fixed constant $a > 0$, we say that $X_j$ is a near-maximum if and only if $X_j \in (M_n - a, M_n]$, and the number of near-maxima is

$$K_n(a) := \mathrm{Card}\{j \le n : X_j \in (M_n - a, M_n]\}. \quad (1.7)$$

These concepts were first introduced by Pakes and Steutel [5], and their limit properties have been widely studied by Pakes and Steutel [5], Pakes and Li [6], Li [7], Pakes [8], and Hu and Su [9]. Now, set

$$S_n(a) := \sum_{i=1}^n X_i I\{M_n - a < X_i \le M_n\}, \qquad T_n(a) := S_n - S_n(a), \quad (1.8)$$

where

$$I\{A\}(\omega) = \begin{cases} 1, & \omega \in A, \\ 0, & \omega \notin A, \end{cases} \quad (1.9)$$

which are the sum of near-maxima and the trimmed sum, respectively. From Remark 1 of Hu and Su [9], we have that if $F$ has a medium tail and $EX \ne 0$, then $T_n(a)/n \xrightarrow{a.s.} EX$, which implies that with probability one $\mathrm{Card}\{k \ge 1 : T_k(a) = 0\}$ is finite. Thus, we can redefine $T_k(a) = 1$ whenever $T_k(a) = 0$.
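The quantities $K_n(a)$, $S_n(a)$, and $T_n(a)$ are straightforward to compute from a sample. The following sketch is our illustration (the data values are hypothetical); it also makes the elementary bound $(M_n - a)K_n(a) \le S_n(a) \le M_n K_n(a)$, used later in (3.23), easy to verify.

```python
import numpy as np

def near_max_quantities(x, a):
    """K_n(a), S_n(a), T_n(a) for a sample x and window width a > 0."""
    x = np.asarray(x, dtype=float)
    m = x.max()                              # M_n
    near = (x > m - a) & (x <= m)            # observations in (M_n - a, M_n]
    K = int(near.sum())                      # K_n(a): number of near-maxima
    S_a = x[near].sum()                      # S_n(a): sum of near-maxima
    T = x.sum() - S_a                        # T_n(a): trimmed sum
    return K, S_a, T

K, S_a, T = near_max_quantities([1.0, 4.2, 3.9, 2.0, 4.5], a=1.0)
print(K, S_a, T)                             # K=3, S_n(a)=12.6, T_n(a)=3.0 (up to rounding)
```

Here $M_5 = 4.5$, so the window $(3.5, 4.5]$ captures the three values $4.2$, $3.9$, $4.5$.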

2. Main Result

Now we are ready to state our main results.

Theorem 2.1. Let $\{X, X_n;\ n \ge 1\}$ be a sequence of positive i.i.d. random variables with a continuous distribution function $F$, $EX = \mu$, and $\mathrm{Var}\,X = \sigma^2$. Assume that $F$ has a medium tail. Then one has

$$\Big(\prod_{k=1}^{[nt]} \frac{T_k(a)}{\mu k}\Big)^{\mu/V_n} \xrightarrow{d} \exp\Big\{\int_0^t \frac{W(x)}{x}\,dx\Big\} \quad \text{in } D[0,1], \text{ as } n \to \infty, \quad (2.1)$$

where $\{W(t);\ t \ge 0\}$ is a standard Wiener process.

In particular, when we take t = 1, it yields the following corollary.

Corollary 2.2. Under the assumptions of Theorem 2.1, one has

$$\Big(\prod_{k=1}^n \frac{T_k(a)}{\mu k}\Big)^{\mu/V_n} \xrightarrow{d} e^{\sqrt{2}N}, \quad (2.2)$$

as $n \to \infty$, where $N$ is a standard normal random variable.

Remark 2.3. Since $\int_0^1 (W(x)/x)\,dx$ is a normal random variable with

$$E\int_0^1 \frac{W(x)}{x}\,dx = \int_0^1 \frac{EW(x)}{x}\,dx = 0, \qquad E\Big(\int_0^1 \frac{W(x)}{x}\,dx\Big)^2 = \int_0^1\!\!\int_0^1 \frac{EW(x)W(y)}{xy}\,dx\,dy = \int_0^1\!\!\int_0^1 \frac{\min(x,y)}{xy}\,dx\,dy = 2,$$

Corollary 2.2 follows from Theorem 2.1 immediately.
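The value 2 in Remark 2.3 can also be checked by simulation: discretize $W$ on a grid, form the Riemann sum for $\int_0^1 (W(x)/x)\,dx$, and inspect the empirical mean and variance. This is only a sketch; the grid size and replication count below are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
N, reps = 1000, 3000
dt = 1.0 / N
t = dt * np.arange(1, N + 1)                       # grid points, avoiding x = 0
# Brownian paths: W(t_k) = sqrt(dt) * cumulative sums of i.i.d. N(0,1) increments
W = np.sqrt(dt) * np.cumsum(rng.standard_normal((reps, N)), axis=1)
I = (W / t).sum(axis=1) * dt                       # Riemann sum for int_0^1 W(x)/x dx
print(I.mean(), I.var())                           # mean near 0, variance near 2
```

The integrand behaves like $x^{-1/2}$ near the origin, so the truncated Riemann sum converges and its variance approaches 2, consistent with the double-integral computation above.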

3. Proof of Theorem 2.1

In this section, we give the proof of Theorem 2.1. In the sequel, let $C$ denote a positive constant which may take different values at different appearances, and let $[x]$ denote the largest integer not exceeding $x$.

Note that, via Remark 1 of Hu and Su [9], we have $C_k := T_k(a)/(\mu k) \xrightarrow{a.s.} 1$. It follows that, for any $\delta > 0$, there exists a positive integer $R$ such that

$$P\Big(\sup_{k \ge R} |C_k - 1| > \delta\Big) < \delta. \quad (3.1)$$

Consequently, there exist two sequences $\delta_m \downarrow 0$ (with $\delta_1 = 1/2$) and $R_m' \uparrow \infty$ such that

$$P\Big(\sup_{k \ge R_m'} |C_k - 1| > \delta_m\Big) < \delta_m. \quad (3.2)$$

The strong law of large numbers also implies that there exists a sequence $R_m'' \uparrow \infty$ such that

$$\sup_{k \ge R_m''} |C_k - 1| \overset{a.s.}{\le} \frac{1}{m}. \quad (3.3)$$

Here and in the sequel, we take $R_m = \max\{R_m', R_m''\}$, which yields

$$P\Big(\sup_{k \ge R_m} |C_k - 1| > \delta_m\Big) < \delta_m, \qquad \sup_{k \ge R_m} |C_k - 1| \overset{a.s.}{\le} \frac{1}{m}. \quad (3.4)$$

Then it leads to

$$P\Big(\frac{\mu}{V_n}\sum_{k=1}^{[nt]} \log C_k \le x\Big) = P\Big(\frac{\mu}{V_n}\sum_{k=1}^{[nt]} \log C_k \le x,\ \sup_{k \ge R_m}|C_k - 1| > \delta_m\Big) + P\Big(\frac{\mu}{V_n}\sum_{k=1}^{[nt]} \log C_k \le x,\ \sup_{k \ge R_m}|C_k - 1| \le \delta_m\Big) =: A_{m,n} + B_{m,n}, \quad (3.5)$$

and $A_{m,n} \le \delta_m$. By using the expansion of the logarithm $\log(1+x) = x - x^2/(2(1+\theta x)^2)$, where $\theta \in (0,1)$ depends on $x$ with $|x| < 1$, we have that

$$\begin{aligned}
B_{m,n} &= P\Big(\frac{\mu}{V_n}\sum_{k=1}^{R_m \wedge ([nt]-1)} \log C_k + \frac{\mu}{V_n}\sum_{k=(R_m \wedge ([nt]-1))+1}^{[nt]} \log\big(1+(C_k-1)\big) \le x,\ \sup_{k \ge R_m}|C_k-1| \le \delta_m\Big) \\
&= P\Big(\frac{\mu}{V_n}\sum_{k=1}^{R_m \wedge ([nt]-1)} \log C_k + \frac{\mu}{V_n}\sum_{k=(R_m \wedge ([nt]-1))+1}^{[nt]} (C_k-1) - \frac{\mu}{V_n}\sum_{k=(R_m \wedge ([nt]-1))+1}^{[nt]} \frac{(C_k-1)^2}{2(1+\theta_k(C_k-1))^2} \le x,\ \sup_{k \ge R_m}|C_k-1| \le \delta_m\Big) \\
&= P\Big(\frac{\mu}{V_n}\sum_{k=1}^{R_m \wedge ([nt]-1)} \log C_k + \frac{\mu}{V_n}\sum_{k=(R_m \wedge ([nt]-1))+1}^{[nt]} (C_k-1) - \frac{\mu}{V_n}\sum_{k=(R_m \wedge ([nt]-1))+1}^{[nt]} \frac{(C_k-1)^2}{2(1+\theta_k(C_k-1))^2}\, I\Big(\sup_{k \ge R_m}|C_k-1| \le \delta_m\Big) \le x\Big) \\
&\quad - P\Big(\frac{\mu}{V_n}\sum_{k=1}^{R_m \wedge ([nt]-1)} \log C_k + \frac{\mu}{V_n}\sum_{k=(R_m \wedge ([nt]-1))+1}^{[nt]} (C_k-1) \le x,\ \sup_{k \ge R_m}|C_k-1| > \delta_m\Big) \\
&=: D_{m,n} - E_{m,n}, \quad (3.6)
\end{aligned}$$

where $\theta_k$ ($k = 1, \ldots, [nt]$) are $(0,1)$-valued and $E_{m,n} \le \delta_m$. Also, we can rewrite $D_{m,n}$ as

$$D_{m,n} = P\Big(\frac{\mu}{V_n}\sum_{k=1}^{R_m \wedge ([nt]-1)} (\log C_k - C_k + 1) + \frac{\mu}{V_n}\sum_{k=1}^{[nt]} (C_k-1) - \frac{\mu}{V_n}\sum_{k=(R_m \wedge ([nt]-1))+1}^{[nt]} \frac{(C_k-1)^2}{2(1+\theta_k(C_k-1))^2}\, I\Big(\sup_{k \ge R_m}|C_k-1| \le \delta_m\Big) \le x\Big). \quad (3.7)$$

Observe that, for any fixed $m$, it is easy to obtain

$$\frac{\mu}{V_n}\sum_{k=1}^{R_m \wedge ([nt]-1)} (\log C_k - C_k + 1) \xrightarrow{P} 0, \quad \text{as } n \to \infty, \quad (3.8)$$

by noting that $V_n^2 \xrightarrow{a.s.} \infty$.

If $R_m \ge [nt] - 1$, then the last sum in (3.7) reduces to the single term $k = [nt]$, and

$$\frac{\mu}{V_n} \cdot \frac{(C_{[nt]}-1)^2}{2(1+\theta_{[nt]}(C_{[nt]}-1))^2}\, I\Big(\sup_{k \ge R_m}|C_k-1| \le \delta_m\Big) \le \frac{C}{V_n} \xrightarrow{P} 0, \quad (3.9)$$

as $n \to \infty$. If $R_m < [nt] - 1$, then $R_m + 1 \le [nt]$. Denote

$$F_{m,n} := \Big(\frac{\mu}{V_n}\sum_{k=R_m+1}^{[nt]} \frac{(C_k-1)^2}{2(1+\theta_k(C_k-1))^2}\Big)\, I\Big(\sup_{k \ge R_m}|C_k-1| \le \delta_m\Big), \quad (3.10)$$

and, by observing that $x^2/(1+\theta x)^2 \le 4x^2$ for $|x| \le 1/2$, we can obtain

$$F_{m,n} \le \frac{C\mu}{V_n}\sum_{k=R_m+1}^{[nt]} (C_k-1)^2 \le \frac{C}{V_n}\sum_{k=R_m+1}^{[nt]} \Big(\frac{S_k - \mu k}{\mu k}\Big)^2 + \frac{C}{V_n}\sum_{k=R_m+1}^{[nt]} \Big(\frac{S_k(a)}{\mu k}\Big)^2 =: H_{m,n} + L_{m,n}. \quad (3.11)$$

For any $\varepsilon > 0$, by Markov's inequality, we have

$$P\Big(\frac{1}{\sqrt{n}}\sum_{k=R_m+1}^{[nt]} \Big(\frac{S_k - \mu k}{\mu k}\Big)^2 > \varepsilon\Big) \le \frac{1}{\varepsilon\sqrt{n}}\, E\Big(\sum_{k=R_m+1}^{[nt]} \Big(\frac{S_k - \mu k}{\mu k}\Big)^2\Big) = \frac{1}{\varepsilon\sqrt{n}}\sum_{k=R_m+1}^{[nt]} \frac{\mathrm{Var}(S_k)}{\mu^2 k^2} = \frac{\sigma^2}{\varepsilon\mu^2\sqrt{n}}\sum_{k=R_m+1}^{[nt]} \frac{1}{k} \to 0. \quad (3.12)$$

Then $H_{m,n} \xrightarrow{P} 0$ follows. To obtain this result, we need the following fact:

$$\frac{V_n^2}{\sum_{i=1}^n (X_i - \mu)^2} = \frac{\sum_{i=1}^n (X_i - \bar{X})^2}{\sum_{i=1}^n (X_i - \mu)^2} \xrightarrow{a.s.} 1. \quad (3.13)$$

Indeed,

$$\frac{\sum_{i=1}^n (X_i - \bar{X})^2}{\sum_{i=1}^n (X_i - \mu)^2} = \frac{\sum_{i=1}^n (X_i - \mu)^2 - n(\mu - \bar{X})^2}{\sum_{i=1}^n (X_i - \mu)^2} = 1 - \frac{(\mu - \bar{X})^2}{\big(\sum_{i=1}^n (X_i - \mu)^2\big)/n}, \quad (3.14)$$

where, by the strong law of large numbers, $\frac{1}{n}\sum_{i=1}^n (X_i - \mu)^2 \xrightarrow{a.s.} \sigma^2$ and $\bar{X} \xrightarrow{a.s.} \mu$, as $n \to \infty$.

Now choose a constant $N > 0$; since $x^* = \infty$, we have $P(|X - \mu| > N) > 0$. Hence, in view of the strong law of large numbers, we have for $n$ large enough

$$\frac{(\mu - \bar{X})^2}{\big(\sum_{i=1}^n (X_i - \mu)^2\big)/n} \le \frac{(\mu - \bar{X})^2}{\big(\sum_{i=1}^n (X_i - \mu)^2 I(|X_i - \mu| > N)\big)/n} \le \frac{(\mu - \bar{X})^2}{N^2\big(\sum_{i=1}^n I(|X_i - \mu| > N)\big)/n} = \frac{o(1)}{N^2\,(P(|X - \mu| > N) + o(1))} \xrightarrow{a.s.} 0, \quad (3.15)$$

which together with (3.14) implies that

$$\frac{\sum_{i=1}^n (X_i - \bar{X})^2}{\sum_{i=1}^n (X_i - \mu)^2} = \frac{V_n^2}{\sum_{i=1}^n (X_i - \mu)^2} \xrightarrow{a.s.} 1, \quad (3.16)$$

as $n \to \infty$. Furthermore, in view of the strong law of large numbers again, we obtain

$$\frac{V_n^2}{n} = \frac{V_n^2}{\sum_{i=1}^n (X_i - \mu)^2} \cdot \frac{\sum_{i=1}^n (X_i - \mu)^2}{n} \xrightarrow{a.s.} \sigma^2, \quad (3.17)$$

as $n \to \infty$, where $\sigma^2 = \mathrm{Var}(X) > 0$. For $L_{m,n}$, by noting that $S_n(a)/S_n \xrightarrow{a.s.} 0$ as $n \to \infty$ (see Hu and Su [9]), we can easily get

$$\frac{S_n(a)}{n} = \frac{S_n(a)}{S_n} \cdot \frac{S_n}{n} \xrightarrow{a.s.} 0, \quad (3.18)$$

as $n \to \infty$. Then, for any $0 < \delta' < 1$, there exists a positive integer $R'$ such that

$$P\Big(\sup_{k \ge R'} \frac{S_k(a)}{k} > \delta'\Big) < \delta'. \quad (3.19)$$
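The normalizations used in (3.16)–(3.18) are easy to confirm numerically. Below is a small sketch of our own, under the illustrative assumption of Exp(1) data (so $\mu = \sigma^2 = 1$) and window width $a = 1$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, a, mu = 200_000, 1.0, 1.0
x = rng.exponential(size=n)                        # Exp(1): mu = 1, sigma^2 = 1
Vn2 = ((x - x.mean()) ** 2).sum()                  # V_n^2, centred at the sample mean
Sn2 = ((x - mu) ** 2).sum()                        # centred at the true mean
S_a = x[x > x.max() - a].sum()                     # S_n(a), the sum of near-maxima
print(Vn2 / Sn2, Vn2 / n, S_a / n)                 # near 1, near sigma^2 = 1, near 0
```

The first ratio illustrates (3.16), the second (3.17), and the third the a.s. negligibility of $S_n(a)/n$ in (3.18).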

Consequently, coupled with (3.18), we have

$$P(L_{m,n} > \varepsilon) \le P\Big(\frac{C}{V_n}\sum_{k=R_m+1}^{[nt]} \Big(\frac{S_k(a)}{\mu k}\Big)^2 > \varepsilon,\ \sup_{k \ge R'} \frac{S_k(a)}{k} \le \delta'\Big) + P\Big(\sup_{k \ge R'} \frac{S_k(a)}{k} > \delta'\Big) \le P\Big(\frac{C\delta'}{V_n}\sum_{k=R_m+1}^{[nt]} \frac{S_k(a)}{k} > \varepsilon\Big) + \delta'. \quad (3.20)$$

Clearly, to show $L_{m,n} \xrightarrow{P} 0$ as $n \to \infty$, it is sufficient to prove

$$\frac{1}{V_n}\sum_{k=1}^{[nt]} \frac{S_k(a)}{k} \xrightarrow{P} 0. \quad (3.21)$$

Indeed, combined with (3.17), we only need to show

$$\frac{1}{\sqrt{n}}\sum_{k=1}^{n} \frac{S_k(a)}{k} \xrightarrow{P} 0. \quad (3.22)$$
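Two growth facts drive the argument from (3.22) to (3.28): $M_n/\log n \xrightarrow{a.s.} 1/c$ for a medium tail, and $EK_n(a)$ stays bounded (Hu and Su [9]). A numerical sketch of ours, assuming Exp($\lambda$) data, for which the tail quotient is $e^{-\lambda a}$, that is, $c = \lambda$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
ratios = {}
for lam in (1.0, 2.0):                             # Exp(lam) has medium tail with c = lam
    x = rng.exponential(scale=1.0 / lam, size=n)
    ratios[lam] = x.max() / np.log(n)              # M_n / log n, tends a.s. to 1/lam
# K_n(a) for Exp(1), a = 1: its mean stays O(1) as n grows
ks = []
for _ in range(200):
    y = rng.exponential(size=5000)
    ks.append(int((y > y.max() - 1.0).sum()))
print(ratios, float(np.mean(ks)))                  # ratios near 1/lam; mean K bounded
```

In particular $M_k/\sqrt{k} \to 0$, which is exactly what makes the sup in (3.26)–(3.27) manageable.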

As a matter of fact, by the definitions of $S_n(a)$ and $K_n(a)$, we have

$$(M_n - a)K_n(a) \le S_n(a) \le M_n K_n(a). \quad (3.23)$$

In view of the fact that $M_n \uparrow \infty$ a.s., we can get from Hu and Su [9] that

$$S_n(a) \overset{a.s.}{\sim} M_n K_n(a), \quad (3.24)$$

and thus it suffices to prove

$$\frac{1}{\sqrt{n}}\sum_{k=1}^{n} \frac{M_k K_k(a)}{k} \xrightarrow{P} 0. \quad (3.25)$$

Actually, for all $\varepsilon, \delta > 0$ and $N_1$ large enough, we have that

$$P\Big(\frac{1}{\sqrt{n}}\sum_{k=1}^{n} \frac{K_k(a) M_k}{k} > \varepsilon\Big) = P\Big(\frac{1}{\sqrt{n}}\sum_{k=1}^{n} \frac{K_k(a)}{\sqrt{k}} \cdot \frac{M_k}{\sqrt{k}} > \varepsilon\Big) \le P\Big(\frac{1}{\sqrt{n}}\sum_{k=1}^{n} \frac{K_k(a)}{\sqrt{k}} \cdot \delta > \varepsilon,\ \sup_{k \ge N_1} \frac{M_k}{\sqrt{k}} \le \delta\Big) + P\Big(\sup_{k \ge N_1} \frac{M_k}{\sqrt{k}} > \delta\Big). \quad (3.26)$$

Observe that if $F$ has a medium tail, then $M_n/\sqrt{n} = (M_n/\log n)(\log n/\sqrt{n}) \xrightarrow{a.s.} 0$, by noting that $M_n/\log n \xrightarrow{a.s.} 1/c$ [9], where $c$ is the limit defined in Section 1. Thus it follows that

$$P\Big(\sup_{k \ge N_1} \frac{M_k}{\sqrt{k}} > \delta\Big) \to 0, \quad (3.27)$$

as $N_1 \to \infty$. Further, by Markov's inequality and the boundedness of $EK_k(a)$ from Hu and Su [9], we have

$$P\Big(\frac{1}{\sqrt{n}}\sum_{k=1}^{n} \frac{K_k(a)}{\sqrt{k}} \cdot \delta > \varepsilon,\ \sup_{k \ge N_1} \frac{M_k}{\sqrt{k}} \le \delta\Big) \le P\Big(\frac{\delta}{\sqrt{n}}\sum_{k=1}^{n} \frac{K_k(a)}{\sqrt{k}} > \varepsilon\Big) \le \frac{C\delta}{\varepsilon\sqrt{n}}\sum_{k=1}^{n} \frac{1}{\sqrt{k}} \le \frac{C\delta}{\varepsilon}, \quad (3.28)$$

and hence the proof of (3.22) is complete. Thus $L_{m,n} \xrightarrow{P} 0$ follows. Finally, in order to complete the proof, it is sufficient to show that

$$Y_n(t) := \frac{\mu}{V_n}\sum_{k=1}^{[nt]} (C_k - 1) \xrightarrow{d} \int_0^t \frac{W(x)}{x}\,dx, \quad (3.29)$$

and, coupled with (3.21), we only need to prove

$$\frac{1}{V_n}\sum_{k=1}^{[nt]} \frac{S_k - \mu k}{k} \xrightarrow{d} \int_0^t \frac{W(x)}{x}\,dx. \quad (3.30)$$

For $0 < e < 1$, define

$$H_e(f)(t) = \begin{cases} \displaystyle\int_e^t \frac{f(x)}{x}\,dx, & t \ge e, \\[2mm] 0, & 0 \le t < e, \end{cases} \qquad Y_{n,e}(t) = \begin{cases} \displaystyle\frac{1}{V_n}\sum_{k=[ne]+1}^{[nt]} \frac{S_k - \mu k}{k}, & t \ge e, \\[2mm] 0, & 0 \le t < e. \end{cases} \quad (3.31)$$

It is obvious that

$$\sup_{0 \le t \le 1}\Big|\int_0^t \frac{W(x)}{x}\,dx - H_e(W)(t)\Big| \xrightarrow{a.s.} 0, \quad \text{as } e \to 0. \quad (3.32)$$

Note that, writing $Y_n(t)$ for the left-hand side of (3.30),

$$\max_{0 \le t \le e}|Y_n(t) - Y_{n,e}(t)| = \max_{0 \le t \le e}\frac{1}{V_n}\Big|\sum_{k=1}^{[nt]} \frac{S_k - \mu k}{k}\Big| \le \frac{1}{V_n}\sum_{k=1}^{[ne]} \frac{|S_k - \mu k|}{k}, \quad (3.33)$$

and then, for any $\varepsilon_1 > 0$, by Markov's inequality, the Cauchy–Schwarz inequality, and (3.17), it follows that

$$\begin{aligned}
\lim_{e \to 0}\limsup_{n \to \infty} P\Big(\max_{0 \le t \le e}|Y_n(t) - Y_{n,e}(t)| > \varepsilon_1\Big)
&\le \lim_{e \to 0}\limsup_{n \to \infty} P\Big(\frac{C}{\sqrt{n}}\sum_{k=1}^{[ne]} \frac{|S_k - \mu k|}{k} > \varepsilon_1\Big) \le \lim_{e \to 0}\limsup_{n \to \infty} \frac{C}{\sqrt{n}}\sum_{k=1}^{[ne]} \frac{E|S_k - \mu k|}{k} \\
&\le \lim_{e \to 0}\limsup_{n \to \infty} \frac{C}{\sqrt{n}}\sum_{k=1}^{[ne]} \frac{\sqrt{\mathrm{Var}(S_k)}}{k} = \lim_{e \to 0}\limsup_{n \to \infty} \frac{C\sigma}{\sqrt{n}}\sum_{k=1}^{[ne]} \frac{1}{\sqrt{k}} \\
&\le \lim_{e \to 0}\limsup_{n \to \infty} \frac{C}{\sqrt{n}}\sqrt{[ne]} = \lim_{e \to 0} C\sqrt{e} = 0. \quad (3.34)
\end{aligned}$$

Furthermore, since $\sum_{k=[ne]+1}^{[nt]} (S_k - \mu k)/k = \int_{[ne]+1}^{[nt]+1} (S_{[x]} - [x]\mu)/[x]\,dx$, we can obtain

$$\begin{aligned}
\sup_{e \le t \le 1}\frac{1}{V_n}\Big|\sum_{k=[ne]+1}^{[nt]} \frac{S_k - \mu k}{k} - \int_{ne}^{nt} \frac{S_{[x]} - [x]\mu}{x}\,dx\Big|
&\le \sup_{e \le t \le 1}\frac{1}{V_n}\Big|\int_{[ne]+1}^{[nt]+1} \big(S_{[x]} - [x]\mu\big)\Big(\frac{1}{[x]} - \frac{1}{x}\Big)dx\Big| \\
&\quad + \sup_{e \le t \le 1}\frac{1}{V_n}\Big|\int_{[ne]+1}^{[nt]+1} \frac{S_{[x]} - [x]\mu}{x}\,dx - \int_{ne}^{nt} \frac{S_{[x]} - [x]\mu}{x}\,dx\Big| \\
&\le \frac{\max_{k \le n}|S_k - \mu k|}{V_n}\,\sup_{e \le t \le 1}\Big(\frac{2}{ne} + \frac{2}{nt} + \frac{1}{ne}\Big) \\
&\le \frac{C\max_{k \le n}|S_k - \mu k|}{ne\,V_n} \le \frac{C\sum_{i=1}^n |X_i - \mu|}{ne\,V_n} \xrightarrow{a.s.} 0. \quad (3.35)
\end{aligned}$$

Therefore, uniformly for $t \in [e,1]$, we have

$$\frac{1}{V_n}\sum_{k=[ne]+1}^{[nt]} \frac{S_k - \mu k}{k} = \frac{1}{V_n}\int_{ne}^{nt} \frac{S_{[x]} - [x]\mu}{x}\,dx + o_P(1) = \int_e^t \frac{W_n(x)}{x}\,dx + o_P(1), \quad (3.36)$$

where $W_n(t) := (S_{[nt]} - [nt]\mu)/V_n$. Notice that $H_e(\cdot)$ is a continuous mapping on the space $D[0,1]$. Thus, using the continuous mapping theorem (cf. Theorem 2.7 of Billingsley [10]), it follows that

$$Y_{n,e}(t) = H_e(W_n)(t) + o_P(1) \xrightarrow{d} H_e(W)(t), \quad \text{in } D[0,1], \text{ as } n \to \infty. \quad (3.37)$$

Hence, (3.32), (3.34), and (3.37) coupled with Theorem 3.2 of Billingsley [10] lead to (3.30). The proof is now completed.
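As an end-to-end sanity check of Theorem 2.1 and Corollary 2.2, the sketch below (ours, not the paper's; the Exp(1) choice, $a = 1$, and all sizes are illustrative) simulates the logarithm of the statistic, $(\mu/V_n)\sum_{k \le n} \log(T_k(a)/(\mu k))$. The empirical variance should already be near the limiting value 2, while the mean is still visibly negative at this sample size because the trimming term in (3.21) decays only at a logarithmic-in-$n$ rate.

```python
import numpy as np

def log_trimmed_product(x, a, mu):
    """(mu/V_n) * sum_{k=1}^n log(T_k(a)/(mu*k)) for one sample path x."""
    n = len(x)
    M = np.maximum.accumulate(x)                   # running maxima M_k
    lower = np.tril(np.ones((n, n), dtype=bool))   # mask for indices i <= k
    near = (x[None, :] > (M[:, None] - a)) & lower # near-maxima among X_1..X_k
    S_a = near.astype(float) @ x                   # S_k(a), k = 1..n
    T = np.cumsum(x) - S_a                         # trimmed sums T_k(a)
    T = np.where(T <= 1e-9, 1.0, T)                # redefine T_k(a)=1 when it vanishes
    Vn = np.sqrt(((x - x.mean()) ** 2).sum())
    k = np.arange(1, n + 1)
    return mu * np.sum(np.log(T / (mu * k))) / Vn

rng = np.random.default_rng(4)
n, reps = 400, 500
stats = np.array([log_trimmed_product(rng.exponential(size=n), a=1.0, mu=1.0)
                  for _ in range(reps)])
print(stats.mean(), stats.var())                   # variance near the limit 2
```

The clipping of vanishing $T_k(a)$ mirrors the redefinition made at the end of Section 1.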

4. Application to U-Statistics

The useful notion of a U-statistic was introduced by Hoeffding [11]. A U-statistic is defined as

$$U_n = \binom{n}{m}^{-1} \sum_{1 \le i_1 < \cdots < i_m \le n} h(X_{i_1}, \ldots, X_{i_m}), \quad (4.1)$$

where $h$ is a symmetric real function of $m$ arguments and $\{X_i;\ i \ge 1\}$ is a sequence of i.i.d. random variables. If we take $m = 1$ and $h(x) = x$, then $U_n$ reduces to $S_n/n$. Assume that $Eh(X_1, \ldots, X_m)^2 < \infty$, and let

$$h_1(x) = Eh(x, X_2, \ldots, X_m), \qquad \hat{U}_n = \frac{m}{n}\sum_{i=1}^n (h_1(X_i) - Eh) + Eh. \quad (4.2)$$

Thus, we may write

$$U_n = \hat{U}_n + R_n, \quad (4.3)$$

where

$$R_n = \binom{n}{m}^{-1} \sum_{1 \le i_1 < \cdots < i_m \le n} H(X_{i_1}, \ldots, X_{i_m}), \quad (4.4)$$

$$H(x_1, \ldots, x_m) = h(x_1, \ldots, x_m) - \sum_{i=1}^m (h_1(x_i) - Eh) - Eh.$$

It is well known (cf. Resnick [12]) that

$$\mathrm{Cov}(\hat{U}_n, R_n) = 0, \qquad R_n \xrightarrow{a.s.} 0, \quad \text{as } n \to \infty. \quad (4.5)$$
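The decomposition (4.2)–(4.5) is easy to exercise numerically for $m = 2$. Below is a sketch of our own (not the paper's code): the kernel $h(x,y) = (x-y)^2/2$ makes $U_n$ the unbiased sample variance, and for $h(x,y) = xy$ with $X_i \sim U(0,1)$ (so $EX = 1/2$, $h_1(x) = x/2$, $Eh = 1/4$) the identity $U_n = \hat{U}_n + R_n$ holds exactly.

```python
import numpy as np
from itertools import combinations

def u_statistic(x, h):
    """U_n for a symmetric kernel h of m = 2 arguments."""
    pairs = list(combinations(x, 2))
    return sum(h(a, b) for a, b in pairs) / len(pairs)

# Example 1: h(x,y) = (x-y)^2/2 gives the unbiased sample variance.
x = np.array([1.0, 2.0, 4.0, 7.0])
U_var = u_statistic(x, lambda a, b: 0.5 * (a - b) ** 2)
print(U_var, np.var(x, ddof=1))                    # both equal 7.0

# Example 2: Hoeffding decomposition for h(x,y) = x*y with X_i ~ U(0,1).
rng = np.random.default_rng(5)
y = rng.uniform(size=400)
mu_X, Eh = 0.5, 0.25                               # EX and Eh = (EX)^2, known here
U = u_statistic(y, lambda a, b: a * b)
U_hat = (2.0 / len(y)) * np.sum(mu_X * y - Eh) + Eh  # projection part (4.2)

def H(a, b):                                       # degenerate part (4.4)
    return a * b - (mu_X * a - Eh) - (mu_X * b - Eh) - Eh

R = u_statistic(y, H)
print(abs(U - (U_hat + R)))                        # essentially 0: (4.3) is exact
```

The identity $U_n = \hat{U}_n + R_n$ is algebraic, so the final difference is pure floating-point round-off.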

Theorem 2.1 is now extended to U-statistics as follows.

Theorem 4.1. Let $U_n$ be a U-statistic defined as above. Assume that $Eh^2 < \infty$ and $P(h(X_1, \ldots, X_m) > 0) = 1$. Denote $\mu = Eh > 0$ and $\sigma^2 = \mathrm{Var}(h_1(X_1)) \ne 0$. Then,

$$\Big(\prod_{k=m}^{[nt]} \frac{U_k}{\mu}\Big)^{\mu/(mV_n)} \xrightarrow{d} \exp\Big\{\int_0^t \frac{W(x)}{x}\,dx\Big\}, \quad \text{in } D[0,1], \text{ as } n \to \infty, \quad (4.6)$$

where $W(x)$ is a standard Wiener process and $V_n^2 = \sum_{i=1}^n \big(h_1(X_i) - \overline{h_1(X)}\big)^2$ with $\overline{h_1(X)} = (1/n)\sum_{i=1}^n h_1(X_i)$.

In order to prove this theorem, by (3.17), we only need to prove

$$\frac{\mu}{mV_n}\sum_{k=m}^{[nt]} \Big(\frac{U_k}{\mu} - 1\Big) \xrightarrow{d} \int_0^t \frac{W(x)}{x}\,dx, \quad \text{in } D[0,1], \text{ as } n \to \infty. \quad (4.7)$$

If this result is true, then together with the fact that $U_n \xrightarrow{a.s.} Eh = \mu$, deduced from $E|h| < \infty$ (see Resnick [12]), and (4.3), Theorem 4.1 follows immediately from the method used in the proof of Theorem 2.1 with $S_k/(k\mu)$ replaced by $U_k/\mu$. Now, we begin to show (4.7). By (4.3), we have

$$\frac{\mu}{mV_n}\sum_{k=m}^{[nt]} \Big(\frac{U_k}{\mu} - 1\Big) = \frac{\mu}{mV_n}\sum_{k=m}^{[nt]} \Big(\frac{\hat{U}_k}{\mu} - 1\Big) + \frac{1}{mV_n}\sum_{k=m}^{[nt]} R_k. \quad (4.8)$$

By applying (3.30) to the sequence $\{h_1(X_i);\ i \ge 1\}$, we have

$$\frac{\mu}{mV_n}\sum_{k=m}^{[nt]} \Big(\frac{\hat{U}_k}{\mu} - 1\Big) = \frac{1}{V_n}\sum_{k=m}^{[nt]} \frac{\sum_{i=1}^k (h_1(X_i) - \mu)}{k} \xrightarrow{d} \int_0^t \frac{W(x)}{x}\,dx, \quad (4.9)$$

in $D[0,1]$, as $n \to \infty$, since the finitely many omitted terms with $k < m$ contribute only $o_P(1)$. Therefore, for proving (4.7), we only need to prove

$$\frac{1}{mV_n}\sum_{k=m}^{[nt]} R_k \xrightarrow{P} 0, \quad \text{as } n \to \infty, \quad (4.10)$$

and it is sufficient to demonstrate

:=-—= jt-Rnr -4 0, as n -4». (4.11)

Indeed, from Hoeffding [11] we have $ER_n^2 = O(n^{-2})$, so that $E\big|\frac{1}{\sqrt{n}}\sum_{k=m}^{n} R_k\big| \le \frac{1}{\sqrt{n}}\sum_{k=m}^{n} \sqrt{ER_k^2} \le \frac{C}{\sqrt{n}}\sum_{k=m}^{n} \frac{1}{k} \to 0$, and (4.11) follows from Markov's inequality. Thus, we complete the proof of (4.7), and hence Theorem 4.1 holds.

Acknowledgment

The author thanks the referees for valuable comments that have led to improvements in this paper.

References

[1] B. C. Arnold and J. A. Villasenor, "The asymptotic distributions of sums of records," Extremes, vol. 1, no. 3, pp. 351-363,1999.

[2] G. Rempała and J. Wesołowski, "Asymptotics for products of sums and U-statistics," Electronic Communications in Probability, vol. 7, pp. 47-54, 2002.

[3] Y. Qi, "Limit distributions for products of sums," Statistics & Probability Letters, vol. 62, no. 1, pp. 93-100, 2003.

[4] L.-X. Zhang and W. Huang, "A note on the invariance principle of the product of sums of random variables," Electronic Communications in Probability, vol. 12, pp. 51-56, 2007.

[5] A. G. Pakes and F. W. Steutel, "On the number of records near the maximum," The Australian Journal of Statistics, vol. 39, no. 2, pp. 179-192,1997.

[6] A. G. Pakes and Y. Li, "Limit laws for the number of near maxima via the Poisson approximation," Statistics & Probability Letters, vol. 40, no. 4, pp. 395-401,1998.

[7] Y. Li, "A note on the number of records near the maximum," Statistics & Probability Letters, vol. 43, no. 2, pp. 153-158,1999.

[8] A. G. Pakes, "The number and sum of near-maxima for thin-tailed populations," Advances in Applied Probability, vol. 32, no. 4, pp. 1100-1116, 2000.

[9] Z. Hu and C. Su, "Limit theorems for the number and sum of near-maxima for medium tails," Statistics & Probability Letters, vol. 63, no. 3, pp. 229-237, 2003.

[10] P. Billingsley, Convergence of Probability Measures, Wiley Series in Probability and Statistics: Probability and Statistics, John Wiley & Sons Inc., New York, NY, USA, 2nd edition, 1999.

[11] W. Hoeffding, "A class of statistics with asymptotically normal distribution," Annals of Mathematical Statistics, vol. 19, pp. 293-325,1948.

[12] S. I. Resnick, "Limit laws for record values," Stochastic Processes and their Applications, vol. 1, pp. 67-82, 1973.
