
Journal of Theoretical Probability, Vol. 17, No. 1, January 2004 (© 2004)

SLLN for Weighted Independent Identically Distributed Random Variables

John Baxter,1 Roger Jones,2 Michael Lin,3,5 and James Olsen4

1 School of Mathematics, University of Minnesota, Minneapolis, Minnesota 55455. E-mail: [email protected]
2 Department of Mathematics, De Paul University, 2320 N. Kenmore, Chicago, Illinois 60614. E-mail: [email protected]
3 Department of Mathematics, Ben-Gurion University of the Negev, Beer-Sheva, Israel. E-mail: [email protected]
4 Department of Mathematics, North Dakota State University, Fargo, North Dakota 58105. E-mail: [email protected]
5 To whom correspondence should be addressed.

Received August 18, 2002; revised March 15, 2003

For any sequence $\{a_k\}$ with $\sup_n \frac{1}{n}\sum_{k=1}^{n} |a_k|^q < \infty$ for some $q > 1$, we prove that $\frac{1}{n}\sum_{k=1}^{n} a_k X_k$ converges to 0 a.s. for every $\{X_n\}$ i.i.d. with $E(|X_1|) < \infty$ and $E(X_1)=0$; the result is no longer true for $q=1$, not even for the class of i.i.d. with $X_1$ bounded. We also show that if $\{a_k\}$ is a typical output of a strictly stationary sequence with finite absolute first moment, then for every i.i.d. sequence $\{X_n\}$ with finite absolute $p$th moment for some $p > 1$, $\frac{1}{n}\sum_{k=1}^{n} a_k X_k$ converges a.s.

KEY WORDS: Law of large numbers; weighted averages; independent random variables; Besicovitch sequences.

1. INTRODUCTION

Let $\{a_k\}$ be a sequence of numbers, and $\{X_k\}$ a sequence of independent identically distributed (i.i.d.) random variables (with $E(|X_1|) < \infty$). When $a_n \ge 0$ for every $n$, and $A_n = \sum_{k=1}^{n} a_k$ are the partial sums, the a.s. convergence of $\frac{1}{A_n}\sum_{k=1}^{n} a_k X_k$ is a weighted SLLN (for $\{X_k\}$). This convergence was characterized by Jamison et al.(12) When $A_n/n$ converges to a nonzero limit, the weighted SLLN becomes the a.s. convergence of $\frac{1}{n}\sum_{k=1}^{n} a_k X_k$. This convergence can be studied for any sequence, not necessarily of nonnegative numbers, and although $\frac{1}{n}\sum_{k=1}^{n} a_k X_k$ is not strictly a weighted average anymore, the term weighted SLLN is still used (following Tempelman,(24) the term "modulated" is used in Lin et al.(16) instead of "weighted").

For example, this type of problem arises in the study of consistency of the least squares estimator (LSE) for the unknown regression coefficient $b$ in the simple linear regression $Y_k = b a_k + X_k$, $k = 1, 2, \dots$. In this model the input (the regressor sequence) $\{a_k\}$ is known, but the noise sequence $\{X_k\}$ is nonobservable; only the responses $Y_k$ can be observed. The LSE based on the first $n$ observations is

\[
b_n := \frac{\sum_{k=1}^{n} a_k Y_k}{\sum_{j=1}^{n} |a_j|^2}
     = b + \frac{\sum_{k=1}^{n} a_k X_k}{\sum_{j=1}^{n} |a_j|^2}.
\]
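The identity above can be checked by a small simulation (a sketch, not part of the paper; the regressor sequence, the noise law, and the true coefficient below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

b_true = 2.0                                  # illustrative true regression coefficient
n_max = 100_000
a = 1.0 + np.sin(np.arange(1, n_max + 1))     # a bounded regressor sequence (illustrative)
X = rng.standard_t(df=3, size=n_max)          # centered noise with finite variance (illustrative)
Y = b_true * a + X                            # observed responses Y_k = b a_k + X_k

num = np.cumsum(a * Y)                        # running sums of a_k Y_k
den = np.cumsum(a * a)                        # running sums of |a_j|^2
b_n = num / den                               # LSE based on the first n observations

for n in (100, 1_000, 10_000, 100_000):
    print(n, b_n[n - 1])                      # b_n should approach b_true as n grows
```

For this choice of regressors $\lim_n \frac{1}{n}\sum_{j=1}^{n} |a_j|^2$ exists and is nonzero, so, as explained next, strong consistency of $b_n$ reduces to a weighted SLLN for the noise.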

When $\lim_n \frac{1}{n}\sum_{j=1}^{n} |a_j|^2$ exists and is not 0, the problem of strong consistency of the LSE (a.s. convergence of $b_n$ to $b$) is a question of a weighted SLLN (a "weighted ergodic theorem" when the noise is strictly stationary, with absolute first moment).

Recall (e.g., Lin et al.(16)) that a sequence of complex numbers $\{a_n\}$ is called $q$-Besicovitch ($1 \le q < \infty$) if it is in the closure of the set of trigonometric polynomials, in the semi-norm $\|\{b_k\}\|_{W_q} = \bigl[\limsup_n \frac{1}{n}\sum_{k=1}^{n} |b_k|^q\bigr]^{1/q}$.
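As a small numerical aside (not from the paper), the seminorm of a concrete sequence can be estimated by a finite Cesàro average; for the trigonometric sequence $b_k = e^{2\pi i\theta k}$ every average of $|b_k|^q$ equals 1, so the estimate is 1 for each $q$:

```python
import numpy as np

def besicovitch_seminorm(b, q):
    """Finite-n proxy for ||{b_k}||_{W_q} = [limsup_n (1/n) sum_{k<=n} |b_k|^q]^{1/q}."""
    n = np.arange(1, len(b) + 1)
    cesaro = np.cumsum(np.abs(b) ** q) / n
    return cesaro[-1] ** (1.0 / q)            # use the last Cesaro average as the estimate

theta = np.sqrt(2)                            # an irrational frequency (illustrative)
k = np.arange(1, 200_000 + 1)
b = np.exp(2j * np.pi * theta * k)            # a trigonometric monomial: |b_k| = 1

print(besicovitch_seminorm(b, q=2.0))         # close to 1, as expected
```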

A 1-Besicovitch sequence is simply called Besicovitch, and every $q$-Besicovitch sequence is Besicovitch. It follows from Tempelman(24) that if $\{a_k\}$ is $q$-Besicovitch, then $\frac{1}{n}\sum_{k=1}^{n} a_k Z_k$ converges a.s. for every strictly stationary $\{Z_n\}$ with $E(|Z_1|^p) < \infty$, where $p = q/(q-1)$; Ryll-Nardzewski(21) proved that if $\{a_k\}$ is bounded Besicovitch, the above convergence holds for $p=1$. See Lin et al.(16) for additional results and references.

When $\{Y_k\}$ is a strictly stationary ergodic sequence with $E(|Y_1|^q) < \infty$ for some $1 \le q < \infty$, the "return times" theorem(20) says that for a.e. $y$, the sequence $\{a_k\}$ defined by $a_k = Y_k(y)$ will yield a.s. convergence of $\frac{1}{n}\sum_{k=1}^{n} a_k Z_k$ for every strictly stationary $\{Z_n\}$ with $E(|Z_1|^p) < \infty$, where $p = q/(q-1)$ is the dual index (see also Lin et al.,(16) where earlier references are given). Assani(5) showed that if $\{Y_n\}$ are symmetric i.i.d. with $E(|Y_1|^q) < \infty$ for some $q > 1$, then for a.e. $y$ the sequence $\{a_k\}$ with $a_k = Y_k(y)$ will yield a.s. convergence of $\frac{1}{n}\sum_{k=1}^{n} a_k Z_k$ for every strictly stationary sequence $\{Z_k\}$ with $E(|Z_1|^r) < \infty$ for some $r > 1$ (even if $r < p = q/(q-1)$).

In this paper we show that if $\|\{a_k\}\|_{W_q} < \infty$ for some $q > 1$, then

$\frac{1}{n}\sum_{k=1}^{n} a_k X_k$ converges to 0 a.s. for every $\{X_n\}$ i.i.d. with $E(|X_1|) < \infty$ and $E(X_1)=0$, and that the result fails for $q=1$. Moreover, for every i.i.d. sequence $\{X_n\}$, with $X_1$ nonnegative and not essentially bounded, there is a 1-Besicovitch sequence $\{a_n\}$ such that $\frac{1}{n}\sum_{k=1}^{n} a_k X_k$ diverges almost surely. We also show that if $\{Y_n\}$ is identically distributed (not necessarily stationary!) with $E(|Y_1|) < \infty$, then a.e. realization $a_k = Y_k(y)$ satisfies $\frac{1}{n}\sum_{k=1}^{n} a_k X_k \to 0$ a.s. for every $\{X_n\}$ i.i.d. with $E(|X_1|^p) < \infty$ for some $p > 1$ and $E(X_1)=0$.

2. ON THE MARCINKIEWICZ–ZYGMUND SLLN FOR STATIONARY SEQUENCES

The now classical SLLN for independent identically distributed random variables without expectation, due to Marcinkiewicz–Zygmund(17) Theorem 9 (see Theorem 3.2.3(i) in Stout(23)), was generalized in Lemma 3 of Sawyer(22) assuming only identical distribution (even without stationarity).

Theorem 2.1. Let $\{Y_n\}$ be a sequence of identically distributed random variables, with $E(|Y_1|^{\alpha}) < \infty$ for a given $0 < \alpha < 1$. Then
\[
\sum_{k=1}^{\infty} \frac{|Y_k|}{k^{1/\alpha}} < \infty \quad \text{a.s.}
\]
and $\frac{1}{n^{1/\alpha}}\sum_{k=1}^{n} Y_k \to 0$ almost surely.
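A quick numerical illustration of the theorem (the Pareto law below is an illustrative choice, not from the paper): with tail index $0.8$ one has $E(|Y_1|^{\alpha}) < \infty$ for $\alpha = 0.7$, although $E(|Y_1|) = \infty$, and the normalized sums $n^{-1/\alpha}\sum_{k\le n} Y_k$ drift to 0.

```python
import numpy as np

rng = np.random.default_rng(1)

tail = 0.8      # Pareto tail index: E|Y|^a < infinity exactly for a < 0.8
alpha = 0.7     # exponent in Theorem 2.1: here E(|Y_1|^alpha) < infinity
n_max = 1_000_000

U = 1.0 - rng.random(n_max)                   # uniform on (0, 1]
Y = U ** (-1.0 / tail)                        # i.i.d. Pareto samples on [1, infinity)

S = np.cumsum(Y)
for n in (10**3, 10**4, 10**5, 10**6):
    print(n, S[n - 1] / n ** (1.0 / alpha))   # n^{-1/alpha} S_n, should head toward 0
```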

Remarks. 1. The theorem clearly applies if $\{Y_n\}$ is strictly stationary. A proof for this case (using ideas of Jones(13)) appears in Petersen,(18) p. 100.

2. Assume that $\{Y_n\}$ in the theorem is strictly stationary and ergodic without first moment. Then by the ergodic theorem $\frac{1}{n}\sum_{k=1}^{n} |Y_k| \to \infty$ almost surely. Assume $E(|Y_1|^{\beta}) = \infty$ for a fixed $1 > \beta > \alpha$; then whenever $\frac{1}{n^{1/\beta}}\sum_{k=1}^{n} |Y_k|$ does converge a.s., the limit must be 0 (Aaronson(1)).

We now show that the Marcinkiewicz–Zygmund SLLN for i.i.d. with finite absolute $p$th moment, $1 < p < 2$ (e.g., Theorem 3.2.3(ii) in Stout(23)), cannot be directly generalized to the case of strictly stationary sequences.

Proposition 2.2. There exists a bounded strictly stationary sequence $\{Y_n\}$ with $E(Y_n)=0$, such that for every $p > 1$, almost surely $\frac{1}{n^{1/p}}\sum_{k=1}^{n} Y_k$ does not converge.

Proof. It is known that for every ergodic measure-preserving transformation $\theta$ of a nonatomic probability space there exists $f \in L_{\infty}$ with $\int f = 0$, such that a.s. $\sum_{k=1}^{n} f\circ\theta^k/k$ does not converge (see p. 94 of Petersen(18)). Put $Y_k = f\circ\theta^k$, and let $S_n = \sum_{k=1}^{n} Y_k$. Fix $p > 1$, and denote $\alpha = 1/p < 1$. Summation by parts (with $S_0 = 0$) yields
\[
\sum_{k=1}^{n} \frac{Y_k}{k} = \sum_{k=1}^{n} \frac{S_k - S_{k-1}}{k}
= \frac{S_n}{n} + \sum_{k=1}^{n-1} \Bigl(\frac{1}{k} - \frac{1}{k+1}\Bigr) S_k. \tag{2.1}
\]


If $S_n/n^{\alpha}$ converges a.s., we also have that the sequence is a.s. pointwise bounded, and the pointwise estimate
\[
\sum_{k=1}^{n-1} \Bigl|\Bigl(\frac{1}{k} - \frac{1}{k+1}\Bigr) S_k\Bigr|
= \sum_{k=1}^{n-1} \frac{1}{k(k+1)}\,|S_k|
\le \sup_{j \ge 1} \frac{|S_j|}{j^{\alpha}} \sum_{k=1}^{n-1} \frac{1}{k^{2-\alpha}}
\]
yields a.s. convergence in (2.1), a contradiction. ∎

Remark. Some rates of convergence in the SLLN for centered strictly stationary sequences can be obtained from rates of growth of the absolute $p$th moments of the sums; see Derriennic and Lin.(11)

3. SLLN FOR WEIGHTED I.I.D. SEQUENCES

In this section we study the a.s. convergence of weighted averages of independent identically distributed random variables: for an i.i.d. sequence $\{X_n\}$ and a sequence of real numbers $\{a_n\}$ we are interested in the a.s. convergence of $\frac{1}{n}\sum_{k=1}^{n} a_k X_k$.

The following theorem, which is Theorem 2.12.2 in Stout,(23) is due to Marcinkiewicz and Zygmund(17) (Theorem 5′), and is now a special case of the result of Chung(8) (the case $p=2$ is the well-known Khinchine–Kolmogorov theorem).

Theorem 3.1. Let $1 < p \le 2$, and let $\{X_n\}$ be a sequence of independent random variables with finite $p$th absolute moment and $E(X_n)=0$ for every $n$. If $\sum_{n=1}^{\infty} E(|X_n|^p) < \infty$, then the series $\sum_{n=1}^{\infty} X_n$ converges a.s.

Remark. Let $p$ be as above. When $\{X_n\}$ are independent with $\sup_n E(|X_n|^p) < \infty$ and $E(X_n)=0$ for every $n$, the theorem yields a.s. convergence of $\sum_{n=1}^{\infty} \frac{X_n}{n^{\alpha}}$ for any $\alpha > 1/p$. For $p < 2$ and $\{X_n\}$ i.i.d., Marcinkiewicz and Zygmund,(17) Theorem 9, proved the convergence of the series also for $\alpha = 1/p$ (and deduced from it their SLLN). This convergence of the series is usually not explicitly stated by later authors, but comes out of their proof of the Marcinkiewicz–Zygmund SLLN; e.g., see Theorem 3.2.3(ii) in Stout.(23)

Lemma 3.2. Let $\{c_n\}$ be a sequence of nonnegative real numbers with
\[
\sup_n \frac{c_1 + c_2 + \cdots + c_n}{n} = c^{*} < \infty.
\]
Then for every $p > 1$ and for all $m = 1, 2, \dots$ we have
\[
\sum_{n=m+1}^{\infty} \frac{c_n}{n^p} \le \frac{p}{p-1} \cdot \frac{c^{*}}{m^{p-1}}.
\]

Proof. The argument is just summation by parts. We have
\[
\sum_{n=m}^{\infty} \frac{c_n}{n^p}
= \sum_{n=m}^{\infty} c_n \sum_{k=n}^{\infty} \Bigl(\frac{1}{k^p} - \frac{1}{(k+1)^p}\Bigr)
= \sum_{k=m}^{\infty} \Bigl(\frac{1}{k^p} - \frac{1}{(k+1)^p}\Bigr) \sum_{n=m}^{k} c_n
\le \sum_{k=m}^{\infty} \frac{p}{k^{p+1}} \sum_{n=m}^{k} c_n
\le \sum_{k=m}^{\infty} \frac{p}{k^{p+1}}\, k \cdot \frac{1}{k}\sum_{n=1}^{k} c_n
\le c^{*} \sum_{k=m}^{\infty} \frac{p}{k^p}.
\]
Hence
\[
\sum_{n=m+1}^{\infty} \frac{c_n}{n^p}
\le c^{*} \sum_{k=m+1}^{\infty} \frac{p}{k^p}
\le p\,c^{*} \int_{m}^{\infty} \frac{dt}{t^p}
= \frac{p}{p-1} \cdot \frac{c^{*}}{m^{p-1}}. \qquad\blacksquare
\]
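The bound is easy to sanity-check numerically; the sketch below (an aside, with an arbitrary illustrative nonnegative sequence) compares a truncated tail sum with the right-hand side of the lemma.

```python
import numpy as np

N = 2_000_000                         # truncation point for the infinite tail sum
n = np.arange(1, N + 1)
c = 1.0 + np.cos(n)                   # an arbitrary nonnegative sequence (illustrative)
c_star = np.max(np.cumsum(c) / n)     # sup over the available n of (c_1 + ... + c_n)/n

p, m = 1.5, 100
tail = np.sum(c[m:] / n[m:] ** p)     # sum_{n=m+1}^{N} c_n / n^p (tail truncated at N)
bound = p / (p - 1) * c_star / m ** (p - 1)

print(tail, "<=", bound)              # the lemma guarantees the (full) tail is <= bound
```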

Remark. The proof of the lemma (stated for realizations of stationary sequences with finite $p$th moment) appears in pp. 228–229 of Assani.(3)

Theorem 3.3. Let $q > 1$ and let $\{a_k\}$ satisfy
\[
\sup_n \frac{|a_1|^q + \cdots + |a_n|^q}{n} = A < \infty. \tag{3.1}
\]
Then for every independent sequence $\{X_n\}$ with $E(X_n)=0$ for every $n$ and $\sup_n E(|X_n|^r) < \infty$ for some $1 < r < \infty$, we have that for every $\alpha \in \bigl(1/\min\{r,q,2\},\,1\bigr]$ the series $\sum_{k=1}^{\infty} \frac{a_k X_k}{k^{\alpha}}$ converges a.s., hence $\frac{1}{n^{\alpha}}\sum_{k=1}^{n} a_k X_k \to 0$ a.s.

Proof. If $r > \min\{q,2\}$, we can replace $r$ by this minimum, so we assume $r \le 2$ and $r \le q$. Then (3.1) holds also with $q$ replaced by $r$. Put $c_n = |a_n|^r$, and fix $\alpha > 1/r$. Then Lemma 3.2 yields that $\sum_{n=1}^{\infty} \frac{|a_n|^r}{n^{\alpha r}} < \infty$. Hence for $\{X_n\}$ as specified,
\[
\sum_{n=1}^{\infty} E\Bigl(\Bigl|\frac{a_n X_n}{n^{\alpha}}\Bigr|^r\Bigr)
\le \sup_k E(|X_k|^r) \sum_{n=1}^{\infty} \frac{|a_n|^r}{n^{\alpha r}} < \infty.
\]
Since $1 < r \le 2$ and $E(X_n)=0$ for every $n$, Theorem 3.1 yields the a.s. convergence of the series $\sum_{k=1}^{\infty} \frac{a_k X_k}{k^{\alpha}}$. Kronecker's lemma yields $\frac{1}{n^{\alpha}}\sum_{k=1}^{n} a_k X_k \to 0$ a.s. ∎
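A simulation sketch of the conclusion (all concrete choices below, the weights, the noise law, and the exponent, are illustrative assumptions): the weights are unbounded but have bounded Cesàro averages of their fourth powers, and the noise has mean zero with a finite second moment, so any $\alpha \in (1/2, 1]$ qualifies.

```python
import numpy as np

rng = np.random.default_rng(2)

n_max = 1_000_000
k = np.arange(1, n_max + 1)

is_pow2 = (k & (k - 1)) == 0               # True exactly at k = 1, 2, 4, 8, ...
a = np.where(is_pow2, k ** 0.25, 1.0)      # unbounded weights, yet sup_n (1/n) sum |a_k|^4 <= 3
X = rng.standard_t(df=3, size=n_max)       # independent, E(X_n) = 0, finite second moment

alpha = 0.8                                # any alpha in (1/min{r, q, 2}, 1] = (1/2, 1]
S = np.cumsum(a * X)
for n in (10**3, 10**4, 10**5, 10**6):
    print(n, S[n - 1] / n ** alpha)        # Theorem 3.3: these values drift toward 0
```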

Remarks. 1. When $\{a_k\}$ satisfies (3.1), and $\{X_n\}$ are i.i.d. with $E(|X_1|^r) < \infty$ for some $r > 1$ and $E(X_1)=0$, we obtain a weighted SLLN with rate.

2. Let $\{Y_k\}$ be independent with $\sup_n \frac{1}{n}\sum_{k=1}^{n} E(|Y_k|^r) < \infty$ for some $1 < r < \infty$ and $E(Y_k)=0$ for every $k$. Defining $a_k = [E(|Y_k|^r)]^{1/r}$, we put $X_k = Y_k/a_k$ if $a_k > 0$ and $X_k = 0$ if $a_k = 0$. Then Theorem 3.3 yields that whenever $\alpha > 1/\min\{r,2\}$, the series $\sum_{k=1}^{\infty} \frac{Y_k}{k^{\alpha}}$ converges a.s. and $\frac{1}{n^{\alpha}}\sum_{k=1}^{n} Y_k \to 0$ a.s. This yields a rate for the SLLN obtained by Landers and Rogge,(15) p. 306.

3. Theorem 3.3 shows that if $\{a_k\}$ satisfies (3.1) and $\{X_n\}$ is independent with $E(X_n)=0$ for every $n$ and $\sup_n E(|X_n|^r) < \infty$ for some $1 < r < \infty$, then $\frac{1}{n}\sum_{k=1}^{n} a_k X_k \to 0$ a.s. For $r=1$ this is no longer true: put $a_n \equiv 1$, and take $\{X_n\}$ independent with $\sup_n E(|X_n|) < \infty$ and $E(X_n)=0$ for every $n$, such that the SLLN fails a.s. (e.g., let $\{A_n\}$ be a sequence of independent events with $\Pr(A_n) = 1/(n\log n)$ for $n > 2$, and define $X_n = n\,\mathbf{1}_{A_n} - 1/\log n$).

4. The novelty in Theorem 3.3 is for $1 < r < 2$. For $r \ge 2$ the sequence $\{X_n\}$ need only be uncorrelated (Theorem 5 of Cohen and Lin(9)).

Theorem 3.4. Let $q > 1$ and let $\{a_k\}$ satisfy (3.1). Then for every i.i.d. sequence $\{X_n\}$ with $E(|X_1|) < \infty$ and $E(X_1)=0$ we have $\frac{1}{n}\sum_{k=1}^{n} a_k X_k \to 0$ a.s.

Proof. Since $\bigl(\frac{1}{n}\sum_{k=1}^{n}|a_k|^2\bigr)^{1/2} \le \bigl(\frac{1}{n}\sum_{k=1}^{n}|a_k|^q\bigr)^{1/q}$ when $q > 2$, we may assume $q \le 2$. We follow closely the usual proof of Kolmogorov's SLLN! Let $W_n = X_n\,\mathbf{1}_{\{|X_n|\le n\}}$. Then
\[
|E(W_n)| = |E(X_n) - E(W_n)| \le E\bigl(|X_1|\,\mathbf{1}_{\{|X_1|>n\}}\bigr) \to 0.
\]
Clearly
\[
\sum_{n=1}^{\infty} P(X_n \ne W_n) = \sum_{n=1}^{\infty} P(|X_n| > n) = \sum_{n=1}^{\infty} P(|X_1| > n) < \infty,
\]
so $P(X_n \ne W_n\ \text{i.o.}) = 0$. Thus, we have to prove only $\frac{1}{n}\sum_{k=1}^{n} a_k W_k \to 0$ a.s.


Let $V_n = W_n - E(W_n)$. Then $\{V_n\}$ is an independent sequence of mean zero bounded random variables. Assumption (3.1) implies $\sup_n \frac{1}{n}\sum_{k=1}^{n}|a_k| \le A^{1/q}$. Since $E(W_k) \to 0$, for $\varepsilon > 0$ there is $n_0$ such that for $n > n_0$ we obtain, using $|E(W_k)| \le E(|X_1|)$,
\[
\Bigl|\frac{1}{n}\sum_{k=1}^{n} a_k V_k - \frac{1}{n}\sum_{k=1}^{n} a_k W_k\Bigr|
= \Bigl|\frac{1}{n}\sum_{k=1}^{n} a_k E(W_k)\Bigr|
\le \Bigl|\frac{1}{n}\sum_{k=1}^{n_0} a_k E(W_k)\Bigr| + \Bigl|\frac{1}{n}\sum_{k=n_0+1}^{n} a_k E(W_k)\Bigr|
\le \frac{n_0}{n}\, E(|X_1|)\, A^{1/q} + \varepsilon A^{1/q}.
\]

It is therefore enough to show that $\frac{1}{n}\sum_{k=1}^{n} a_k V_k \to 0$ a.s., and by Kronecker's Lemma it is sufficient to show that $\sum_{n=1}^{\infty}\frac{a_n V_n}{n}$ converges a.s. By Theorem 3.1, it is sufficient to show that $\sum_{n=1}^{\infty}\frac{|a_n|^q E(|V_n|^q)}{n^q}$ converges. Since $\|V_n\|_q \le \|W_n\|_q + |E(W_n)| \le 2\|W_n\|_q$, it is sufficient to prove that $\sum_{n=1}^{\infty}\frac{|a_n|^q E(|W_n|^q)}{n^q} < \infty$. But putting $B = \frac{Aq}{q-1}$ and using Lemma 3.2 for $q$ (with $c_k = |a_k|^q$), we obtain
\[
\begin{aligned}
\sum_{n=1}^{\infty}\frac{|a_n|^q E(|W_n|^q)}{n^q}
&= \sum_{n=1}^{\infty}\frac{|a_n|^q}{n^q}\int_0^{\infty} q t^{q-1} P\{|W_n| > t\}\,dt
\le \sum_{n=1}^{\infty}\frac{|a_n|^q}{n^q}\int_0^{n} q t^{q-1} P\{|X_1| > t\}\,dt\\
&= \int_0^{\infty} q t^{q-1} P\{|X_1| > t\}\sum_{n=[t]+1}^{\infty}\frac{|a_n|^q}{n^q}\,dt
\le q\sum_{n=1}^{\infty}\frac{|a_n|^q}{n^q} + \int_1^{\infty} q t^{q-1} P\{|X_1| > t\}\,\frac{B}{[t]^{q-1}}\,dt\\
&\le q\,(|a_1|^q + B) + 2qB\int_1^{\infty} P\{|X_1| > t\}\,dt
\le q\,(|a_1|^q + B) + 2qB\,E(|X_1|).
\end{aligned}
\]
This proves the theorem. ∎
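To see the theorem at work numerically (a sketch with illustrative assumptions): the weights below satisfy (3.1) with $q = 2$ but are unbounded, and the i.i.d. noise is symmetric and heavy-tailed, with $E(|X_1|) < \infty$ and $E(X_1) = 0$ while moments of order above $1.1$ are infinite.

```python
import numpy as np

rng = np.random.default_rng(3)

n_max = 2_000_000
k = np.arange(1, n_max + 1)

# Unbounded weights with sup_n (1/n) sum |a_k|^2 <= 3 (spikes at powers of two):
a = np.where((k & (k - 1)) == 0, np.sqrt(k), 1.0)

# Symmetric Pareto-type i.i.d. noise, tail index 1.1 (illustrative): E|X| < infinity, E(X) = 0
U = 1.0 - rng.random(n_max)
X = rng.choice([-1.0, 1.0], size=n_max) * U ** (-1.0 / 1.1)

avg = np.cumsum(a * X) / k
for n in (10**3, 10**4, 10**5, 10**6, 2 * 10**6):
    print(n, avg[n - 1])                   # Theorem 3.4: the averages tend to 0 a.s.
```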

Remark. The general result of Theorem 1.1 in Cuzick(10) yields the a.s. convergence of our theorem only for $\{a_k\}$ bounded. When $\{a_k\}$ satisfies (3.1) with $1 < q < \infty$, the result of Cuzick(10) yields $\frac{1}{n}\sum_{k=1}^{n} a_k X_k \to 0$ a.s. only when $E(|X_1|^p) < \infty$, where $p = q/(q-1)$ (the case $p=2$ follows from Chow and Lai(7)).


Corollary 3.5. Let $\{Y_n\}$ be a strictly stationary sequence on the probability space $(S,\mu)$, with $E(|Y_1|^q) < \infty$ for some $1 < q < \infty$. Then there is a set $S' \subset S$ with $\mu(S - S') = 0$, such that for every $y \in S'$ the sequence defined by $a_k = Y_k(y)$ has the following property:

For every i.i.d. sequence $\{X_n\}$ with $E(|X_1|) < \infty$, the sequence $\frac{1}{n}\sum_{k=1}^{n} a_k X_k$ converges a.s. If $\{Y_n\}$ is an ergodic sequence, then the a.s. limit is $E(Y_1)\,E(X_1)$.

Proof. By the ergodic theorem we have that $\frac{1}{n}\sum_{k=1}^{n} |Y_k|^q$ converges a.s. and $\frac{1}{n}\sum_{k=1}^{n} Y_k$ converges a.s. on $(S,\mu)$; hence for almost every point $y \in S$ the sequence $a_k = Y_k(y)$ satisfies the hypothesis (3.1) of the previous theorem, and $l := \lim_n \frac{1}{n}\sum_{k=1}^{n} a_k$ exists. Hence
\[
\frac{1}{n}\sum_{k=1}^{n} a_k X_k
= \frac{1}{n}\sum_{k=1}^{n} a_k (X_k - E X_k) + \frac{1}{n}\sum_{k=1}^{n} a_k E(X_1)
\xrightarrow[n\to\infty]{} l\,E(X_1).
\]
If $\{Y_n\}$ is ergodic, $\lim_n \frac{1}{n}\sum_{k=1}^{n} Y_k = E(Y_1)$ a.s. ∎
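A sketch of the corollary in action (the stationary weight process and the i.i.d. sequence below are illustrative assumptions, not from the paper): one realization of a stationary ergodic sequence is fixed once and used as weights against fresh i.i.d. variables with a finite first moment; the weighted averages approach $E(Y_1)E(X_1)$.

```python
import numpy as np

rng = np.random.default_rng(4)
n_max = 1_000_000

# One fixed realization of a stationary ergodic sequence (a Gaussian moving average shifted to mean 2)
Z = rng.standard_normal(n_max + 3)
a = 2.0 + (Z[:-3] + Z[1:-2] + Z[2:-1] + Z[3:]) / 2.0   # weights a_k = Y_k(y), with E(Y_1) = 2

# Fresh i.i.d. X with E(|X_1|) < infinity, E(X_1) = 1 (exponential law, illustrative)
X = rng.exponential(scale=1.0, size=n_max)

avg = np.cumsum(a * X) / np.arange(1, n_max + 1)
print(avg[[10**3 - 1, 10**5 - 1, n_max - 1]])          # should approach E(Y_1) * E(X_1) = 2
```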

Remarks. 1. The point of the argument is that the realization of $\{Y_n\}$ is chosen in advance, and is then a "good weight sequence" for all i.i.d. sequences with finite absolute first moment.

2. The corollary is originally due to Assani,(3) who used Lemma 3.2 to obtain a maximal inequality, from which he deduced the condition of Jamison et al.(12)

3. In fact, one can considerably weaken the strict stationarity hypothesis: Let $T$ be a positive linear contraction of $L_q(S,\mu)$, $1 < q < \infty$, and let $Y_n = T^n Y_0$ for some $Y_0 \in L_q(\mu)$. Then the conclusion of Corollary 3.5 holds. By the ergodic theorem of Akcoglu(2) (see also p. 190 of Krengel(14)), $\frac{1}{n}\sum_{k=1}^{n} Y_k$ converges a.s. on $(S,\mu)$; for $1 < r < q$ we have $\sup_n \frac{1}{n}\sum_{k=1}^{n} |Y_k|^r \le \sup_n \frac{1}{n}\sum_{k=1}^{n} (T^k|Y_0|)^r < \infty$ (see the proof of Theorem 3.10 in Lin et al.(16)). Hence the proof of the corollary applies.

The following proposition was suggested by C. Cuny.

Proposition 3.6. There exists a sequence $\{a_k\}$ of nonnegative numbers satisfying
\[
\sup_n \frac{a_1 + \cdots + a_n}{n} = A < \infty,
\]
such that for every identically distributed non-null sequence $\{X_k\}$, the sequence $\frac{1}{n}\sum_{k=1}^{n} a_k X_k$ is not a.s. convergent.


Proof. Let $a_{2^j} = 2^j$, and $a_k = 0$ if $k$ is not a power of 2. For $2^{\alpha} \le n < 2^{\alpha+1}$ we have
\[
\frac{1}{n}\sum_{k=1}^{n} a_k \le \frac{1}{2^{\alpha}}\sum_{j=0}^{\alpha} 2^j < 2.
\]
Assume $\{X_k\}$ is non-null identically distributed such that $\frac{1}{n}\sum_{k=1}^{n} a_k X_k$ converges a.s. Since $\frac{1}{n}\sum_{k=1}^{n-1} a_k X_k$ must converge to the same limit, we have $\frac{1}{n} a_n X_n \to 0$ a.s., which implies $X_{2^j} \to 0$ a.s. But for some $\delta > 0$ we have $P\{|X_{2^j}| > \delta\} = P\{|X_1| > \delta\} > 0$, so $\{X_{2^j}\}$ does not converge to 0 even in probability, a contradiction. ∎
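Numerically the failure shows up at once (the bounded i.i.d. signs below are an illustrative choice): at each $n = 2^j$ the single nonzero weight contributes $a_n X_n/n = X_{2^j} = \pm 1$, so the running averages keep jumping by about 1 and cannot converge.

```python
import numpy as np

rng = np.random.default_rng(5)

J = 21
n_max = 2 ** J
k = np.arange(1, n_max + 1)

a = np.where((k & (k - 1)) == 0, k.astype(float), 0.0)  # a_k = k at powers of two, else 0
X = rng.choice([-1.0, 1.0], size=n_max)                 # bounded i.i.d., non-null

avg = np.cumsum(a * X) / k
for j in range(J - 4, J + 1):
    n = 2 ** j
    print(n, avg[n - 2], avg[n - 1])                    # jumps of size about 1 persist
```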

Remarks. 1. The sequence $\{a_k\}$ of the proposition shows that Theorem 3.4 is not true for $q=1$, even for bounded i.i.d. sequences. In fact, there is not even convergence in probability.

2. The above sequence $\{a_k\}$ is not Besicovitch, since $\frac{1}{n}\sum_{k=1}^{n} a_k$ does not converge. Note that any Besicovitch sequence $\{a_k\}$ yields a.s. convergence of $\frac{1}{n}\sum_{k=1}^{n} a_k Z_k$ for any strictly stationary sequence with $Z_1$ bounded.

Theorem 3.7. Let $\{Y_n\}$ be an identically distributed sequence on the probability space $(S,\mu)$, with $E(|Y_1|) < \infty$. Then there is a set $S' \subset S$ with $\mu(S - S') = 0$, such that for every $y \in S'$ the sequence defined by $a_k = Y_k(y)$ has the following property:

For every independent sequence $\{X_n\}$ with $\sup_n E(|X_n|^p) = C < \infty$ for some $p > 1$, and $E(X_n)=0$ for every $n$, the series $\sum_{k=1}^{\infty} \frac{a_k X_k}{k}$ converges a.s., and $\frac{1}{n}\sum_{k=1}^{n} a_k X_k \to 0$ a.s.

Proof. If $p > 2$, the norm inequality $\|\cdot\|_2 \le \|\cdot\|_p$ shows that we may take $p=2$. Thus, we assume $1 < p \le 2$. Put $\alpha = 1/p$. The sequence $\{|Y_n|^p\}$ is identically distributed with $E\bigl((|Y_1|^p)^{\alpha}\bigr) = E(|Y_1|) < \infty$, so it satisfies the hypotheses of Theorem 2.1, which then yields
\[
\sum_{k=1}^{\infty} \frac{|Y_k|^p}{k^p} = \sum_{k=1}^{\infty} \frac{|Y_k|^p}{k^{1/\alpha}} < \infty \quad \text{a.s. on } (S,\mu).
\]
Hence there exists $S' \subset S$ of full measure such that for $y \in S'$ the sequence $a_k = Y_k(y)$ satisfies $\sum_{k=1}^{\infty} \frac{|a_k|^p}{k^p} < \infty$. Hence
\[
\sum_{k=1}^{\infty} E\Bigl(\Bigl|\frac{a_k}{k} X_k\Bigr|^p\Bigr)
= \sum_{k=1}^{\infty} \frac{|a_k|^p}{k^p}\, E(|X_k|^p)
\le C \sum_{k=1}^{\infty} \frac{|a_k|^p}{k^p} < \infty.
\]
Since $1 < p \le 2$, Theorem 3.1 yields that $\sum_{k=1}^{\infty} \frac{a_k X_k}{k}$ converges a.s. Kronecker's Lemma yields that $\frac{1}{n}\sum_{k=1}^{n} a_k X_k \to 0$ a.s. ∎


Remark. The sequence $\{Y_k\}$ in the theorem need not be stationary! The theorem obviously applies to $\{X_n\}$ i.i.d. with $E(|X_1|^p) < \infty$ and $E(X_1)=0$.

Corollary 3.8. Let $\{Y_n\}$ be a strictly stationary sequence on the probability space $(S,\mu)$, with $E(|Y_1|) < \infty$. Then there is a set $S' \subset S$ with $\mu(S - S') = 0$, such that for every $y \in S'$ the sequence defined by $a_k = Y_k(y)$ has the following property:

For every i.i.d. sequence $\{X_n\}$ with $E(|X_1|^p) < \infty$ for some $p > 1$, $\frac{1}{n}\sum_{k=1}^{n} a_k X_k$ converges a.s. If $\{Y_n\}$ is an ergodic sequence, then the a.s. limit is $E(Y_1)\,E(X_1)$.

Proof. By the ergodic theorem, $\frac{1}{n}\sum_{k=1}^{n} Y_k$ converges a.e. on $(S,\mu)$, and if $\{Y_n\}$ is ergodic the limit is $E(Y_1)$. We add these conditions to the definition of the set $S'$ in the proof of the theorem. The theorem applies to $\{X_n - E(X_1)\}$, and the additional restrictions yield the needed convergence for constants, along with the identification of the limit in the ergodic case. ∎

Remark. If $\{Y_n\}$ in the corollary is ergodic, then for $y \in S'$ we have $\frac{1}{n}\sum_{k=1}^{n} a_k X_k \to E(Y_1)\,E(X_1)$ in $L_1$-norm for every i.i.d. sequence $\{X_n\}$ with $E(|X_1|) < \infty$, by Proposition 1.5 and Theorem 2.7 of Lin et al.(16) (the identification of the limit follows from the ergodic theorem in the product space). Note that convergence in probability can be proved using Pruitt.(19)

Theorem 3.9. Let $\{Y_n\}$ be a sequence of random variables on the probability space $(S,\mu)$, with $\sup_n E(|Y_n|^q) < \infty$ for some $q > 1$. Then there is a set $S' \subset S$ with $\mu(S - S') = 0$, such that for every $y \in S'$ the sequence defined by $a_k = Y_k(y)$ has the following property:

For every independent sequence $\{X_n\}$ with $\sup_n E(|X_n|^r) < \infty$ for some $r > 1$ and $E(X_n)=0$ for every $n$, and for any $\alpha \in \bigl(1/\min\{q,r,2\},\,1\bigr]$, the series $\sum_{k=1}^{\infty} \frac{a_k X_k}{k^{\alpha}}$ converges a.s., and $\frac{1}{n^{\alpha}}\sum_{k=1}^{n} a_k X_k \to 0$ a.s.

Proof. For the proof we can clearly assume $r = q \le 2$, and then we take $\alpha > 1/r$. Then
\[
\int \sum_{n=1}^{\infty} \frac{|Y_n|^r}{n^{\alpha r}}\, d\mu
= \sum_{n=1}^{\infty} \frac{E(|Y_n|^r)}{n^{\alpha r}}
\le \sup_k E(|Y_k|^r) \sum_{n=1}^{\infty} \frac{1}{n^{\alpha r}} < \infty,
\]
so $\sum_{n=1}^{\infty} \frac{|Y_n|^r}{n^{\alpha r}} < \infty$ a.e. Taking $1/r < \alpha_j < 1$ with $\alpha_j$ decreasing to $1/r$, we obtain $S' \subset S$ of full $\mu$-measure such that for $y \in S'$ the sequence $a_k = Y_k(y)$ satisfies $\sum_{n=1}^{\infty} \frac{|a_n|^r}{n^{\alpha r}} < \infty$ for every $\alpha > 1/r$. For any $\{X_n\}$ satisfying the hypotheses, the a.s. convergence of the series $\sum_{k=1}^{\infty} \frac{a_k X_k}{k^{\alpha}}$ now follows by Theorem 3.1, since
\[
\sum_{n=1}^{\infty} E\Bigl(\Bigl|\frac{a_n X_n}{n^{\alpha}}\Bigr|^r\Bigr)
= \sum_{n=1}^{\infty} \frac{|a_n|^r}{n^{\alpha r}}\, E(|X_n|^r) < \infty.
\]
Kronecker's lemma finishes the proof. ∎

Remarks. 1. The sequence $\{Y_n\}$ in the previous theorem need not be identically distributed.

2. If in Corollary 3.8 $\{Y_n\}$ has a higher moment, the previous theorem yields a rate of convergence in the weighted SLLN.

Theorem 3.10. Let $\{X_n\}$ be a sequence of independent identically distributed nonnegative random variables on a probability space $(\Omega,\mu)$. If $X_1$ is not essentially bounded, then there is a nonnegative sequence $\{a_n\}$ satisfying
\[
\lim_{n\to\infty} \frac{a_1 + a_2 + \cdots + a_n}{n} = 0,
\]
such that
\[
\limsup_{n\to\infty} \frac{1}{n}(a_1 X_1 + a_2 X_2 + \cdots + a_n X_n) = +\infty \quad \text{a.s.} \tag{3.2}
\]
and
\[
\liminf_{n\to\infty} \frac{1}{n}(a_1 X_1 + a_2 X_2 + \cdots + a_n X_n) = 0 \quad \text{a.s.} \tag{3.3}
\]

Proof. First construct a nondecreasing sequence $\{f(k)\}$ tending to $\infty$, such that $f(1) \ge 1$ and $A_k := \{X_1 > f(k)\}$ satisfies $\sum_{k=1}^{\infty}\mu(A_k) = \infty$. Since $X_1$ is not essentially bounded, such a sequence can always be constructed, letting $f(k)$ be constant for long blocks, if necessary.

We construct inductively an increasing sequence of integers $\{n_k\}$ and the values $\{a_{n_k}\}$. Let $n_1 = 1$ and $a_1 = 1$. In general, if $n_1, n_2, \dots, n_{k-1}$ and $a_{n_1}, a_{n_2}, \dots, a_{n_{k-1}}$ have already been selected, find $n'_k$ so large that $n'_k > 2n_{k-1}$,
\[
\frac{a_{n_{k-1}}}{n'_k} < \frac{1}{2^k}
\]


and
\[
\mu\Bigl\{\omega : \frac{1}{n'_k}\sum_{j=1}^{k-1} a_{n_j} X_{n_j} < \frac{1}{2^k}\Bigr\} > 1 - \frac{1}{2^k}. \tag{3.4}
\]

Define $n_k = n'_k + 1$ and $a_{n_k} = n_k/\sqrt{f(k)}$. For $j$ not in the constructed sequence $\{n_k\}$ let $a_j = 0$. We have
\[
\frac{1}{n_L}\sum_{j=1}^{n_L} a_j
= \frac{1}{n_L}\sum_{k=1}^{L-1} a_{n_k} + \frac{1}{n_L}\cdot\frac{n_L}{\sqrt{f(L)}}
= \sum_{k=1}^{L-1} \frac{a_{n_k}}{n_{k+1}}\cdot\frac{n_{k+1}}{n_L} + \frac{1}{\sqrt{f(L)}}
\le \sum_{k=1}^{L-1} \frac{1}{2^{k+1}}\cdot\frac{1}{2^{L-k-1}} + \frac{1}{\sqrt{f(L)}}
\le \frac{L}{2^L} + \frac{1}{\sqrt{f(L)}}
\xrightarrow[L\to\infty]{} 0,
\]
since $f(L)$ tends to infinity. Thus, the Cesàro averages $\frac{1}{n_L}\sum_{j=1}^{n_L} a_j$ converge to zero. For $n_L \le N < n_{L+1}$ we have $\frac{1}{N}\sum_{j=1}^{N} a_j \le \frac{1}{n_L}\sum_{j=1}^{n_L} a_j$, so the full sequence of Cesàro averages converges to 0.

Now consider
\[
\frac{1}{n_L}(a_1X_1 + a_2X_2 + \cdots + a_{n_L}X_{n_L})
= \frac{1}{n_L}\sum_{k=1}^{L-1} a_{n_k}X_{n_k} + \frac{a_{n_L}X_{n_L}}{n_L}. \tag{3.5}
\]

The last term in (3.5) is $X_{n_L}/\sqrt{f(L)}$. Putting $B_L = \{X_{n_L}/\sqrt{f(L)} > \sqrt{f(L)}\} = \{X_{n_L} > f(L)\}$, we have $\mu(B_L) = \mu(A_L)$. By construction $\sum_{L=1}^{\infty}\mu(B_L) = \sum_{L=1}^{\infty}\mu(A_L) = \infty$. By independence, the Borel–Cantelli lemma yields that a.e. $\omega\in\Omega$ is in infinitely many $B_L$, so $\limsup_L X_{n_L}/\sqrt{f(L)} = \infty$ a.s. Since $\{a_k\}$ and the random variables are nonnegative, (3.2) follows from (3.5). Since $n_L = n'_L + 1$, summation till $n'_L$ avoids the last term in (3.5), so
\[
\frac{1}{n'_L}\sum_{j=1}^{n'_L} a_j X_j = \frac{1}{n'_L}\sum_{k=1}^{L-1} a_{n_k} X_{n_k},
\]
which, by (3.4), is less than $2^{-L}$ for $\omega$ in a set $C_L$ with $\mu(C_L) > 1 - \frac{1}{2^L}$. Hence $\sum_{L=1}^{\infty}\mu(\Omega - C_L) < \infty$, and the Borel–Cantelli lemma shows that almost every $\omega$ is eventually in all $C_L$, which proves (3.3). ∎

Remarks. 1. The a.s. nonconvergence of the theorem applies also to the sequence $\{X_n - E(X_1)\}$. On the other hand, $\frac{1}{n}\sum_{k=1}^{n} a_k X_k \to 0$ in probability, since $\{a_k\}$ obviously satisfies $\bigl\|\frac{1}{n}\sum_{k=1}^{n} a_k Z_k\bigr\|_1 \to 0$ for every identically distributed (not necessarily stationary) $\{Z_k\}$ with $E(|Z_1|) < \infty$.

2. The sequence $\{a_n\}$ constructed for $\{X_n\}$ in the theorem is obviously Besicovitch (approximated by the 0 trigonometric polynomial).

3. Proposition 3.6 shows that Theorem 3.4 is not true for $q=1$. Our theorem shows its failure even if we take a Besicovitch sequence, which has the desired convergence in $L_1$-norm.

Let $X_1$ be a random variable with $\Pr\{X_1 = j\} = 2^{-j}$ for $j = 1, 2, \dots$, so $E(X_1^p) < \infty$ for every $1 \le p < \infty$, and let $\{X_n\}$ be i.i.d. with the above distribution. Though the sequence $\{a_n\}$ constructed for $\{X_n\}$ in the previous theorem is Besicovitch, for every $1 \le p < \infty$ the a.s. convergence of $\frac{1}{n}\sum_{k=1}^{n} a_k Z_k$ may fail for some i.i.d. sequence $\{Z_k\}$ with finite absolute $p$th moment (namely, for $\{X_n\}$). Note that since $\{a_k\}$ is Besicovitch, $\frac{1}{n}\sum_{k=1}^{n} a_k Z_k$ converges a.s. for every strictly stationary $\{Z_n\}$ with $Z_1$ bounded.

4. PROBLEMS

(I) Recall that a $q$-Besicovitch sequence $\{a_n\}$ (with $1 \le q < \infty$) yields a weighted ergodic theorem for every strictly stationary $\{Z_k\}$ with $E(|Z_k|^p) < \infty$, provided $p \ge q/(q-1)$. Theorem 3.10 shows that for $q=1$ one cannot relax the assumption on $p$ (even for i.i.d.). A natural problem is therefore whether also for $1 < q < \infty$ the above relationship $\frac{1}{p} + \frac{1}{q} \le 1$ is necessary in general; Theorem 3.4 shows that it is not for i.i.d.

(II) Corollaries 3.5 and 3.8 have some higher moment assumptions. This raises the following problem: Is Corollary 3.5 true also for $q=1$, although Theorem 3.4 fails in that case? This is equivalent to asking whether Corollary 3.8 is true also for $p=1$.

We present below an improvement of Corollary 3.5, but the general answer is still unknown.

Theorem 4.1. Let $\{Y_n\}$ be a strictly stationary sequence on the probability space $(S,\mu)$, with $E(|Y_1|\log^{+}|Y_1|) < \infty$. Then there is a set $S' \subset S$ with $\mu(S - S') = 0$, such that for every $y \in S'$ the sequence defined by $a_k = Y_k(y)$ has the following property:

For every i.i.d. sequence $\{X_n\}$ with $E(|X_1|) < \infty$, the sequence $\frac{1}{n}\sum_{k=1}^{n} a_k X_k$ converges a.s. If $\{Y_n\}$ is an ergodic sequence, then the a.s. limit is $E(Y_1)\,E(X_1)$.

Proof. The proof uses simplifications of some arguments of Assani.(3, 5) For a sequence $\{a_n\}$ of nonnegative weights let $A_N = \sum_{i=1}^{N} a_i$. Jamison et al.(12) proved that
\[
\frac{1}{A_N}\sum_{k=1}^{N} a_k X_k \to E(X_1)
\]
for every i.i.d. sequence $\{X_n\}$ with $E(|X_1|) < \infty$, if and only if
\[
\sup_n \frac{N_n}{n} < \infty, \quad\text{where } N_n = \#\Bigl\{k : \frac{a_k}{A_k} > \frac{1}{n}\Bigr\}. \tag{4.1}
\]
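Condition (4.1) is easy to probe numerically for a concrete weight sequence; the sketch below (an aside, using a finite stretch of weights as an approximation) computes $N_n/n$ for a bounded positive sequence, for which the ratios stay bounded.

```python
import numpy as np

def jop_ratios(a, n_values):
    """Approximate N_n / n from condition (4.1) on a finite stretch of weights:
    N_n = #{k : a_k / A_k > 1/n}, where A_k = a_1 + ... + a_k."""
    A = np.cumsum(a)
    r = a / A                                       # the ratios a_k / A_k
    return {n: np.sum(r > 1.0 / n) / n for n in n_values}

k = np.arange(1, 1_000_000 + 1)
a = 2.0 + np.cos(k)                                 # bounded positive weights (illustrative)
print(jop_ratios(a, [10, 100, 1_000, 10_000]))      # N_n/n stays bounded, so (4.1) holds here
```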

It will be helpful to use the representation $Y_k = g\circ\sigma^k$, where $\sigma$ is measure preserving on $(S,\mu)$. It is enough to prove the theorem for $g \ge 0$. By the SLLN, it is enough to prove the theorem for $g+1$. Thus, we may and do assume $g \ge 1$ a.e. The assumption on $Y_1$ yields that $g \in L_1(\mu)$, so by the ergodic theorem $\frac{1}{n}\sum_{k=1}^{n} g(\sigma^k y) \to E(g\,|\,\Sigma_I)(y)$ a.e. (where $\Sigma_I$ is the $\sigma$-algebra of invariant sets of $\sigma$). If condition (4.1) is satisfied with $a_k = g(\sigma^k y)$, we will get the desired result, since $E_I(g) := E(g\,|\,\Sigma_I) > 0$ a.e.

Since $\frac{g(\sigma^k y)}{\sum_{j=1}^{k} g(\sigma^j y)} \to 0$ a.e., we have
\[
N_n(g,y) := \#\Bigl\{k : \frac{g(\sigma^k y)}{\sum_{j=1}^{k} g(\sigma^j y)} > \frac{1}{n}\Bigr\} < \infty.
\]

For a.e. $y \in S$, there is an integer $k_0 = k_0(y)$ such that if $k > k_0$, then
\[
0 < \frac{E_I(g)(y)}{2} \le \frac{1}{k}\sum_{j=1}^{k} g(\sigma^j y) \le 2\,E_I(g)(y).
\]

Hence for a.e. $y \in S$ we have
\[
\begin{aligned}
N_n(g,y) &= \#\Bigl\{k : \frac{g(\sigma^k y)}{k\bigl(\frac{1}{k}\sum_{j=1}^{k} g(\sigma^j y)\bigr)} > \frac{1}{n}\Bigr\}\\
&= \#\Bigl\{k \le k_0 : \frac{g(\sigma^k y)}{k\bigl(\frac{1}{k}\sum_{j=1}^{k} g(\sigma^j y)\bigr)} > \frac{1}{n}\Bigr\}
  + \#\Bigl\{k > k_0 : \frac{g(\sigma^k y)}{k\bigl(\frac{1}{k}\sum_{j=1}^{k} g(\sigma^j y)\bigr)} > \frac{1}{n}\Bigr\}\\
&\le k_0 + \#\Bigl\{k > k_0 : \frac{g(\sigma^k y)}{k} > \frac{E_I(g)(y)}{2n}\Bigr\}
 \le k_0 + \#\Bigl\{k : \frac{g(\sigma^k y)}{k} > \frac{E_I(g)(y)}{2n}\Bigr\}.
\end{aligned}
\]

Thus it is enough to prove $\sup_n \frac{1}{n}\,\#\bigl\{k : \frac{g(\sigma^k y)}{k} > \frac{E_I(g)(y)}{2n}\bigr\} < \infty$ a.e.


Define $A_i := \{2^{i-1} \le g < 2^i\}$; since $g \ge 1$, $\{A_i\}_{i=1}^{\infty}$ is a partition of $S$. The function $f = \sum_{i=1}^{\infty} 2^i\chi_{A_i}$ satisfies $\frac{1}{2} f \le g \le f$ a.e., so also $f \in L\log^{+}\!L$, and $\frac{1}{2} E_I(f) \le E_I(g) \le E_I(f)$ a.e. Hence
\[
\Bigl\{k : \frac{g(\sigma^k y)}{k} > \frac{E_I(g)(y)}{2n}\Bigr\}
\subset \Bigl\{k : \frac{f(\sigma^k y)}{k} > \frac{E_I(f)(y)}{4n}\Bigr\}.
\]
Hence it will be enough to show that with $a(y) := E_I(f)(y)/4$, the function $f$ satisfies a.e.
\[
\sup_n \frac{1}{n}\,\#\Bigl\{k : \frac{f(\sigma^k y)}{k} > \frac{a(y)}{n}\Bigr\} < \infty.
\]

Since $\{A_i\}$ are disjoint, for any given number $a > 0$ and $y \in S$ we have
\[
\begin{aligned}
\sup_n \frac{1}{n}\,\#\Bigl\{k : \frac{f(\sigma^k y)}{k} > \frac{a}{n}\Bigr\}
&= \sup_n \frac{1}{n}\,\#\Bigl\{k : \frac{\sum_{i=1}^{\infty} 2^i\chi_{A_i}(\sigma^k y)}{k} > \frac{a}{n}\Bigr\}\\
&= \sup_n \frac{1}{n}\sum_{i=1}^{\infty}\#\Bigl\{k : \frac{2^i\chi_{A_i}(\sigma^k y)}{k} > \frac{a}{n}\Bigr\}
 = \sup_n \frac{1}{n}\sum_{i=1}^{\infty}\#\Bigl\{k : \chi_{A_i}(\sigma^k y) > \frac{ka}{n\,2^i}\Bigr\}\\
&\le \sup_n \sum_{i=1}^{\infty}\frac{1}{n}\sum_{k=1}^{[2^i n/a]}\chi_{A_i}(\sigma^k y)
 = \sup_n \sum_{i=1}^{\infty}\frac{2^i}{a}\cdot\frac{a}{2^i n}\sum_{k=1}^{[2^i n/a]}\chi_{A_i}(\sigma^k y)
 \le \frac{1}{a}\sum_{i=1}^{\infty} 2^i\chi_{A_i}^{*}(y),
\end{aligned}
\]
where $\chi_A^{*}(y) := \sup_n \frac{1}{n}\sum_{k=1}^{n}\chi_A(\sigma^k y)$.

Thus, if we prove $\sum_{i=1}^{\infty} 2^i\chi_{A_i}^{*}(y) < \infty$ a.e., then for a.e. $y$ the needed condition (4.1) will be satisfied, and we will have the a.s. convergence of the weighted i.i.d. sequences.

To finish the proof of the theorem, we show that $\bigl\|\sum_{i=1}^{\infty} 2^i\chi_{A_i}^{*}\bigr\|_1 < \infty$.

Let $p_i = 1 + \frac{1}{i} = \frac{i+1}{i}$. Then $\frac{1}{p_i} = \frac{i}{i+1} = 1 - \frac{1}{i+1}$, and $p_i/(p_i-1) \le 2i$. We use the dominated ergodic theorem (e.g., p. 52 in Krengel(14)) to obtain
\[
\Bigl\|\sum_{i=1}^{\infty} 2^i\chi_{A_i}^{*}\Bigr\|_1
= \sum_{i=1}^{\infty} 2^i\,\|\chi_{A_i}^{*}\|_1
\le \sum_{i=1}^{\infty} 2^i\,\|\chi_{A_i}^{*}\|_{p_i}
\le \sum_{i=1}^{\infty} 2^i\,\frac{p_i}{p_i-1}\,\|\chi_{A_i}\|_{p_i}
\le \sum_{i=1}^{\infty} 2^i\,\frac{p_i}{p_i-1}\,\mu(A_i)^{1/p_i}
\le \sum_{i=1}^{\infty} 2^i\,2i\,\mu(A_i)^{1-1/(i+1)}.
\]

Let $K = \{i : \mu(A_i) < 4^{-i}\}$. We will break the last sum into two pieces depending on whether or not $i \in K$.
\[
\sum_{i\notin K} i\,2^i\,\mu(A_i)^{1-1/(i+1)}
\le \sum_{i\notin K} i\,2^i\,\mu(A_i)\,\mu(A_i)^{-1/(i+1)}
\le \sum_{i\notin K} i\,2^i\,\mu(A_i)\,4^{i/(i+1)}
\le 4\sum_{i\notin K} i\,2^i\,\mu(A_i)
\le c\int_S f\log^{+}\!f\,d\mu < \infty.
\]
For the remaining terms, we have
\[
\sum_{i\in K} i\,2^i\,\mu(A_i)^{1-1/(i+1)}
\le \sum_{i\in K} i\,4^{i/2}\,4^{-i(1-1/(i+1))}
\le c\sum_{i\in K} i\,4^{-i/2+i/(i+1)} < \infty. \qquad\blacksquare
\]

Proposition 3.6 shows that Theorem 3.4 is false when $q=1$. The previous theorem raises the question whether in Theorem 3.4 it is possible to replace condition (3.1), with $q > 1$, by the weaker assumption
\[
\sup_n \frac{\sum_{k=1}^{n} |a_k|\log^{+}|a_k|}{n} < \infty.
\]

ACKNOWLEDGMENTS

We are grateful for the hospitality and support offered by North Dakota State University to the third author, and by the University of Minnesota and Ben-Gurion University to the last author. The last author's travel to Israel was partially supported by an NDSU grant from the VP of Academic Affairs. Research of J. Olsen partially supported by ND-EPSCoR through NSF Grant EPS-9874802.


REFERENCES

1. Aaronson, J. (1977). On the ergodic theory of non-integrable functions and infinite measure spaces. Israel J. Math. 27, 163–173.
2. Akcoglu, M. A. (1975). A pointwise ergodic theorem in Lp spaces. Canad. J. Math. 27, 1075–1082.
3. Assani, I. (1997). Strong laws for weighted sums of independent identically distributed random variables. Duke Math. J. 88, 217–246.
4. Assani, I. (1997). Convergence of the p-series for stationary sequences. New York J. Math. 3A, 15–30.
5. Assani, I. (1998). A weighted pointwise ergodic theorem. Ann. Inst. H. Poincaré Probab. Statist. 34, 139–150.
6. Chow, Y. S. (1965). Local convergence of martingales and the law of large numbers. Ann. Math. Stat. 36, 552–558.
7. Chow, Y. S., and Lai, T. L. (1973). Limiting behavior of weighted sums of independent random variables. Ann. Probab. 1, 810–824.
8. Chung, K. L. (1947). Note on some strong laws of large numbers. Amer. J. Math. 69, 189–192.
9. Cohen, G., and Lin, M. (2003). Laws of large numbers with rates and the one-sided ergodic Hilbert transform. Illinois J. Math. 47, 997–1031.
10. Cuzick, J. (1995). A strong law for weighted sums of i.i.d. random variables. J. Theoret. Probab. 8, 625–636; Erratum, J. Theoret. Probab. 14 (2001).
11. Derriennic, Y., and Lin, M. (2001). Fractional Poisson equations and ergodic theorems for fractional coboundaries. Israel J. Math. 123, 93–130.
12. Jamison, B., Orey, S., and Pruitt, W. (1965). Convergence of weighted averages of independent random variables. Z. Wahrsch. Verw. Gebiete 4, 40–44.
13. Jones, R. L. (1977). Inequalities for the ergodic maximal function. Studia Math. 60, 111–129.
14. Krengel, U. (1985). Ergodic Theorems, de Gruyter, Berlin.
15. Landers, D., and Rogge, L. (1997). Laws of large numbers for uncorrelated Cesaro uniformly integrable random variables. Sankhya Ser. A 59, 301–310.
16. Lin, M., Olsen, J., and Tempelman, A. (1999). On modulated ergodic theorems for Dunford–Schwartz operators. Illinois J. Math. 43, 542–567.
17. Marcinkiewicz, J., and Zygmund, A. (1937). Sur les fonctions indépendantes. Fund. Math. 29, 60–90.
18. Petersen, K. (1983). Ergodic Theory, Cambridge University Press, Cambridge.
19. Pruitt, W. (1966). Summability of independent random variables. J. Math. Mech. 15, 769–776.
20. Rudolph, D. (1994). A joinings proof of Bourgain's return time theorem. Ergodic Theory Dynam. Systems 14, 197–203.
21. Ryll-Nardzewski, C. (1975). Topics in Ergodic Theory, Springer Lecture Notes in Math., Vol. 472, pp. 131–156.
22. Sawyer, S. (1966). Maximal inequalities of weak type. Ann. Math. 84, 157–174.
23. Stout, W. (1974). Almost Sure Convergence, Academic Press, New York.
24. Tempelman, A. (1974). Ergodic theorems for amplitude modulated homogeneous random fields. Lithuanian J. Math. 14, 221–229 (in Russian; English translation (1975): Lith. Math. Transl. 14, 698–704).
