
DOCUMENTOS DE TRABAJO

BILTOKI

Facultad de Ciencias Económicas. Avda. Lehendakari Aguirre, 83

48015 BILBAO.

D.T. 2005.02

Semiparametric estimation in perturbed long memory series.

Josu Arteche

Documento de Trabajo BILTOKI DT2005.02. Edited by the Departamento de Economía Aplicada III (Econometría y Estadística) of the Universidad del País Vasco.

Depósito Legal No.: BI-1540-05. ISSN: 1134-8984

Semiparametric estimation in perturbed long memory series

J. Arteche∗†

University of the Basque Country

29th May 2005

Abstract

The estimation of the memory parameter in perturbed long memory series has recently attracted attention, motivated especially by the strong persistence of the volatility in many financial and economic time series and the use of Long Memory in Stochastic Volatility (LMSV) processes to model such behaviour. This paper discusses frequency domain semiparametric estimation of the memory parameter and proposes an extension of the log periodogram regression which explicitly accounts for the added noise, comparing it, asymptotically and in finite samples, with similar extant techniques. Contrary to the non linear log periodogram regression of Sun and Phillips (2003), we do not use a linear approximation of the logarithmic term which accounts for the added noise. A reduction of the asymptotic bias is achieved in this way, which makes possible a faster convergence in long memory signal plus noise series by permitting a larger bandwidth. Monte Carlo results confirm the bias reduction, but at the cost of a higher variability. An application to a series of returns of the Spanish Ibex35 stock index is finally included.

Keywords: long memory, stochastic volatility, semiparametric estimation.

∗Research supported by the University of the Basque Country grant 9/UPV 00038.321-13503/2001 and the Spanish Ministerio de Ciencia y Tecnología and FEDER grant BEC2003-02028.

†Corresponding address: Departamento de Econometría y Estadística; University of the Basque Country (UPV-EHU); Bilbao 48015, Spain; Tel: (34) 94 601 3852; Fax: (34) 94 601 3754; Email: josu.arteche@ehu.es.


1 Introduction

The estimation of the memory parameter in perturbed long memory processes has recently

received considerable attention motivated especially by the strong persistence found in the

volatility of many financial and economic time series. As an alternative to the different extensions of ARCH and GARCH processes, the Long Memory in Stochastic Volatility (LMSV) model has proved a useful tool to model such strongly persistent volatility. A logarithmic transformation of the squared series becomes a long memory process perturbed by an additive noise

where the long memory signal corresponds to the volatility of the original series. As a result, estimation of the memory parameter of the volatility component becomes a problem of estimation in a long memory signal plus noise model. Several estimation techniques have been proposed in this context (Harvey (1998), Breidt et al. (1998), Deo and Hurvich (2001),

Sun and Phillips (2003), Arteche (2004), Hurvich et al. (2005)).

The perturbed long memory series recently considered in the literature are of the form,

zt = µ + yt + ut (1)

where µ is a finite constant, ut is a weakly dependent process with a spectral density fu(λ)

that is continuous on [−π, π], bounded above and away from zero, and yt is a long memory

(LM) process characterized by a spectral density function satisfying

fy(λ) = Cλ−2d0(1 + O(λα)) as λ → 0 (2)

for a positive finite constant C, α ∈ [1, 2] and 0 < d_0 < 0.5. The LMSV model considers u_t a non normal white noise, but in the more general signal plus noise setting u_t can be a serially weakly dependent process, as in Sun and Phillips (2003) and Arteche (2004). The constant α is a

spectral smoothness parameter which determines the adequacy of the local specification of

the spectral density of yt at frequencies around the origin. The interval 1 ≤ α ≤ 2 covers

the most interesting situations. In standard parametric LM processes, such as the fractional ARIMA, α = 2, whereas α = 1 in the seasonal or cyclical long memory processes described in Arteche and Robinson (1999) if the long memory occurs at some frequency different

from 0. The condition of positive memory 0 < d0 < 0.5 is usually imposed when dealing

with frequency domain estimation in perturbed long memory processes and guarantees the

asymptotic equivalence between spectral densities of yt and zt. Otherwise the memory of zt


corresponds to that of the noise (d0 = 0). For ut uncorrelated with yt the spectral density

of zt is

f_z(λ) = f_y(λ) + f_u(λ) = Cλ^{−2d_0}(1 + O(λ^α)) + f_u(λ) ∼ Cλ^{−2d_0}( 1 + (f_u(0)/C) λ^{2d_0} + O(λ^α) )   (3)

as λ → 0 and zt inherits the memory properties of yt in the sense that both share the same

memory parameter d0. However the spectral smoothness parameter changes and for zt is

min(2d_0, α) = 2d_0.

The semiparametric estimators considered in this paper are based on the minimization

of some function of the difference between the periodogram and the local specification of the

spectral density in (3). The periodogram of z_t does not accurately approximate Cλ^{−2d_0}, and this causes a bias which carries over to the different estimators. This is discussed in Section

2. As a result estimation techniques have been proposed that consider explicitly the added

noise in the local specification of the spectral density of zt. They are described in Section

3. Section 4 proposes an estimator based on an extension of the log periodogram regression

and establishes its asymptotic properties. Section 5 compares the “optimal” bandwidths

defined as the minimizers of an approximation of the mean square error of the different

semiparametric estimators considered. The performance in finite sample perturbed LM

series is discussed in Section 6 by means of a Monte Carlo analysis. Section 7 shows an application to a series of returns of the Spanish Ibex35 stock index. Finally, Section 8 concludes. Technical

details are placed in the Appendix.

2 Periodogram and local specification of the spectral density

Define

I_zj = I_z(λ_j) = (2πn)^{−1} |∑_{t=1}^{n} z_t exp(−iλ_j t)|²

the periodogram of the series z_t, t = 1, ..., n, at Fourier frequency λ_j = 2πj/n. The properties of several semiparametric estimators of d_0 depend on the adequacy of the approximation

of the periodogram to the local specification of the spectral density. Hurvich and Beltrao

(1993), Robinson (1995a) and Arteche and Velasco (2005) in an asymmetric long memory

context, observed that the asymptotic relative bias of the periodogram produces the bias

typically encountered in semiparametric estimates of the memory parameters.
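As an illustration of this definition, the periodogram at the Fourier frequencies can be computed via the FFT. The following is a minimal Python sketch; the function name and the use of numpy are ours, not the paper's:

```python
# Minimal sketch: periodogram I_z(lambda_j) at Fourier frequencies
# lambda_j = 2*pi*j/n. Illustrative code, not from the paper.
import numpy as np

def periodogram(z):
    """Return (lambda_j, I_zj) for j = 1, ..., floor(n/2)."""
    z = np.asarray(z, dtype=float)
    n = z.size
    dft = np.fft.fft(z)           # |dft[j]| equals |sum_t z_t exp(-i*lambda_j*t)|
    I = np.abs(dft) ** 2 / (2.0 * np.pi * n)
    j = np.arange(1, n // 2 + 1)  # positive Fourier frequencies, excluding zero
    return 2.0 * np.pi * j / n, I[j]
```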


Deo and Hurvich (2001), Crato and Ray (2002) and Arteche (2004) detected that the

bias is quite severe in perturbed long memory series if the added noise is not explicitly

considered in the estimation. It is then relevant to analyze the asymptotic bias of Izj as

an approximation of the local specification of the spectral density when the added noise is

ignored.

Consider the following assumptions:

A.1: zt in (1) is a long memory signal plus noise process with yt an LM process with

spectral density function in (2) with d0 < 0.5 and ut is stationary with positive and bounded

continuous spectral density function fu(λ).

A.2: yt and ut are independent.

Theorem 1 Let z_t satisfy assumptions A.1 and A.2 and define

L_n(j) = E[ I_zj / (Cλ_j^{−2d_0}) ].

Then, considering j fixed:

L_n(j) = A_{1n}(j) + A_{2n}(j) + o(n^{−2d_0})

where

lim_{n→∞} A_{1n}(j) = ∫_{−∞}^{∞} ψ_j(λ) |λ/(2πj)|^{−2d_0} dλ

and

lim_{n→∞} n^{2d_0} A_{2n}(j) = ( f_u(0) / (C(2πj)^{−2d_0}) ) ∫_{−∞}^{∞} ψ_j(λ) dλ

where

ψ_j(λ) = (2/π) sin²(λ/2) / (2πj − λ)².

Remark 1: The influence of the added noise turns up in A2n(j) and is thus asymptotically

negligible if d0 > 0. However for finite n A2n(j) can be quite large if d0 is low and/or the long

run noise to signal ratio (nsr) fu(0)/C is large. This produces the high bias of traditional

semiparametric estimators which ignore the added noise in perturbed LM series and justify

the modifications recently proposed and described in the next section.

Remark 2: In the LMSV case f_u(0) = σ_ξ²/(2π). The influence of the noise is clear here: the larger the variance of the noise, the higher the relative bias of the periodogram.


Remark 3: When d0 < 0 the bias diverges as n increases. This result was expected since

the memory of zt corresponds in this case to the memory of the noise. Then Ln(j) diverges

because we normalize the periodogram by a quantity that goes to zero as n → ∞. As a

result the estimation of a negative memory parameter of zt is not straightforward as noted

by Deo and Hurvich (2001) and Arteche (2004).

Remark 4: When j = j(n) is a sequence of positive integers such that j/n → 0 as

n → ∞, a straightforward extension of Theorem 2 in Robinson (1995a) shows that under

assumptions A.1 and A.2

L_n(j) = 1 + O( (log j)/j + λ_j^{min(α, 2d_0)} )

noting that

f_z(λ_j) − Cλ_j^{−2d_0} = f_y(λ_j) + f_u(λ_j) − Cλ_j^{−2d_0}

and, by assumption A.1,

f_z(λ_j)/(Cλ_j^{−2d_0}) = 1 + O(λ_j^{min(α, 2d_0)}).

3 Semiparametric estimation of the memory parameter

Let d_0 be the true unknown memory parameter and d any admissible value, and consider hereafter the same notation for the rest of the parameters to be estimated. Robinson's (1995a) version of the log periodogram regression estimator (LPE), d_LPE, is based on the least squares regression

log I_zj = a + d(−2 log λ_j) + v_j,   j = 1, ..., m,

where m is the bandwidth, such that at least m^{−1} + mn^{−1} → 0 as n → ∞. The original regressor proposed by Geweke and Porter-Hudak (1983) was −2 log(2 sin(λ_j/2)) instead of −2 log λ_j, but both are asymptotically equivalent and the differences between using one or the other are minimal. The motivation of this estimator is the log linearization in (3), such that

log I_zj = a + d_0(−2 log λ_j) + U_zj + O(λ_j^{2d_0}),   j = 1, 2, ..., m,   (4)

where a = log C − c, c = 0.577216... is Euler's constant and U_zj = log(I_zj f_z^{−1}(λ_j)) + c. The bias of the least squares estimate of d_0 is dominated by the O(λ_j^{2d_0}) term, which is not explicitly considered in the regression, such that a negative bias of order O(λ_m^{2d_0}) arises which


can be quite severe if d_0 is low. Deo and Hurvich (2001) also show that √m(d_LPE − d_0) →_d N(0, π²/24) as long as m = κn^ς for ς < 4d_0/(4d_0 + 1), where κ is hereafter a generic positive constant which can be different in every case.
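For concreteness, here is a minimal Python sketch of this least squares regression, reusing the periodogram sketch from Section 2 (illustrative, not the paper's implementation):

```python
# Minimal sketch of the log periodogram regression estimator (LPE):
# OLS of log I_zj on -2*log(lambda_j) over j = 1, ..., m.
import numpy as np

def lpe(z, m):
    lam, I = periodogram(z)               # sketch from Section 2
    y = np.log(I[:m])
    X = np.column_stack([np.ones(m), -2.0 * np.log(lam[:m])])
    a_hat, d_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    return d_hat
```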

The main rival semiparametric estimator of the LPE is the local Whittle or Gaussian semiparametric estimator (GSE), d_GSE, proposed by Robinson (1995b) and defined as the minimizer of

R(d) = log C(d) − (2d/m) ∑_{j=1}^{m} log λ_j,   C(d) = (1/m) ∑_{j=1}^{m} λ_j^{2d} I_zj   (5)

over a compact set. This estimator has the computational disadvantage of requiring nonlinear optimization, but it is more efficient than the log periodogram regression; both share important affinities, as described in Robinson and Henry (2003). Again the bias can be approximated by a term of order O(λ_m^{2d_0}) which is caused by the added noise, and √m(d_GSE − d_0) →_d N(0, 1/4) as long as m = κn^ς for ς < 4d_0/(4d_0 + 1) (Arteche, 2004). As in the LPE, this bandwidth restriction limits quite seriously the rate of convergence of the estimators, especially if d_0 is low.
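A minimal Python sketch of the GSE, minimizing R(d) in (5) over a bounded interval; the interval and the optimizer choice are illustrative assumptions:

```python
# Minimal sketch of the Gaussian semiparametric (local Whittle) estimator.
import numpy as np
from scipy.optimize import minimize_scalar

def gse(z, m, d_bounds=(0.01, 0.99)):
    lam, I = periodogram(z)
    lam, I = lam[:m], I[:m]

    def R(d):
        # R(d) = log C(d) - (2d/m) sum_j log(lambda_j), C(d) = mean(lambda_j^{2d} I_zj)
        return np.log(np.mean(lam ** (2.0 * d) * I)) - 2.0 * d * np.mean(np.log(lam))

    return minimize_scalar(R, bounds=d_bounds, method="bounded").x
```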

In order to reduce the bias of the GSE, Hurvich et al. (2005), noting (3), suggested incorporating explicitly in the estimation procedure a βλ_j^{2d} term which accounts for the effect

of the added noise and proposed a modified Gaussian semiparametric estimator (MGSE)

defined as

(d_MGSE, β_MGSE) = arg min_{∆×Θ} R(d, β)   (6)

where Θ = [0, Θ_1], 0 < Θ_1 < ∞, ∆ = [∆_1, ∆_2], 0 < ∆_1 < ∆_2 < 1/2,

R(d, β) = log( (1/m) ∑_{j=1}^{m} λ_j^{2d} I_zj / (1 + βλ_j^{2d}) ) + (1/m) ∑_{j=1}^{m} log( λ_j^{−2d}(1 + βλ_j^{2d}) )

When u_t is iid(0, σ_u²) then f_u(λ) = σ_u²(2π)^{−1} and β_0 = σ_u²(2πC)^{−1}. The explicit consideration of the noise in the estimation relaxes the upper bound of the bandwidth such that √m(d_MGSE − d_0) →_d N(0, C_{d_0}/4) for C_{d_0} = 1 + (1 + 4d_0)/(4d_0²), as long as m = κn^ς for

ς < 2α/(2α + 1), which permits a larger m. When α = 2, as is typical in standard LM parametric models, d_MGSE achieves a rate of convergence arbitrarily close to n^{2/5}, which is the upper bound of the rate of convergence of d_GSE in the absence of additive noise. However, with an additive noise the best possible rate of convergence achieved by d_GSE is n^{2d_0/(4d_0+1)}. Regarding the bias of d_MGSE, it can be approximated by a term of order O(λ_m^α) instead of O(λ_m^{2d_0}), which is the order of the bias of d_GSE in the presence of an additive noise.
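A minimal Python sketch of the MGSE objective R(d, β) in (6); the numeric box standing in for ∆ × Θ and the starting values are illustrative assumptions:

```python
# Minimal sketch of the modified Gaussian semiparametric estimator (MGSE).
import numpy as np
from scipy.optimize import minimize

def mgse(z, m, x0=(0.25, 1.0)):
    lam, I = periodogram(z)
    lam, I = lam[:m], I[:m]

    def R(theta):
        d, beta = theta
        g = lam ** (-2.0 * d) * (1.0 + beta * lam ** (2.0 * d))  # local spectrum shape
        return np.log(np.mean(I / g)) + np.mean(np.log(g))

    res = minimize(R, x0=list(x0), method="L-BFGS-B",
                   bounds=[(0.01, 0.49), (0.0, 1e3)])  # stand-ins for Delta x Theta
    return res.x  # (d_MGSE, beta_MGSE)
```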

Sun and Phillips (2003) extended the log periodogram regression in a similar manner.

From (3),

log I_zj = log C − c + d_0(−2 log λ_j) + log( 1 + (f_u(λ_j)/C) λ_j^{2d_0} + O(λ_j^α) ) + U_zj

= log C − c + d_0(−2 log λ_j) + log( 1 + (f_u(0)/C) λ_j^{2d_0} ) + O(λ_j^α) + U_zj   (7)

= log C − c + d_0(−2 log λ_j) + (f_u(0)/C) λ_j^{2d_0} + O(λ_j^{α*}) + U_zj   (8)

where α* = min(4d_0, α). Noting (8), Sun and Phillips (2003) proposed the following non linear regression

log I_zj = a + d(−2 log λ_j) + βλ_j^{2d} + U_zj   (9)

for β_0 = f_u(0)/C, such that the non linear log periodogram regression estimator (NLPE) is defined as

(d_NLPE, β_NLPE) = arg min_{∆×Θ} ∑_{j=1}^{m} ( log* I_zj + d(2 log λ_j)* − β(λ_j^{2d})* )²   (10)

where for a general ξ_t we use the notation ξ_t* = ξ_t − ξ̄, ξ̄ = ∑ ξ_t/n. The bias of d_NLPE is of order O(λ_m^{α*}), which is largely produced by the O(λ_j^{α*}) term omitted in the regression in (9). Correspondingly, √m(d_NLPE − d_0) →_d N(0, (π²/24) C_{d_0}) as long as m = κn^ς for ς < 2α*/(2α* + 1).

Sun and Phillips (2003) consider the case α = 2, so that α* = 4d_0 and the behaviour of m is restricted to be O(n^{8d_0/(8d_0+1)}), with a bias of d_NLPE of order O(λ_m^{4d_0}). The upper bound of m in the NLPE is higher than in the standard LPE but lower than in the MGSE when α > 4d_0. This is caused by the approximation of the logarithmic expression in (7). This approach has been used by Andrews and Guggenberger (2003) in their bias reduced log periodogram regression in order to get a linear regression model. However, the regression model of Sun and Phillips (2003), although linear in β, is still non linear in d, and the linear approximation of the logarithmic expression does not imply a significant computational advantage. Instead, noting (7), we propose the following non linear regression model

log I_zj = a + d(−2 log λ_j) + log(1 + βλ_j^{2d}) + U_zj   (11)

which only leaves an O(λ_j^α) term out of explicit consideration. We call the estimator based on a nonlinear least squares regression of (11) the augmented log periodogram regression estimator (ALPE).


4 Augmented log periodogram regression

The augmented log periodogram regression estimator (ALPE) is defined as

(d_ALPE, β_ALPE) = arg min_{∆×Θ} Q(d, β)   (12)

under the constraint β ≥ 0, where

Q(d, β) = ∑_{j=1}^{m} ( log* I_zj + d(2 log λ_j)* − log*(1 + βλ_j^{2d}) )²
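A minimal Python sketch of the ALPE in (12), with β constrained to be non negative; the box for d and the upper bound on β are illustrative (they loosely echo the bounds used later in the Monte Carlo section):

```python
# Minimal sketch of the augmented log periodogram regression estimator (ALPE):
# nonlinear least squares on the demeaned log periodogram regression (11).
import numpy as np
from scipy.optimize import minimize

def alpe(z, m, x0=(0.25, 1.0)):
    lam, I = periodogram(z)
    lam, I = lam[:m], I[:m]
    star = lambda v: v - v.mean()         # the * (demeaning) operator

    def Q(theta):
        d, beta = theta
        resid = (star(np.log(I)) + d * star(2.0 * np.log(lam))
                 - star(np.log(1.0 + beta * lam ** (2.0 * d))))
        return np.sum(resid ** 2)

    res = minimize(Q, x0=list(x0), method="L-BFGS-B",
                   bounds=[(0.01, 0.99), (0.0, np.exp(8))])  # beta >= 0
    return res.x  # (d_ALPE, beta_ALPE)
```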

Consider the following assumptions:

B.1: yt and ut are independent covariance stationary Gaussian processes.

B.2: When var(ut) > 0, fu(λ) is continuous on [−π, π], bounded above and away from

zero with bounded first derivative in a neighbourhood of zero.

B.3: The spectral density of yt satisfies

f_y(λ) = Cλ^{−2d_0}(1 + Gλ^α + O(λ^{α+ι}))

for some ι > 0, finite positive C, finite G, 0 < d_0 < 0.5 and α ∈ (4d_0, 2] ∩ [1, 2].

Assumption B.1 excludes LMSV models where ut is not Gaussian but a log chi-square.

We impose B.1 for simplicity and to directly compare our results with those in Sun and

Phillips (2003). Considering recent results, Gaussianity of the signal and noise could be relaxed.

The hypothesis of Gaussianity of yt could be weakened as in Velasco (2000) and LMSV could

also be allowed as in Deo and Hurvich (2001). Assumption B.2 restricts the behaviour of

ut as in Assumption 1 in Sun and Phillips (2003). Assumption B.3 imposes a particular

spectral behaviour of yt around zero relaxing Assumption 2 in Sun and Phillips (2003). As

in Henry and Robinson (1996) this local specification permits to obtain the leading part of

the asymptotic bias of dALPE in terms of G. We restrict our analysis to the case α > 4d0

where the ALPE achieves a lower bias and higher asymptotic efficiency than the NLPE by

permitting a larger m. When α ≤ 4d0 the ALPE and the NLPE share the same asymptotic

distribution with the same upper bound of m. In the standard fractional ARIMA process

considered in Sun and Phillips (2003) α = 2 > 4d0.

Theorem 2 Under assumptions B.1-B.3, as n → ∞, d_ALPE − d_0 = o_p(1) if 1/m + m/n → 0, and d_ALPE − d_0 = O_p((m/n)^{2d_0}), β_ALPE − β_0 = o_p(1) if m/n + n^{4d_0(1+δ)}/m^{4d_0(1+δ)+1} → 0 for some arbitrarily small δ > 0.


This is the same result as the consistency of the NLPE in Theorem 2 in Sun and Phillips

(2003) and can be proved similarly noting that

(1/m) Q(d, β) = (1/m) ∑_{j=1}^{m} { U_zj* + V_j* + O(λ_j^α) }²

for V_j* = V_j*(d, β) = V_j(d, β) − V̄(d, β), V_j(d, β) = 2(d − d_0) log λ_j + log(1 + β_0λ_j^{2d_0}) − log(1 + βλ_j^{2d}), and that log(1 + βλ_j^{2d}) = βλ_j^{2d} + O(λ_j^{4d}) for (d, β) ∈ ∆ × Θ.

The main difference of the ALPE with respect to the NLPE lies in the asymptotic distribution, particularly in the term responsible for the asymptotic bias. The first order conditions of the minimization problem are

S(d, β) = (0, Λ)′,   Λβ = 0

where Λ is the Lagrange multiplier pertaining to the constraint β ≥ 0 and

S(d, β) = ∑_{j=1}^{m} ( x_1j*(d, β), x_2j*(d, β) )′ W_j(d, β)

with

x_1j(d, β) = 2( 1 − βλ_j^{2d}/(1 + βλ_j^{2d}) ) log λ_j,

x_2j(d, β) = −λ_j^{2d}/(1 + βλ_j^{2d}),

W_j(d, β) = log* I_zj + d(2 log λ_j)* − log*(1 + βλ_j^{2d})

The Hessian matrix H(d, β) has elements

H_11(d, β) = ∑_{j=1}^{m} (x_1j*)² − 4β ∑_{j=1}^{m} W_j (log λ_j)² λ_j^{2d}/(1 + βλ_j^{2d})²

H_12(d, β) = ∑_{j=1}^{m} x_1j* x_2j* − 2 ∑_{j=1}^{m} W_j (log λ_j) λ_j^{2d}/(1 + βλ_j^{2d})²

H_22(d, β) = ∑_{j=1}^{m} (x_2j*)² + ∑_{j=1}^{m} W_j λ_j^{4d}/(1 + βλ_j^{2d})²

Define D_n = diag(√m, λ_m^{2d_0}√m) and the matrix

Ω = ( 4                   −4d_0/(2d_0+1)²
      −4d_0/(2d_0+1)²     4d_0²/((4d_0+1)(2d_0+1)²) ),

9

and consider the following assumptions:

B.4: d_0 is an interior point of ∆ and 0 ≤ β_0 < Θ_1.

B.5: As n → ∞, m^{α+0.5}/n^α → K for some positive constant K.

Whether the series is perturbed or not is not known beforehand. It is then interesting to consider not only the case var(u_t) > 0 but also the no added noise case, var(u_t) = 0, and to analyze the behaviour of the ALPE in both situations.

Theorem 3 Let z_t in (1) satisfy assumptions B.1-B.4 and let m satisfy B.5. Then as n → ∞

a) If var(u_t) > 0,

D_n ( d_ALPE − d_0, β_ALPE − β_0 )′ →_d N( Ω^{−1}b, (π²/6) Ω^{−1} )

b) If var(u_t) = 0,

√m(d_ALPE − d_0) →_d −(Ω^{11}η_1 + Ω^{12}η_2) 1{Ω^{12}η_1 + Ω^{22}η_2 ≤ 0} − Ω_11^{−1}η_1 1{Ω^{12}η_1 + Ω^{22}η_2 > 0}

√m λ_m^{2d_0}(β_ALPE − β_0) →_d −(Ω^{12}η_1 + Ω^{22}η_2) 1{Ω^{12}η_1 + Ω^{22}η_2 ≤ 0}

where (Ω^{ij}) = Ω^{−1}, η = (η_1, η_2)′ ∼ N(−b, π²Ω/6), 1{·} denotes the indicator function and

b = (2π)^α K · 2 ( −α/(1+α)², αd_0/((2d_0+α+1)(2d_0+1)(1+α)) )′ G.

Sun and Phillips (2003) consider y_t = (1 − L)^{−d_0}w_t with a weakly dependent w_t, such that f_z(λ) = (2 sin(λ/2))^{−2d_0}( f_w(λ) + (2 sin(λ/2))^{2d_0} f_u(λ) ), and then α = 2, C = f_w(0), β_0 = f_u(0)/f_w(0) and G = (d_0/6 + f_w′′(0)/f_w(0))/2. Whereas in Sun and Phillips (2003) the term leading the asymptotic bias, b, is different when var(u_t) = 0 and when var(u_t) > 0, we do not need to discriminate between both situations, and in both cases the asymptotic bias is of the same order. To eliminate this bias we have to choose a bandwidth of order o(n^{α/(α+0.5)}) instead of that in assumption B.5.

When var(u_t) > 0 the asymptotic bias of (d_ALPE, β_ALPE) can be approximated by

D_n^{−1}Ω^{−1}b_n = D_n^{−1}Ω^{−1} √m λ_m^α · 2 ( −α/(1+α)², αd_0/((2d_0+α+1)(2d_0+1)(1+α)) )′ G

= ( λ_m^α α(2d_0+1)G / (4d_0(1+α)²(2d_0+α+1)) ) ( α − 2d_0, λ_m^{−2d_0}(2d_0+1)(4d_0+1)α/d_0 )′


which for the processes considered in Sun and Phillips (2003) corresponds to the result in their Remark 2, but with the b_n of their σ_u = 0 case and correcting the rate of convergence in the asymptotic bias of β_NLPE and the f_w(0)²/f_u(0)² term, which should be f_u(0)²/f_w(0)² in their formula (48). The asymptotic bias of d_ALPE can then be approximated by

ABias(d_ALPE) = (m/n)^α K_0   where   K_0 = (2π)^α α(2d_0+1)(α − 2d_0)G / (4d_0(1+α)²(2d_0+α+1))

In contrast to the LPE and NLPE, d_ALPE has a positive asymptotic bias which decreases with d_0. The asymptotic variance is

AVar(d_ALPE) = (π²/(24m)) C_{d_0}

and consequently the asymptotic mean squared error can be approximated by

AMSE(d_ALPE) = (π²/(24m)) C_{d_0} + (m/n)^{2α} K_0².

5 Comparing “optimal” bandwidths

The choice of the bandwidth is crucial to obtain reliable semiparametric memory parameter estimates. Too large a choice of m can induce a large bias, whereas too small an m generates a high variability of the estimates. An optimal choice of m is usually obtained by minimizing an approximate form of the mean square error (MSE). In this section we compare the optimal bandwidths obtained in this way for the estimators considered above in the long memory signal plus noise process characterized by assumptions B.1-B.3 with σ_u² > 0.

By Sun and Phillips (2003, Theorem 1), the asymptotic bias of d_LPE can be approximated by

ABias(d_LPE) = −( β_0 d_0/(2d_0 + 1)² ) λ_m^{2d_0}   (13)

and considering the asymptotic variance π²/(24m), the bandwidth that minimizes the approximate MSE is

m_LPE^opt = [ (π²/24) (2d_0 + 1)⁴ / ((2π)^{4d_0} β_0² 4d_0³) ]^{1/(4d_0+1)} n^{4d_0/(4d_0+1)}   (14)

Using similar arguments to those employed by Henry and Robinson (1996) it is easy to show that the asymptotic bias of d_GSE can also be approximated by (13). In consequence the optimal bandwidth is (Arteche, 2004)

m_GSE^opt = [ (1/4) (2d_0 + 1)⁴ / ((2π)^{4d_0} β_0² 4d_0³) ]^{1/(4d_0+1)} n^{4d_0/(4d_0+1)} = ( AVar(d_GSE)/AVar(d_LPE) )^{1/(4d_0+1)} m_LPE^opt   (15)


and since AVar(d_GSE) < AVar(d_LPE), m_GSE^opt < m_LPE^opt.

Similarly, the optimal bandwidth of the NLPE is given by Sun and Phillips (2003):

m_NLPE^opt = [ π²C_{d_0}(4d_0 + 1)⁴(6d_0 + 1)² / (192 d_0³(2d_0 + 1)²(2π)^{8d_0} β_0⁴) ]^{1/(8d_0+1)} n^{8d_0/(8d_0+1)}   (16)

The ALPE shares the same asymptotic variance as the NLPE, but its lower order bias produces a higher optimal bandwidth. Minimizing AMSE(d_ALPE), the optimal bandwidth is

m_ALPE^opt = ( π²C_{d_0} / (48αK_0²) )^{1/(2α+1)} n^{2α/(2α+1)}.

The optimal bandwidth of the ALPE increases with n faster than m_NLPE^opt. Correspondingly, AMSE(d_ALPE) with m_ALPE^opt converges to zero at a rate n^{−2α/(2α+1)}, which is faster than the n^{−4d_0/(4d_0+1)} rate of d_LPE with m_LPE^opt and faster than the n^{−8d_0/(8d_0+1)} rate achieved by d_NLPE with m_NLPE^opt if α > 4d_0 (as in the usual α = 2 case).

The ALPE is comparable in terms of optimal bandwidth and bias with the MGSE. In fact, using arguments similar to those suggested by Henry and Robinson (1996), it is straightforward to show that the bias of d_MGSE can be approximated by that of d_ALPE,

ABias(d_MGSE) = ABias(d_ALPE) = (m/n)^α K_0

and then

m_MGSE^opt = ( AVar(d_MGSE)/AVar(d_ALPE) )^{1/(2α+1)} m_ALPE^opt = ( C_{d_0} / (8αK_0²) )^{1/(2α+1)} n^{2α/(2α+1)}.

Contrary to d_LPE, d_GSE and d_NLPE, the asymptotic biases of d_ALPE and d_MGSE do not depend on β_0, and consequently m_ALPE^opt and m_MGSE^opt are invariant to different values of the nsr f_u(0)/C.

6 Finite sample performance

Deo and Hurvich (2001), Crato and Ray (2002) and Arteche (2004) have shown that the bias of d_LPE and d_GSE in perturbed LM series is very high and increases considerably with m, especially when the nsr is large. Consequently a very low bandwidth should be used to get reliable estimates, at least in terms of bias. A substantial bias reduction is achieved by


including the added noise explicitly in the estimation procedure, as in d_NLPE, d_ALPE and d_MGSE. We compare the finite sample performance of these estimators in an LMSV model

z_t = y_t + u_t

with (1 − L)^{d_0} y_t = w_t and u_t = log ε_t², where ε_t and w_t are independent, ε_t is standard normal and w_t ∼ N(0, σ_w²) for σ_w² = 0.5, 0.1. We have chosen these low variances because they are close to the values that have been empirically found when an LMSV model is fitted to financial time series (e.g. Breidt et al. (1998), Perez and Ruiz (2001)). These values correspond

to long run nsr f_u(0)/f_w(0) = π², 5π². The first one is close to the ratios considered in

Deo and Hurvich (2001), Sun and Phillips (2003) and Hurvich and Ray (2003). The second

corresponds more closely to the values found in financial time series. We consider d_0 = 0.2, 0.45 and 0.8. For d_0 = 0.8 the process is not stationary, and d_0 is even larger than 0.75, so that the proof of the asymptotic normality of d_MGSE in Hurvich et al. (2005) does not apply. However, the estimators are expected to perform well as long as d_0 < 1 (Sun and Phillips, 2003). Also, since ε_t is standard normal, u_t is a log χ_1² and assumption B.1 does not hold. However, we consider it relevant to show that these estimators can be applied in LMSV models, which are an essential tool in the modelling of financial time series, and to justify in that way our conjecture that Gaussianity of the added noise is not necessary.
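A minimal Python sketch of this data generating process; the paper simulates y_t with arima.fracdiff.sim in SPlus 2000, and the truncated MA(∞) filter used here is a simple illustrative stand-in:

```python
# Minimal sketch of the LMSV design: z_t = y_t + u_t, (1-L)^{d0} y_t = w_t,
# u_t = log(eps_t^2) with eps_t standard normal.
import numpy as np

def simulate_lmsv(n, d0, sigma2_w, seed=None):
    rng = np.random.default_rng(seed)
    # MA(inf) weights of (1-L)^{-d0}: psi_0 = 1, psi_k = psi_{k-1} * (k-1+d0) / k
    k = np.arange(1, n)
    psi = np.concatenate(([1.0], np.cumprod((k - 1 + d0) / k)))
    w = rng.normal(0.0, np.sqrt(sigma2_w), size=2 * n)  # extra n draws as burn-in
    y = np.convolve(w, psi)[n:2 * n]                    # truncated fractional filter
    u = np.log(rng.standard_normal(n) ** 2)             # log chi-square(1) noise
    return y + u
```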

The Monte Carlo is carried out over 1000 replications in SPlus 2000, generating y_t with the option arima.fracdiff.sim; for the different non linear optimizations we use nlminb over 0.01 < d < 1 and exp(−20) < β < exp(8), providing the gradient and the hessian. We consider sample sizes n = 1024, 4096 and 8192, which are comparable with the size of many financial series and permit the exact use of the Fast Fourier Transform. For each sample size we take four different bandwidths, m = [n^0.4], [n^0.6], [n^0.8] and m_est^opt for est = LPE, NLPE, ALPE, GSE and MGSE, with the constraint 5 ≤ m_est^opt ≤ [n/2 − 1]. Table 1 displays m_est^opt for the different values of d_0, n and σ_w². The lower constraint binds for the LPE and GSE for low d_0 and/or σ_w², and also for the NLPE for d_0 = 0.2 and σ_w² = 0.1. The upper limit is applicable for the ALPE and MGSE with the lowest sample size. Note that m_ALPE^opt and m_MGSE^opt do not depend on the nsr.

TABLES 1 AND 2 ABOUT HERE

Table 2 shows the bias and MSE of the estimators across the models considered. The following conclusions can be drawn:


• The bias of the LPE and GSE is very high, especially for a large bandwidth and nsr.

The bias is clearly reduced by the estimation techniques which account for the added noise.

• In terms of bias the NLPE tends to be outperformed by the ALPE and MGSE, especially in the high nsr case. The bias of the ALPE and MGSE is less sensitive to different values of the nsr and more stable across bandwidths, while a large choice of m produces an extremely high bias of the NLPE. The NLPE tends to beat both the ALPE and the MGSE in terms of MSE for an appropriate choice of m and low values of d_0. In any other case d_ALPE and d_MGSE are better choices.

• Regarding the behaviour of the different estimators using the “optimal” bandwidth, the best performance in terms of MSE corresponds to the MGSE, which has the lowest MSE in 16 out of 18 cases, followed by the ALPE, which has lower MSE than the NLPE, GSE and LPE in 13 out of 18 cases. Only for d_0 = 0.2 and for d_0 = 0.45 with n = 1024 is the ALPE beaten by the LPE, GSE or NLPE. The situation for d_0 = 0.2 and n = 1024 deserves special mention, since here the LPE and GSE are the best choices. This was somewhat expected because for such a low value of d there is not much scope for bias, and the estimates are also constrained to be larger than 0.01, limiting the size of the bias. For d_0 = 0.45, 0.8 the MGSE and the ALPE have a lower MSE than the LPE, GSE and NLPE (only for d_0 = 0.45, n = 1024 and σ_w² = 0.5 does the NLPE have a lower MSE than the ALPE).

• The “optimal” bandwidth performs better than the other three bandwidths for the ALPE and MGSE, suggesting that a large m should be chosen. However, the NLPE tends to have lower MSE with m = [n^0.6] in those cases where [n^0.6] is larger than m_NLPE^opt, which occurs in every case when n = 1024, and for n = 4096 and n = 8192 except when d_0 = 0.8 and σ_w² = 0.5, suggesting that m_NLPE^opt tends to be undervalued.

We also compute the coverage probabilities of the nominal 90% confidence intervals obtained with the five estimators using the asymptotic normality of all of them (although this is not valid for d_0 = 0.8, we keep the normality assumption for comparative purposes). For each estimator we use two different standard errors. First we use the variance in the asymptotic distributions. For d_LPE and d_GSE these are π²/(24m) and 1/(4m). The rest of the estimators have asymptotic variances which depend on the unknown memory parameter d_0: (1 + 2d_0)²/(16d_0²m) for d_MGSE and π²(1 + 2d_0)²/(96d_0²m) for d_NLPE and d_ALPE. To get

feasible expressions we substitute the unknown d0 with the corresponding estimates. We

also use the finite sample hessian based approximations for the standard errors suggested by

Deo and Hurvich (2001), Hurvich and Ray (2003) and Sun and Phillips (2003). For d_LPE, d_GSE and d_ALPE these are

var(d_LPE) = (π²/24) [ ∑_{j=1}^{m} ( log λ_j − (1/m) ∑_{k=1}^{m} log λ_k )² ]^{−1}

var(d_GSE) = [ 4 ∑_{j=1}^{m} ( log λ_j − (1/m) ∑_{k=1}^{m} log λ_k )² ]^{−1}

var(d_ALPE) = SE_J + (SE_H − SE_J) I(H(d_ALPE, β_ALPE) > 0)

SE_H = (π²/6) H_22(d_ALPE, β_ALPE) / ( H_11(d_ALPE, β_ALPE) H_22(d_ALPE, β_ALPE) − H_12(d_ALPE, β_ALPE)² )

SE_J = (π²/6) J_{n,22}(d_ALPE, β_ALPE) / ( J_{n,11}(d_ALPE, β_ALPE) J_{n,22}(d_ALPE, β_ALPE) − J_{n,12}(d_ALPE, β_ALPE)² )

where I(H(d_ALPE, β_ALPE) > 0) = 1 if H(d_ALPE, β_ALPE) is positive definite and 0 otherwise, and J_n(d, β) is defined in the proof of Theorem 3. var(d_NLPE) is obtained similarly, as defined in formulae (60) and (61) in Sun and Phillips (2003). We have also tried using only SE_J; while this approach performs significantly worse for the NLPE, it renders only slightly worse ALPE confidence intervals for low m and n, and similar ones for large values of the bandwidth and sample size. var(d_MGSE) is defined in formula (16) in Hurvich and Ray (2003) (note that b_{1,0}^{−1} in their formula corresponds to β_0 in our notation), with the unknowns substituted with the corresponding estimates.
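A minimal Python sketch of these feasible standard errors (the regressor based var(d_LPE) above, and the asymptotic variances with d_0 replaced by an estimate); the function layout is ours:

```python
# Minimal sketch: feasible variances for the memory parameter estimates.
import numpy as np

def var_lpe_finite(n, m):
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    x = np.log(lam)
    return (np.pi ** 2 / 24.0) / np.sum((x - x.mean()) ** 2)

def var_asymptotic(estimator, m, d_hat=None):
    if estimator == "LPE":
        return np.pi ** 2 / (24.0 * m)
    if estimator == "GSE":
        return 1.0 / (4.0 * m)
    C_d = (1.0 + 2.0 * d_hat) ** 2 / (4.0 * d_hat ** 2)  # = 1 + (1+4d)/(4d^2)
    if estimator == "MGSE":
        return C_d / (4.0 * m)              # (1+2d)^2 / (16 d^2 m)
    return np.pi ** 2 * C_d / (24.0 * m)    # NLPE, ALPE: pi^2 (1+2d)^2 / (96 d^2 m)
```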

TABLES 3, 4 AND 5 ABOUT HERE

Tables 3, 4 and 5 display the coverage frequencies, mean and median lengths of the

90% Gaussian based confidence intervals on d0 = 0.2, 0.45 and 0.8 respectively, constructed

using the asymptotic variances with estimated d0 (Prob.A, Mean.A and Med.A) and the

finite sample hessian approximation (Prob.H, Mean.H and Med.H). The following comment

deserve particular attention:

• The coverage frequencies of the LPE and GSE are satisfactory only for a low bandwidth

but as m increases they go rapidly towards zero. Here mean and median lengths are equal because the approximations used for the standard errors do not depend on

estimates and do not vary across simulations. The finite sample approximation of the

standard error tends to give wider intervals and better (closer to the nominal 90%)

coverage frequencies.

• The NLPE has close to nominal coverage frequencies for d_0 = 0.2, but as d_0, n and m increase the frequencies go down, being close to zero in several situations (d_0 = 0.45, m = [n^0.8], n = 4096, 8192, and d_0 = 0.8, m = [n^0.8] for all n). For d_0 = 0.2 the finite sample approximation of the standard error tends to give narrower intervals and better coverage than the feasible asymptotic expression. However, as d_0 increases the situation changes, and for d_0 = 0.8 the asymptotic expression gives in many cases better coverage even with narrower intervals.

• For d0 = 0.2 the performance of the confidence intervals based on ALPE and MGSE

is quite poor with very wide intervals and with mean lengths much higher than the

median, especially for low m and n. This fact was also noted by Hurvich and Ray

(2003) and explained by the existence of outlying estimates of d0. The intervals based

on the finite sample approximation of the standard errors can be extremely wide,

especially with a large nsr, due to large variations in the estimated nsr that require

larger sample sizes and bandwidths to be accurately estimated. For higher values of

d0 and large n the ALPE and MGSE confidence intervals behave significantly better

when the finite sample approximation of the standard error is used. Overall the MGSE

confidence intervals tend to perform better than the intervals based on ALPE.

• Comparing the different estimators, there is no single one that outperforms the others in every situation, and the best choice depends on n, m, d_0 and the nsr. Overall the NLPE seems a good choice for low d_0 and n, but for values of d_0 close to the stationary limit or higher and a large sample size, the MGSE (and the ALPE) with the finite sample approximated standard error is a wiser choice.

7 Long memory in Ibex35 volatility

Many empirical papers have recently reported evidence of long memory in the volatility of financial time series such as asset returns. In this section we analyze the persistence of the volatility of a series of returns of the Spanish stock index Ibex35, composed of the 35 most actively traded stocks. The series covers the period 1-10-93 to 22-3-96 at a half-hourly frequency. The returns are constructed by first differencing the logarithm of the price of the last transaction every 30 minutes, omitting incomplete days. After this modification we get the series of intra-day returns x_t, t = 1, ..., 7260. We use as the proxy of the volatility the series y_t = log(x_t − x̄)², which corresponds to the volatility component in an LMSV model apart from an added noise. Arteche (2004) found evidence of long memory in y_t by means of the GSE and observed that the estimates decreased rapidly with the bandwidth, which could be explained by the increasing negative bias of the GSE found in LMSV models.
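A minimal Python sketch of this construction; `prices` is a hypothetical array of 30-minute transaction prices after removing incomplete days:

```python
# Minimal sketch: intra-day returns and the volatility proxy y_t = log((x_t - xbar)^2).
import numpy as np

def volatility_proxy(prices):
    x = np.diff(np.log(prices))           # half-hourly returns x_t
    # zero returns would give log(0); in practice they need special handling
    return np.log((x - x.mean()) ** 2)    # LM signal plus noise proxy y_t
```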

Figure 1 shows the LPE, GSE, NLPE, MGSE and ALPE for a grid of bandwidths

m = 25, ..., 300 together with the 95% confidence intervals obtained using both the feasible

asymptotic expression and the finite sample approximations of the standard errors described

in Section 6. We do not consider higher values of m to avoid the distorting influence of seasonality. To avoid the phenomenon of excessively wide intervals encountered in the Monte Carlo, we restrict the standard errors to be lower than an arbitrary value of 0.6, such that if a standard error exceeds that value we take the one calculated with the bandwidth increased by one. This situation only occurs with d_NLPE for m = 29, where the approximated standard error is 3.03. Both approximations of the standard errors provide similar intervals for the LPE and GSE, and for most of the bandwidths also for the NLPE. Only very low values of m lead to significantly different intervals. The situation is different for the MGSE

and ALPE where the finite sample approximations always give wider intervals, especially

for low values of m.

It is also observable that the LPE and GSE decrease with m faster than the other

estimates. This situation is more clearly displayed in Figure 2, which shows the five estimates for a grid of bandwidths m = 25, ..., 200. The LPE and GSE behave similarly, with a rapid decrease with m. This can be due to a large negative bias caused by some unaccounted for added noise. In this situation a sensible strategy is to estimate d by techniques that account for the added noise, such as the NLPE, ALPE or MGSE, because the large bias of the LPE and GSE can render these estimates meaningless. The NLPE remains high for a wider range of values of m but finally decreases for lower values of m than the MGSE and ALPE, which behave quite similarly. This is consistent with the asymptotic and finite


sample results described in the previous sections.

Finally, Figure 3 shows estimates and confidence intervals for m = 150, ..., 300. The GSE and LPE give strong support in favour of the stationarity of the volatility. However, the NLPE, ALPE and MGSE cast some doubt on it, at least at a 95% confidence level. Taking into account the results described in the previous sections, we should be cautious in concluding in favour of the stationarity of the volatility of this series of Ibex35 returns.

FIGURES 1, 2 AND 3 ABOUT HERE

8 Conclusion

The strong persistence of the volatility in many financial and economic time series and the use of LMSV models to capture such behaviour have motivated a recent interest in the estimation of the memory parameter in perturbed long memory series. The added noise gives rise to a negative bias in traditional estimators based on a local specification of the spectral density, which can be reduced by including the added noise explicitly in the estimation procedure, as in the NLPE and MGSE. We have proposed an additional log periodogram regression based estimator, the ALPE, whose properties are close to those of the MGSE, which seems the better option in a wide range of situations. In particular, both show a significant improvement in terms of bias, but at the cost of a larger finite sample variance than the NLPE for low values of d, bandwidth and sample size. However, for large sample sizes and high values of d the ALPE and MGSE perform significantly better than the NLPE, especially if the nsr is large, as is often the case in financial time series.

A Appendix: Technical details

Proof of Theorem 1: The proof is similar to that of Theorem 1 in Hurvich and Beltrao

(1993) (see also Theorem 1 in Arteche and Velasco (2005)). Write

L_n(j) = ∫_{−π}^{π} g_{nj}(λ) dλ   (A.1)

where

g_{nj}(λ) = K_n(λ_j − λ) f_z(λ)/(Cλ_j^{−2d_0}),   K_n(λ) = (1/(2πn)) |∑_{t=1}^{n} e^{itλ}|² = sin²(nλ/2)/(2πn sin²(λ/2))

and Fejér's kernel satisfies

K_n(λ) ≤ constant × min(n, n^{−1}λ^{−2})   (A.2)

From (A.2) the integral in (A.1) over [−π, −n^{−δ}] ∪ [n^{−δ}, π] for some δ ∈ (0, 0.5) is

O( n^{−1}|λ_j − n^{−δ}|^{−2} λ_j^{2d_0} ∫_{−π}^{π} f_z(λ) dλ ) = O(n^{−1} n^{2δ} n^{−2d_0}) = o(n^{−2d_0}).

The integral over (−n^{−δ}, n^{−δ}) is A_{1n}(j) + A_{2n}(j) where, after the change of variable λ → λ/n,

A_{1n}(j) = ∫_{−n^{1−δ}}^{n^{1−δ}} [ sin²((2πj − λ)/2) / (2πn² sin²((2πj − λ)/(2n))) ] f_y(λ/n)/(Cλ_j^{−2d_0}) dλ

A_{2n}(j) = ∫_{−n^{1−δ}}^{n^{1−δ}} [ sin²((2πj − λ)/2) / (2πn² sin²((2πj − λ)/(2n))) ] f_u(λ/n)/(Cλ_j^{−2d_0}) dλ

and the theorem is proved letting n go to ∞. □

Proof of Theorem 3: The theorem is proved as in Sun and Phillips (2003) noting that

x_1j(d, β) = 2( 1 − βλ_j^{2d}/(1 + βλ_j^{2d}) ) log λ_j = 2 log λ_j (1 − βλ_j^{2d}) + O(λ_j^{4d} log λ_j)   (A.3)

x_2j(d, β) = −λ_j^{2d}/(1 + βλ_j^{2d}) = −λ_j^{2d} + O(λ_j^{4d})   (A.4)

for (d, β) ∈ ∆ × Θ. This approximation leads to two main differences in the proof of the asymptotic normality. Noting the consistency of d_ALPE, the first one is related to the convergence of the Hessian matrix in Lemma 5 of Sun and Phillips (2003), in particular the proof of part a),

sup_{(d,β)∈Θ_n} ||D_n^{−1}(H(d, β) − J_n(d, β))D_n^{−1}|| = o_p(1)   (A.5)

where Θ_n = {(d, β) : |λ_m^{−d_0}(d − d_0)| < ε and |β − β_0| < ε} for ε > 0 arbitrarily small, and J_{n,ab}(d, β) = ∑_{j=1}^{m} x_aj* x_bj*, a, b = 1, 2. The proof that the (1,1), (1,2) and (2,1) elements of the left hand side are o(1) is as in Sun and Phillips (2003) noting (A.3) and (A.4). However the (2,2) element is not zero but

(λ_m^{−4d}/m) ∑_{j=1}^{m} W_j λ_j^{4d}/(1 + βλ_j^{2d})² = (1/m) ∑_{j=1}^{m} a_j*(d, β) W_1j(d, β)

where

a_j(d, β) = (j/m)^{4d}/(1 + βλ_j^{2d})²

W_1j(d, β) = V_j(d, β) + ε_j + U_zj

V_j(d, β) = 2(d − d_0) log λ_j + log(1 + β_0λ_j^{2d_0}) − log(1 + βλ_j^{2d})

ε_j = λ_j^α G/(1 + β_0λ_j^{2d_0}) + O(λ_j^{α+ι}).

Now

|a_j(d, β)| = O( (j/m)^{4d} ),   j = 1, 2, ..., m,

and |a_j(d, β) − a_{j−1}(d, β)| is bounded by

| (j/m)^{4d}/(1 + βλ_j^{2d})² − ([j−1]/m)^{4d}/(1 + βλ_j^{2d})² | + | ([j−1]/m)^{4d}/(1 + βλ_j^{2d})² − ([j−1]/m)^{4d}/(1 + βλ_{j−1}^{2d})² |

= | (j/m)^{4d} (1/(1 + βλ_j^{2d})²) [ 1 − ((j−1)/j)^{4d} ] | + | ([j−1]/m)^{4d} ( β²(λ_{j−1}^{4d} − λ_j^{4d}) + 2β(λ_{j−1}^{2d} − λ_j^{2d}) ) / ((1 + βλ_j^{2d})²(1 + βλ_{j−1}^{2d})²) |

= O( j^{4d−1}/m^{4d} )

since λ_{j−1}^a − λ_j^a = O(j^{−1}λ_j^a) for a ≠ 0. By Lemma 3 in Sun and Phillips (2003)

sup_{(d,β)∈Θ_n} | (1/m) ∑_{j=1}^{m} a_j*(d, β) U_zj | = O_p(1/√m) = o_p(1)

Also sup_{(d,β)∈Θ_n} | m^{−1} ∑_{j=1}^{m} a_j*(d, β) V_j(d, β) | is bounded by

sup_{(d,β)∈Θ_n} | (1/m) ∑_{j=1}^{m} a_j*(d, β) 2(d − d_0) log λ_j | + sup_{(d,β)∈Θ_n} | (1/m) ∑_{j=1}^{m} a_j*(d, β) log( (1 + β_0λ_j^{2d_0})/(1 + βλ_j^{2d}) ) |

= O( log λ_m sup_{(d,β)∈Θ_n} |d − d_0| ) + O( sup_{(d,β)∈Θ_n} λ_m^{2d} ) = o(1)

since a_j = O(1), and similarly

sup_{(d,β)∈Θ_n} | (1/m) ∑_{j=1}^{m} a_j*(d, β) ε_j | = O(λ_m^α) = o(1)

and (A.5) holds. With this result, the uniform convergence over Θ_n of D_n^{−1}H(d, β)D_n^{−1} to Ω follows as in Sun and Phillips (2003) noting (A.3) and (A.4).

The second difference with the NLPE lies in the bias term. Consider

D_n^{−1}S(d_0, β_0) = (1/√m) ∑_{j=1}^{m} B_j(U_zj + ε_j)

where B_j = ( x_1j*(d_0, β_0), λ_m^{−2d_0} x_2j*(d_0, β_0) )′. The asymptotic bias comes from m^{−1/2} ∑ B_jε_j such that

(1/√m) ∑_{j=1}^{m} x_1j*(d_0, β_0) ε_j = (2/√m) ∑_{j=1}^{m} [ (1 − β_0λ_j^{2d_0}) log λ_j ]* ( Gλ_j^α/(1 + β_0λ_j^{2d_0}) + O(λ_j^{α+ι}) ) + O(√m λ_m^{4d_0+α} log λ_m)

= (2/√m) ∑_{j=1}^{m} log* λ_j ( Gλ_j^α + O(λ_j^{2d_0+α}) ) + O(√m λ_m^{α+ι} log λ_m) + O(√m λ_m^{2d_0+α} log λ_m) + O(√m λ_m^{4d_0+α} log λ_m)

= (2G/√m) ∑_{j=1}^{m} ( log j − (1/m) ∑_k log k ) λ_j^α + o(√m λ_m^α)

= ( 2Gα/(1 + α)² ) √m λ_m^α (1 + o(1))

(λ_m^{−2d_0}/√m) ∑_{j=1}^{m} x_2j*(d_0, β_0) ε_j = −(λ_m^{−2d_0} G/√m) ∑_{j=1}^{m} ( λ_j^{2d_0} − (1/m) ∑_k λ_k^{2d_0} ) λ_j^α/(1 + β_0λ_j^{2d_0}) + O(√m λ_m^{α+ι})

= −( 2d_0αG/((2d_0 + α + 1)(2d_0 + 1)(1 + α)) ) λ_m^α √m (1 + o(1))

Then as n → ∞

D_n^{−1}S(d_0, β_0) + b_n = (1/√m) ∑_{j=1}^{m} B_jU_zj + o(1) →_d N( 0, (π²/6)Ω )

as in (A.34)-(A.37) in Sun and Phillips (2003), with minor modifications to adapt their proofs to our assumptions B.1-B.3. Since the rest of the proof relies heavily on Sun and Phillips (2003) and Robinson (1995a) we omit the details. The proof when var(u_t) = 0 follows as in Theorem 4 in Sun and Phillips (2003). □

References

[1] Andrews, D.W.K., Guggenberger, P., 2003. A bias-reduced log-periodogram regression

estimator for the long-memory parameter. Econometrica 71, 675-712.

[2] Arteche, J., 2004. Gaussian Semiparametric Estimation in Long Memory in Stochastic

Volatility and Signal Plus Noise Models. J. Econometrics 119, 131-154.

[3] Arteche, J., Robinson, P.M., 1999. Seasonal and cyclical long memory. In: Ghosh, S.

(Ed.), Asymptotics, Nonparametrics and Time Series, New York: Marcel Dekker, Inc.,

115-148.


[4] Arteche, J., Velasco, C., 2005. Trimming and tapering semiparametric estimates in

asymmetric long memory time series. Journal of Time Series Analysis, forthcoming.

[5] Crato, N., Ray, B.K., 2002. Semi-parametric smoothing estimators for long-memory

processes with added noise. J. Statist. Plann. Inference 105, 283-297.

[6] Breidt, F.J., Crato, N., de Lima, P., 1998. The Detection and Estimation of Long

Memory in Stochastic Volatility. J. Econometrics 83, 325-348.

[7] Deo, R.S., Hurvich, C.M., 2001. On the log periodogram regression estimator of the

memory parameter in long memory stochastic volatility models. Econometric Theory

17, 686-710.

[8] Geweke, J., Porter-Hudak, S., 1983. The estimation and application of long-memory

time series models. J. Time Ser. Anal. 4, 221-238.

[9] Harvey, A.C., 1998. Long memory in stochastic volatility. In: Knight, J., Satchell, S. (Eds.), Forecasting Volatility in Financial Markets, Oxford: Butterworth-Heinemann, 307-320.

[10] Henry, M., Robinson, P.M., 1996. Bandwidth choice in Gaussian semiparametric esti-

mation of long range dependence. In: Robinson, P.M., Rosenblatt, M. (Eds.), Athens

Conference on Applied Probability and Time Series, Vol. II. Lecture Notes in Statistics

115, New York: Springer-Verlag, 220-232.

[11] Hurvich, C.M., Beltrao, K.I., 1993. Asymptotics for the low-frequency ordinates of the

periodogram of long-memory time series. J. Time Ser. Anal. 14, 455-472.

[12] Hurvich, C.M., Deo, R., Brodsky, J., 1998. The mean squared error of Geweke and

Porter-Hudak’s estimator of the memory parameter in a long-memory time series. J.

Time Ser. Anal. 19, 19-46.

[13] Hurvich, C.M., Moulines, E., Soulier, P., 2005. Estimating long memory in volatility.

Econometrica, forthcoming.

[14] Hurvich, C.M., Ray, B.K., 2003. The Local Whittle Estimator of Long-Memory Stochastic Volatility. Journal of Financial Econometrics 1, 445-470.


[15] Perez, A., Ruiz, E., 2001. Finite Sample Properties of a QML Estimator of Stochastic

Volatility Models with Long Memory. Econ. Lett. 70, 157-164.

[16] Robinson, P.M., 1995a. Log-periodogram regression of time series with long-range de-

pendence. Ann. Statist. 23, 1048-1072.

[17] Robinson, P.M., 1995b. Gaussian semiparametric estimation of long-range dependence.

Ann. Statist. 23, 1630-1661.

[18] Robinson, P.M., Henry, M., 2003. Higher order kernel semiparametric M-estimation of

long memory. J. Econometrics 114, 1-27.

[19] Sun, Y., Phillips, P.C.B., 2003. Nonlinear log-periodogram regression for perturbed

fractional processes. J. Econometrics 115, 355-389.

[20] Velasco, C., 2000. Non Gaussian log-periodogram regression. Econometric Theory 16,

44-79.


Table 1: “Optimal” bandwidths

                     σ_w² = 0.5                         σ_w² = 0.1
n      d_0    LPE  GSE  NLPE  ALPE  MGSE       LPE  GSE  NLPE  ALPE  MGSE
       0.2      6    5    12   511   511         5    5     5   511   511
1024   0.45    13   11    29   511   502         5    5     7   511   502
       0.8     27   24    53   511   511        12   11    22   511   511
       0.2     12    9    29  1895  1715         5    5     5  1895  1715
4096   0.45    32   27    87  1681  1522        10    8    21  1681  1522
       0.8     79   70   177  2047  1936        36   32    74  2047  1936
       0.2     16   12    45  3299  2987         5    5     5  3299  2987
8192   0.45    51   42   149  2927  2650        16   13    36  2927  2650
       0.8    134  119   323  3723  3370        62   55   135  3723  3370


Table 2: Bias and MSE of the LPE, GSE, NLPE, ALPE and MGSE, for d_0 = 0.2, 0.45, 0.8, n = 1024, 4096, 8192 and σ_w² = 0.5, 0.1, with bandwidths m = [n^0.4], [n^0.6], [n^0.8] and m = m_est^opt. [The individual entries of this table are not legible in this extraction.]


Tab

le3:

90%

Con

fiden

ceIn

terv

als

(d0

=0.

2)m

=n0

.4m

=n0

.6m

=n0

.8m

=m

op

te

st

σ2 w

LPE

GSE

NLPE

ALPE

MG

SE

LPE

GSE

NLPE

ALPE

MG

SE

LPE

GSE

NLPE

ALPE

MG

SE

LPE

GSE

NLPE

ALPE

MG

SE

n=

1024

0.5

Pro

b.A

0.9

77

0.9

82

0.9

06

0.7

64

0.7

98

0.3

96

0.2

08

0.9

83

0.7

39

0.7

68

0.0

02

0.0

00

0.9

98

0.7

09

0.7

28

0.9

30

0.9

06

0.8

82

0.6

73

0.6

93

Mean.A

0.5

27

0.4

11

6.5

08

11.1

39.7

69

0.2

64

0.2

06

3.2

25

5.0

47

4.0

48

0.1

32

0.1

03

1.7

13

2.4

12

1.8

85

0.8

61

0.7

36

7.3

34

1.6

31

1.2

41

Med.A

0.5

27

0.4

11

1.4

51

2.1

68

3.0

50

0.2

64

0.2

06

1.0

13

1.2

83

1.4

63

0.1

32

0.1

03

0.7

48

0.8

98

0.8

88

0.8

61

0.7

36

1.5

92

0.5

13

0.4

70

Pro

b.H

0.9

94

0.9

93

0.9

24

0.9

85

1.0

00

0.4

76

0.2

54

0.9

47

0.9

64

1.0

00

0.0

03

0.0

00

0.8

86

0.9

66

0.9

98

0.9

86

0.9

80

0.9

04

0.9

74

0.9

[Continuation of the preceding table of Monte Carlo confidence-interval results; the individual cell values are not reliably recoverable from the source extraction. The recoverable structure matches Tables 4 and 5 below: rows Prob.A, Mean.A, Med.A, Prob.H, Mean.H and Med.H for the LPE, GSE, NLPE, ALPE and MGSE estimators at bandwidths m = n^0.4, n^0.6, n^0.8 and m = m_opt^est, for n = 4096 and n = 8192 with noise variances σ²_w = 0.5 and 0.1.]

Prob.A, Mean.A, Med.A denote coverage frequencies, mean lengths and median lengths of the nominal 90% confidence intervals with the asymptotic expression for standard errors with estimated parameters. Prob.H, Mean.H, Med.H denote coverage frequencies, mean lengths and median lengths of the nominal 90% confidence intervals with the finite sample Hessian based approximation of the standard errors with estimated parameters.
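For concreteness, the ".A" intervals are standard Wald intervals built from the estimate and a plug-in standard error. A minimal worked version, assuming the textbook asymptotic variances of the unperturbed log periodogram and Gaussian semiparametric estimators (the perturbed-series estimators NLPE, ALPE and MGSE have their own asymptotic variances, derived in the paper and not reproduced here):

\[
\hat d \;\pm\; z_{0.95}\,\widehat{\mathrm{se}}(\hat d), \qquad z_{0.95}\approx 1.645,
\]
\[
\widehat{\mathrm{se}}(\hat d_{LPE}) = \frac{\pi}{\sqrt{24\,m}}, \qquad
\widehat{\mathrm{se}}(\hat d_{GSE}) = \frac{1}{2\sqrt{m}}.
\]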


Table 4: 90% Confidence Intervals (d0 = 0.45)

Columns: σ²_w; then, for each bandwidth m = n^0.4, m = n^0.6, m = n^0.8 and m = m_opt^est, the estimators LPE, GSE, NLPE, ALPE and MGSE.

[Table body not reliably recoverable from the source extraction. For each sample size (n = 1024, 4096, 8192) and noise variance (σ²_w = 0.5, 0.1), the table reports the rows Prob.A, Mean.A, Med.A, Prob.H, Mean.H and Med.H defined in the footnote below.]

Prob.A, Mean.A, Med.A denote coverage frequencies, mean lengths and median lengths of the nominal 90% confidence intervals with the asymptotic expression for standard errors with estimated parameters. Prob.H, Mean.H, Med.H denote coverage frequencies, mean lengths and median lengths of the nominal 90% confidence intervals with the finite sample Hessian based approximation of the standard errors with estimated parameters.
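As a sketch of how these summaries are computed from Monte Carlo output (the function name and inputs are illustrative, not from the paper):

import numpy as np
from scipy.stats import norm

def ci_summary(d_hat, se_hat, d0, level=0.90):
    """Coverage frequency (Prob), mean length (Mean) and median
    length (Med) of nominal-level Wald intervals d_hat +/- z*se_hat,
    taken over Monte Carlo replications."""
    d_hat, se_hat = np.asarray(d_hat), np.asarray(se_hat)
    z = norm.ppf(0.5 + level / 2.0)    # ~1.645 for level = 0.90
    lo, hi = d_hat - z * se_hat, d_hat + z * se_hat
    covered = (lo <= d0) & (d0 <= hi)  # does the interval cover d0?
    length = hi - lo                   # = 2*z*se_hat per replication
    return covered.mean(), length.mean(), np.median(length)

The .A and .H variants differ only in which standard error, the asymptotic expression or the finite sample Hessian based approximation, is supplied as se_hat.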


Table 5: 90% Confidence Intervals (d0 = 0.8)

Columns: σ²_w; then, for each bandwidth m = n^0.4, m = n^0.6, m = n^0.8 and m = m_opt^est, the estimators LPE, GSE, NLPE, ALPE and MGSE.

[Table body not reliably recoverable from the source extraction. For each sample size (n = 1024, 4096, 8192) and noise variance (σ²_w = 0.5, 0.1), the table reports the rows Prob.A, Mean.A, Med.A, Prob.H, Mean.H and Med.H defined in the footnote below.]

Prob.A, Mean.A, Med.A denote coverage frequencies, mean lengths and median lengths of the nominal 90% confidence intervals with the asymptotic expression for standard errors with estimated parameters. Prob.H, Mean.H, Med.H denote coverage frequencies, mean lengths and median lengths of the nominal 90% confidence intervals with the finite sample Hessian based approximation of the standard errors with estimated parameters.


Figure 1: Estimates and CI(95%) of the memory parameter of volatility

[Plot residue removed; recoverable panel information:]
a) LPE and 95% confidence intervals
b) GSE and 95% confidence intervals
c) NLPE and 95% confidence intervals
d) MGSE and 95% confidence intervals
e) ALPE and 95% confidence intervals
Each panel plots the estimate against the bandwidth m (roughly 10 to 230), together with 95% confidence intervals based on the asymptotic variance and on the approximated variance.
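Plots of this type can be reproduced for the plain log periodogram estimator in a few lines; the following is a minimal sketch using the textbook GPH regression and its asymptotic variance π²/(24m), not the added-noise variants (NLPE, ALPE, MGSE) studied in the paper, and lpe_path is an illustrative name:

import numpy as np

def lpe_path(x, m_grid):
    """Log-periodogram (GPH-type) estimate of the memory parameter d
    over a grid of bandwidths m, with nominal 95% bands based on the
    asymptotic variance pi^2/(24 m)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # periodogram at the Fourier frequencies lambda_j = 2*pi*j/n
    I = np.abs(np.fft.fft(x - x.mean())) ** 2 / (2.0 * np.pi * n)
    rows = []
    for m in m_grid:
        j = np.arange(1, m + 1)
        lam = 2.0 * np.pi * j / n
        y = np.log(I[j])
        r = -2.0 * np.log(lam)          # regressor; its slope estimates d
        rc = r - r.mean()
        d_hat = rc @ (y - y.mean()) / (rc @ rc)
        se = np.pi / np.sqrt(24.0 * m)  # asymptotic standard error
        rows.append((m, d_hat, d_hat - 1.96 * se, d_hat + 1.96 * se))
    return np.array(rows)               # columns: m, d_hat, lower, upper

For instance, lpe_path(z, range(10, 231, 10)) traces the estimate and its band for the observed series z over m = 10, 20, ..., 230, as in panel a).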


Figure 2: Estimates of the memory parameter of volatility (m = 25, ..., 200)

[Plot residue removed; the figure overlays the LPE, GSE, NLPE, ALPE and MGSE estimates as functions of the bandwidth m.]


Figure 3: Estimates and CI(95%) of the memory parameter of volatility (m = 150, ..., 300)

[Plot residue removed; recoverable panel information:]
a) LPE and 95% confidence intervals
b) GSE and 95% confidence intervals
c) NLPE and 95% confidence intervals
d) MGSE and 95% confidence intervals
e) ALPE and 95% confidence intervals
