
Financial Econometrics, Fall 2011-2012

Professor Paulo Rodrigues; Grader Vladimir Otrachshenko

FORECASTING VOLATILITY: AN ANALYSIS OF FTSE NAREIT US REAL ESTATE INDEX

#420 Raquel Alexandra Dias; #427 Maria Helena Magro; #438 João Ramiro Santos

1. Introduction

Forecasting volatility accurately is very important in financial markets for investors, traders, risk managers and researchers. For that reason, since Engle (1982) first developed the Autoregressive Conditional Heteroscedasticity (ARCH) model, the number of published and working papers studying the forecasting performance of various volatility models has increased exponentially, reaching 692 to date [1].

Volatility is a statistical measure of the dispersion of returns for a given security or market index [2]. Volatility is a proxy for risk, not the risk itself.

Volatility is used in many financial and economic areas. For instance, it is used for pricing derivative securities: it shows the extent to which the returns of the underlying asset of an option will fluctuate between a given day and the option's expiration. It is used in financial risk management (mainly since the Basle Accord of 1996) and to calculate the reserve capital of banks and trading houses (generally at least three times the Value at Risk – VaR). It is important for policy makers, since there is clear evidence of a link between financial market uncertainty and public confidence. Finally, the Federal Reserve (USA) and the Bank of England (UK), among others, take into account the volatility of stocks, bonds, currencies and commodities when establishing their monetary policy [3].

[1] Federal Reserve – www.federalreserve.gov
[2] Investopedia.com

This paper aims not only to model the volatility of the FTSE NAREIT US Real Estate Index but also to provide a good instrument for forecasting purposes. In order to test the volatility model, we believe that the best approach is to perform an out-of-sample forecast.

Our purpose is to analyse this particular index, specifically its volatility. In our opinion, this is an important feature to be analysed, since volatility in the commercial property market was believed to be very low before the 2007/2008 subprime crisis, and its nature and magnitude have been the subject of much debate in the literature.

Through the empirical evidence gathered for the purposes of this analysis, we were able to build a volatility regression model that is able to forecast, and it is based on an EGARCH model.

[3] Forecasting Volatility in Financial Markets: A Review; Ser-Huang Poon and Clive W. J. Granger (2003)

In Section 2, we analyse some of the papers related to the volatility forecasting topic and present the main conclusions that can be drawn from them. Section 3 presents the data we have used to run our analysis, taken from the Bloomberg database. Section 4 introduces the base model we have constructed in order to analyse the volatility of this particular index. In Section 5 we present our empirical results, that is, the models used to forecast the volatility of our market index, namely the ARCH, GARCH and GARCH-extension approaches. In Section 6 we analyse the previous models and determine which one best forecasts the volatility under study by making an out-of-sample forecast. We finish in Section 7 with the conclusions of our work.

In the Appendix, the specific calculations and tables referred to throughout the paper can be found, as well as a table with the research papers we have consulted for this study.

2. Literature Review

The econometrics and finance literature is replete with studies comparing the various time-series models and their ability to forecast volatility. The existing models are mostly compared in terms of four attributes: (1) the relative weighting of recent versus older observations, (2) the estimation criterion, (3) the trade-off in out-of-sample forecasting error between simple and complex models, and (4) the emphasis placed on large shocks.

The main debate has been about which of the known models performs best. In one of the earliest papers, Akgiray (1989) demonstrates that the Generalized ARCH (GARCH) outperforms the Exponentially Weighted Moving Average (EWMA) and Historical Volatility (HIS) models. Later, Cao and Tsay (1993) as well as Heynen and Kat (1994) conclude that the Exponential GARCH (EGARCH) performs better for forecasting the volatility of small stocks and exchange rates. In 2001, Bond and Hwang studied the nature and measurement of volatility in the commercial property market in the UK, using a stochastic volatility model.

We also based our study on the paper by Poon and Granger (2003), which covers 93 research papers and reviews their methodology and empirical findings. They concluded that the GARCH model and its alternative versions (mainly EGARCH and GJR-GARCH) are the most used in these papers and perform better.

In addition, we also analysed the Yalama and Sevil (2008) paper, whose purpose was to employ seven different GARCH models (E-GARCH, PARCH, TARCH, IGARCH, C-GARCH, GARCH and GARCH-M) to forecast in-sample daily stock market volatility in 10 different countries. They found that asymmetric volatility models performed better in forecasting stock market volatility than the historical model.

Lastly, we also took into account the 2010 paper by Patton, which provides an analytical study of how "less sensitive" forecast loss functions can lead to incorrect inferences and to the selection of inferior forecasts over better ones, focusing on volatility forecasting.

In Table 2 (found in the Appendix), we summarize all the papers on which we based our work.

3. Data

The study covers 37 years (Jan 1972 – Dec 2008) of monthly data on the FTSE NAREIT US Real Estate Index's prices [4]. The FTSE NAREIT US Real Estate Index is designed to present investors with a comprehensive family of REIT performance indexes that spans the commercial real estate space across the US economy. The National Association of Real Estate Investment Trusts (NAREIT) is the worldwide representative voice for REITs and publicly traded real estate companies with an interest in U.S. real estate and capital markets. NAREIT's members are REITs and other businesses throughout the world that own, operate, and finance income-producing real estate, as well as those firms and individuals who advise, study, and service those businesses [5].

According to some empirical financial results, such as those from the Fama and French model (1988), short-term predictions are too noisy when compared with medium/long-term ones. Hence, our concern at the initial stage was the introduction of long-term explanatory variables that would allow us to reduce this noise effect. For this purpose, we decided to include the 10-Year US Government Bond yields, the US Gross Domestic Product Growth Rate and the Unemployment Growth Rate as regressors. The reasoning for doing so is fairly intuitive. First, we considered it important to include the T-Bond yields because, since they are a proxy for the minimum return the marginal investor is willing to receive for engaging in an adjusted/controlled-risk investment, or to pay for a limited and safe loan, we thought they could reasonably work as a "bottom line return". Second, we wanted to include the pro-cyclical US GDP Growth variable but, in order to obtain consistency along the model, US GDP Growth statistics (which are reported quarterly) needed to be given at a monthly frequency as well, and so we used a proxy – the Industrial Production Index, which is commonly used in empirical work for this same purpose [6]. The economic intuition behind it is that the US FTSE Index (the abbreviation used for the FTSE NAREIT US Real Estate Index) is positively correlated with the economy, so it is expected that some of the US GDP growth will affect the returns of our index. Third, the Unemployment Growth Rate is also a directly related and cyclical series, which we wanted to test for any explanatory power over the US FTSE returns patterns.

[4] Listed companies in Table 1 – Appendix
[5] http://www.ftse.com/Indices/FTSE_EPRA_NAREIT_Global_Real_Estate_Index_Series/index.jsp

All these data were taken from the Bloomberg platform and treated using the EViews 7.0 software package.

We turned both the FTSE US Index prices and the Industrial Production Index into returns, using the formula below:

Rt = log(Pt) - log(Pt-1),

where Pt is the close price at time t and Pt-1 is the close price at period t-1.

[6] As an example: Efficiency of Financial Intermediaries and Economic Growth in CEEC, Andrus Oks (2001)
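As an illustration only (the paper's own computations were done in EViews), a minimal sketch of this transformation in Python might look as follows; the file name and column labels are placeholders, not the authors' actual data set:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly data exported from Bloomberg; names are illustrative.
data = pd.read_csv("ftse_nareit_monthly.csv", index_col=0, parse_dates=True)

# R_t = log(P_t) - log(P_{t-1}) applied to the index and the Industrial Production proxy
returns = np.log(data["ftse_us_index"]).diff().dropna()
ip_growth = np.log(data["industrial_production"]).diff().dropna()

# First differences of the 10-year T-Bond yields (the yields' differential used later)
d_i10 = data["i10_yield"].diff().dropna()
```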

4. Methodology

As a first step in our research, we analyse each of the previously described variables, performing stationarity, autocorrelation and normality tests. Afterwards, we construct a base model with the significant variables and test it, resorting to ARMA, correlation and related tests, so that we obtain the best possible model for the returns of the US FTSE Index.

4.1. Dependent Variable – US FTSE Index

As this paper aims to study the volatility of the US FTSE Returns, we took the US FTSE Index as the raw input. For the purposes of this study, we considered it important to test each transformation of this dependent variable. This way we could better understand our limitations, the corrections that needed to be applied, and what we could infer.

Since prices themselves generally come with associated difficulties (such as their non-linearity), the level variable had to be the transformation of the US FTSE Index into price logarithms.

Its descriptive statistics are presented in Figure I, from which we can state that the series, made linear by the log transformation, has the expected upward slope (until the last moments, when it drops almost vertically). The essential tests (the same performed for the ARMA model) are presented in Figures II and III, from which we conclude that the level variable is robustly non-stationary at the 10% significance level considered (the KPSS conclusions confer robustness on the ADF conclusions, as they are in accordance) and that it has serial autocorrelation, meaning it has persistent memory, mainly in its first lag.

The same analysis was carried out for its first differences (which is the variable in focus in this paper). Regarding the US FTSE Returns (the first differences of the US FTSE prices, the variable characterized above), we perceive no trend in the graph and a great change in the distribution (though it is still not normally distributed). From the two stationarity tests, we conclude that we have enough statistical evidence to reject the null hypothesis of a unit root and that the series is stationary, i.e. it follows an I(0) process (which is in accordance with what is expected from the first differences of a non-stationary initial series). Also, there is no serial autocorrelation (despite the first lag being at the upper boundary).

Why did we perform these tests for stationarity and serial autocorrelation? Stationarity is important to understand the effects of events occurring in previous periods on the current period. And, if there is serial autocorrelation, inferences taken from the model might be biased.

The last test performed was to confirm that normality still does not apply to the distribution of the dependent variable – the US FTSE Returns.

With the dependent variable analysed and properly corrected, we proceeded with the construction and analysis of a base model – that is, a regression that most accurately models the volatility in this specific index – which afterwards allows us to build the best forecasting model.

4.2. Base Model

Before constructing this base model, we need to assess the quality of the data for each of the explanatory variables, as we are essentially concerned with the order of integration of each variable, that is, with the stationarity of the series. The apprehension about this issue has to do with the fact that not accounting for the presence of a unit root could lead to a spurious regression (a regression in which the independent variable has reasonable explanatory power despite not being related to the dependent variable), which could then mislead all the conclusions derived from the model, even though the model seems correct and well specified.

The stationarity testing was carried out in three steps: first, we computed some standard descriptive statistics in order to get some intuition about the behaviour of the series; second, we looked at the graph of each variable over time and the corresponding correlogram (more specifically, the autocorrelation function, with the aim of getting a sense of the memory of the series); and third, we applied two formal stationarity tests – the Augmented Dickey-Fuller (ADF) and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS). These two tests confer robustness on the conclusions drawn about stationarity. More concretely, the ADF test has as its null hypothesis the existence of a unit root, so that under the null the series is non-stationary, while, conversely, for the KPSS test the null is that the variable is indeed stationary. From their joint application, four outcomes can appear: in two of them, the tests agree on the (in)existence of stationarity; in the remaining two, their conclusions conflict (remaining inconclusive).
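As a hedged illustration of this third step (the paper itself used EViews), the two formal tests could be run in Python with statsmodels along the following lines; `data` and `returns` are the hypothetical objects from the earlier sketch:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

def stationarity_summary(series, name):
    # ADF: null hypothesis = unit root (non-stationary)
    adf_stat, adf_p, *_ = adfuller(series, regression="ct")
    # KPSS: null hypothesis = stationarity
    kpss_stat, kpss_p, *_ = kpss(series, regression="ct", nlags="auto")
    print(f"{name}: ADF {adf_stat:.3f} (p={adf_p:.3f}) | KPSS {kpss_stat:.3f} (p={kpss_p:.3f})")

# applied, for example, to a level series and to its first differences
stationarity_summary(np.log(data["ftse_us_index"]), "log US FTSE prices")
stationarity_summary(returns, "US FTSE returns")
```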

The linearity issue was addressed through the transformation of the series into logarithmic variables, as we did for the Industrial Production Index and the US FTSE Index prices. According to Table 3 in the Appendix, whenever the previous tests proved the existence of non-stationarity in the non-transformed level series, the series were in fact stationary in their first differences. Curiously, the logarithm of the US FTSE prices was already stationary in levels, as was the Unemployment Rate. Regarding the latter, we used the unemployment growth rate, since we are trying to explain the first differences of the prices of the US FTSE Index. (For a full description see Appendix Figures VIII to XXIII.)

Summarizing the results: given that both the ADF and KPSS tests confirmed the existence of non-stationarity in our original series, we transformed the non-linear and price-denominated series into logarithms and then took first differences (both of the logarithmic and of the rate-denominated series) in order to eliminate the unit root, which resulted in robustly stationary variables. Nevertheless, the use of differences implicitly eliminates any possible long-run relationship, since we were expecting that the variables would stabilize and their first differences would collapse to zero in the future: level variables are generally I(1), with the exception of the Unemployment Rate and the US FTSE Index Returns, and the first differences are all I(0).

To assess the performance of the explanatory variables, we developed the following contemporaneous equation:
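The equation itself did not survive the document conversion. A plausible reconstruction, based purely on the regressors described above (the notation is ours, not necessarily the authors'), is:

rt = b0 + b1*d_i10t + b2*d_ln(IPt) + b3*UnemGrowtht + ut,

where rt denotes the US FTSE returns, d_i10t the first differences of the 10-year T-Bond yields, d_ln(IPt) the log-differences of the Industrial Production Index, and UnemGrowtht the unemployment growth rate.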

The results presented in Table 4 - Appendix showed statistical evidence against our initial intuition: the parameters on the first differences of the Industrial Production Index and of the Unemployment Growth Rate turned out to be insignificant in explaining the US FTSE returns, meaning that they have low explanatory power and so should be disregarded in the next phases. We were thus left with the significant first differences of the T-Bond yields. The financial interpretation of the significant yield parameter in the returns regression is that an increase of 100 b.p. in the 10-year T-Bond yields' differential is expected to decrease the US FTSE Returns by approximately 0.03%, on average, ceteris paribus.

Having identified and excluded the non-significant variables, we regressed the US FTSE Returns again on the remaining significant variable, the 10-year T-Bond yields' differential, with the intention of understanding what is not captured separately but is instead cumulatively included in the residuals. Observing the regression's correlogram:

There are two important conclusions to be highlighted: first, according to the Autocorrelation Function (ACF) and the Partial Autocorrelation Function (PACF) (Figure XXIII - Appendix), there is a need to introduce at least three or four lags of the dependent variable and three lags of the error term. Therefore, it seemed that introducing an ARMA structure into the regression might help to explain the returns even further, capturing explainable residuals from the error term.

In order to choose the best ARMA model, we relied on four comparison criteria: (1) the adjusted R-squared, (2) the Akaike Information Criterion (AIC), (3) the Schwarz Information Criterion (SIC) and (4) the number of insignificant variables. The regression applied was then:

The results from this trial-and-error process are expressed in Table 5 – Appendix.
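For illustration only (the estimation in the paper was done in EViews), a sketch of this trial-and-error search over ARMA orders, ranking candidates by information criteria, could look as follows in Python; `returns` and the yield differential `d_i10` are the hypothetical objects from the earlier sketch and must share the same sample:

```python
import itertools
from statsmodels.tsa.arima.model import ARIMA

results = {}
for p, q in itertools.product(range(1, 5), range(1, 5)):
    # ARMA(p, q) in the mean, with the yield differential as an exogenous regressor
    fit = ARIMA(returns, exog=d_i10, order=(p, 0, q), trend="c").fit()
    results[(p, q)] = (fit.aic, fit.bic)

# lower AIC/BIC is better, as in the comparison reported in Table 5
for order, (aic, bic) in sorted(results.items(), key=lambda kv: kv[1][0]):
    print(order, round(aic, 3), round(bic, 3))
```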

In accordance with the information presented in the previous table, the ARMA(3,3) model is the most consensual one when considering all four criteria applied: it is the one with the lowest AIC (approximately -3.196), it has the third and fourth best adjusted R-squared and SIC, respectively, and it is also the only one without insignificant ARMA terms (just the constant, which is not removed from the model since it ensures a more consistent fit through the reduction of the slope's sensitivity, as the graph clearly shows an intercept). The number of lags is economically understandable, since a reasonable number of the companies composing the US FTSE Index report their earnings quarterly (so the results announcements of the last three months can actually impact the following period). Going even further back, a year of lags could be significant; however, it would be difficult to find economic reasons for it.

In conclusion, our Base Model is as follows:
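The Base Model equation is missing from this version of the document. Given the choices described above (ARMA(3,3) plus the yields' differential), a reconstruction in our own notation, which may differ in detail from the authors' exact specification, would be:

rt = c + b*d_i10t + phi1*rt-1 + phi2*rt-2 + phi3*rt-3 + theta1*ut-1 + theta2*ut-2 + theta3*ut-3 + ut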

In Table 6 – Appendix, the output of the Base Model can be seen. From it, we conclude that all the variables are significant and that the model as a whole has statistical and economic logic behind it: the impact of the yields is negative and small, and the magnitudes of the Autoregressive and Moving Average components are quite high.

4.3. Analysis of the Base-Model

Having tested and selected the Base Model, we are finally able to focus on the objective of this paper: modelling the volatility of the US FTSE Index Returns. As a first step, it was necessary to ensure the absence of serial correlation in the Base Model residuals, by performing the Breusch-Godfrey test with three lags of residuals. Table 7 - Appendix exhibits the results of this test: the LM statistic is reasonably small (assuming a value of 0.823) and lies in the null hypothesis region, so we do not reject the absence of serial correlation.
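A hedged sketch of this diagnostic in Python (statsmodels), using a simple OLS stand-in for the base model rather than the exact ARMA specification estimated in EViews:

```python
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# Illustrative OLS version of the base regression; 'returns' and 'd_i10' come
# from the earlier sketches and are assumed to be aligned on the same dates.
ols_res = sm.OLS(returns, sm.add_constant(d_i10)).fit()

# Breusch-Godfrey LM test with three lags of the residuals
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(ols_res, nlags=3)
print(f"LM = {lm_stat:.3f}, p-value = {lm_pval:.3f}")
```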

Furthermore, it was also necessary to test the previous regression for non-linearity, to reinforce the correct specification of the model, for which we applied the Ramsey RESET test through the following auxiliary equation, with b = 3:

Running the equation above, we obtained the results summarized in Table 8 - Appendix, from which we concluded that the Ramsey RESET test rejects its null hypothesis, since the log-likelihood ratio is higher than the chi-square critical value for three degrees of freedom (corresponding to the number of restrictions in the model).
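A minimal sketch of the auxiliary RESET regression, again on the simplified OLS stand-in (`ols_res` from the previous sketch) rather than the authors' exact EViews specification, with three added powers of the fitted values to match the three restrictions mentioned above:

```python
import numpy as np
import statsmodels.api as sm

fitted = ols_res.fittedvalues
X_aux = sm.add_constant(
    np.column_stack([d_i10, fitted ** 2, fitted ** 3, fitted ** 4])
)
aux_res = sm.OLS(returns, X_aux).fit()

# F-test that the coefficients on the added powers are jointly zero
f_value, p_value, df_diff = aux_res.compare_f_test(ols_res)
print(f"RESET F = {f_value:.3f}, p-value = {p_value:.3f}")
```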

Finally, it was necessary to make inferences about the normality of the model's residuals. The Autoregressive Conditional Heteroscedasticity (ARCH) family requires errors to be well behaved, which means that the errors must follow a Gaussian white noise. In order to perform a Jarque-Bera test for normality, it is necessary to check the errors' histogram. Figure XXIV - Appendix shows statistical evidence to reject the null hypothesis of normally distributed errors: the Jarque-Bera p-value close to zero rejects the null hypothesis of the joint test stating that the skewness and the excess kurtosis equal zero. Nonetheless, this particular problem does not in any circumstance affect our research and results, and therefore it was not corrected.
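For completeness, a sketch of the Jarque-Bera statistic on the (stand-in) model residuals in Python; the figure and p-value reported in the paper come from the EViews output, not from this code:

```python
from statsmodels.stats.stattools import jarque_bera

# H0: skewness = 0 and excess kurtosis = 0 (normally distributed errors)
jb_stat, jb_pval, skew, kurt = jarque_bera(ols_res.resid)
print(f"Jarque-Bera = {jb_stat:.2f}, p-value = {jb_pval:.4f}")
```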

Summing up, there was reasonable statistical evidence supporting the absence of serial correlation in the specified model. The application of Autoregressive Conditional Heteroscedasticity frameworks was due to two main reasons: first, the adjusted R-squared was relatively small (only 8.5%); and second, looking at Figure XXV - Appendix, we understood that much of the residuals' variability was still left to be explained.

5. Empirical Results

5.1. Modelling Volatility

The unconditional variance is the expected variance of the US FTSE Returns, without any restrictions. The conditional variance is the expected variance of the same variable but restricted to the information disclosed in prior periods [so we can state that the conditional variance is denoted by E(σ²t | information made available on the market up to the current period t)].

In order to draw inferences about US FTSE Index volatility forecasts, we used five models: the Autoregressive Conditional Heteroscedasticity (ARCH), the Generalized Autoregressive Conditional Heteroscedasticity (GARCH), the Exponential Generalized Autoregressive Conditional Heteroscedasticity (EGARCH), the Threshold Generalized Autoregressive Conditional Heteroscedasticity (TGARCH) and the Generalized Autoregressive Conditional Heteroscedasticity in Mean (GARCH-M). Intuitively, there would be no need to apply the two simplest models – ARCH and GARCH – since they do not take into account the differential impact of negative and positive shocks, which, for this specific case of the US FTSE Index, is important to consider because of the 2008 real estate bubble that changed the conditional volatility for this sector onwards (seen in Figure XXVI - Appendix). Moreover, the ARCH and GARCH models face statistical problems, the ones that boosted the development of GARCH's extensions (which are nevertheless still not able to correct all of the drawbacks of their base models). For instance, the ARCH model's order is decided on the basis of a trial-and-error methodology (even if we achieve a good model, we can never know whether it is the best fit to the data); also, the number of lags (the order) needed to capture all the conditional variance might be too large (so the model becomes non-parsimonious and better alternatives to it exist); finally, non-negativity constraints might be violated (as the number of parameters increases, it becomes more likely that some parameters will have negative estimated values). The ARCH model is falling into disuse, as the GARCH model already takes all of its terms into account [checking the GARCH(q,p) equation, it can be described as the ARCH(q) plus the sum of p lags of the conditional variance]. Although the GARCH model improved on some of the ARCH model's limitations (for instance, it decreased the probability of estimating negative parameters and it needs a smaller number of lags to capture the same effects, making the model more parsimonious), it still has some drawbacks: the violation of non-negativity conditions by the estimated model still occurs; the GARCH model does not account for leverage effects (despite this not being a relevant modelling restriction for the index studied in this paper), although it can account for volatility clustering (a concept noted by Mandelbrot to describe how "large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes") and leptokurtosis (the property of having a bell-shaped distribution but with fatter tails); moreover, it does not allow direct feedback between the conditional volatility and the conditional mean.

All these models assume well-behaved, normally distributed errors. If this is not true, it must be taken into account, for instance through Maximum Likelihood Estimation (MLE) with robust variance/covariance estimators, known as Quasi-Maximum Likelihood Estimation (QMLE). As stated previously, for this specific case the residuals are well behaved, based on the correlogram.

5.1.1. ARCH(q)

The ARCH model is the simplest model of the Autoregressive Conditional Heteroscedasticity family. The idea behind it is to model the variance of the error term using an autoregressive framework of a given order. The biggest contribution of the model is that some time-series variables can be explained not by changes in exogenous variables but rather by the volatility embodied in the series itself. This means that there is a non-linear relation between the original time series and the i.i.d. shocks underlying it. Yet, these models allow for the presence of linearity in modelling the mean.

The truth is that ARCH has not been widely used by practitioners or academics, essentially because applying this framework brings a set of difficulties. At the top, there are the non-negativity constraints, which might be violated especially when including many lags, and also the complexity of choosing the right number of lags. Still, the most important issue is that when modelling stock return volatility we must bear in mind that volatility is a persistent phenomenon; that is, if volatility is high today it is expected to remain high tomorrow, so to explain volatility we must include lagged volatility itself, which does not happen in ARCH.

This is the simplest model of conditional heteroscedasticity, the easiest to handle, and it gives us a first insight into the presence of ARCH effects within the residuals. It also takes care of clustered errors and non-linearities (which were to some extent "forced" through the use of logarithms, whose important and widely known function is to make variables linear and smoother-moving), jointly contributing to better forecasts. These factors were the reasons why we insisted on applying the ARCH model, despite knowing in advance that it would be almost immediately rejected by the realized evidence, for the reasons stated above.

Assuming that the Base Model residuals are well behaved but still heteroscedastic, we can set up a model with the aim of modelling the variance of the residuals. More specifically, it can be expressed by the following expression:

ARCH (q)
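The equation image is missing here. The standard ARCH(q) conditional variance, which is presumably what was displayed (written in our notation), is:

σ²t = a0 + a1*u²t-1 + … + aq*u²t-q,

where ut are the Base Model residuals and σ²t is their conditional variance.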

Table 9 - Attachments presents the results for several ARCH models. In view of those, we are able to conclude that the best model is the ARCH(2), essentially because it is the one that makes all the regressors of the base model statistically significant. This means that the unexplained variation of the US FTSE Returns from two months prior has explanatory power over the contemporaneous volatility. Once again, this emphasizes the persistent character of volatility.

Finally, in the next sections we discuss more complex models that enable us to overcome some of the problems mentioned, while at the same time performing an out-of-sample analysis.

5.1.2 GARCH(q,p)

The GARCH model is the simplest extension of the previously presented ARCH model. This is true since its expression is the ARCH equation with an additional term:

GARCH (q, p)
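Again the equation did not survive extraction; the standard GARCH(q,p) conditional variance, in our notation, is:

σ²t = a0 + a1*u²t-1 + … + aq*u²t-q + b1*σ²t-1 + … + bp*σ²t-p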

with ut being the residuals from the Base Model and σ²t the conditional variance (computed from the relevant past information). The difference from the previous model lies in the last term, which contains the optimal number p of lags of the dependent variable of the variance equation – the conditional variance itself.

This is the simplest model of the GARCH framework, and it corrects some of ARCH's problems – improving parsimony, reducing over-fitting and lowering the frequency of violations of the non-negativity constraints. Still, the remaining common flaws demanded that the simple GARCH model be refined through the extension of GARCH.

Bearing the theoretical background above in mind, we ran the GARCH model covering all lag combinations up to 6. We did not choose half a year for any particular reason; it was simply because some models kept improving up to the fifth lag, but beyond that all of them became worse. In order to select the best order combination for the GARCH model, we applied the Likelihood Ratio test (the higher the better, given the null and alternative hypotheses presented in Table 10 and Figure XXVII – Attachments) and the Akaike Information Criterion (the lower the better, extracted directly from the EViews output). Through the collection of these two determinants for each combination, given by the two tables in the same attachment, we could conclude that the GARCH(4,3) was the best one within the GARCH models:

GARCH (4; 3)
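As an illustration of estimating such a specification outside EViews, a minimal sketch with the Python `arch` package might be (this is not the authors' workflow, and the scaling to percentage returns is only a numerical convenience):

```python
from arch import arch_model

# GARCH(4,3) on the monthly returns from the earlier sketches, in percent
am = arch_model(100 * returns, mean="Constant", vol="GARCH", p=4, q=3, dist="normal")
garch_res = am.fit(disp="off")
print(garch_res.loglikelihood, garch_res.aic)
```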

This means that the variance is somewhat persistent over four lagged months; in other words, volatility remains smooth during the period in which new information from the companies is not publicly disclosed.

Even so, it turned some explanatory variables insignificant (four ARMA variables are now not significant), which might implicitly mean that it is not the model with the best fit to our data. We were already expecting this occurrence, as returns are unpredictably volatile and react more hysterically to negative than to positive shocks; together with GARCH's limitations, we intuitively thought we would need to use further extensions of the ARCH model, since this index suffered a particular, negative impact recently.

For the following GARCH model extensions, we assumed a normal error distribution. This might not be true, as it accounts for volatility clustering but somewhat ignores the leptokurtosis inherent in the series. Nevertheless, it plays an irrelevant role for our subject and conclusions.

5.1.3 EGARCH(q,p)

This asymmetric extension of the GARCH model takes into account the different impact of negative and positive shocks on the sector, and is given by:

EGARCH (q , p)
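The EGARCH specification shown in the original is missing from this version. One common way of writing the EGARCH(q,p) log-variance (our notation; the authors' exact parameterization may differ) is:

ln σ²t = w + Σ(j=1..p) bj*ln σ²t-j + Σ(i=1..q) [ ai*( |ut-i/σt-i| − E|ut-i/σt-i| ) + gi*(ut-i/σt-i) ],

where the gi terms capture the asymmetric (leverage) effect.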

Since it models the logarithm of the variance, the variance will always be positive, so there is no need to impose non-negativity constraints on the parameters, and asymmetries/leverage effects are allowed and accounted for in the asymmetry term (whenever it is positive, it reflects that positive shocks in the market generate less volatility than negative shocks of similar magnitude). In financial theory, this last phenomenon is referred to as overreaction by common investors, and it is related to their level of risk aversion and to their deviations from the characteristics assumed for rational and efficient investors (for instance, the preference for national assets rather than the best ones). However, if the numbers obtained were too small, we could have some problems with values tending to infinity.

Again, applying the same procedures sequentially explained for the GARCH, we concluded that the EGARCH(5,5) model was the best fit among all the other options tested:

EGARCH (5, 5)

The financial intuition behind its orders may be related to volatility adjustments to analysts' expectations or to other, more direct signals given to investors, generally disclosed before the firms' announcements, so that returns capture the prior five months.

Observing Table 11 and Figure XXVIII in the Attachments more carefully, we notice that four of the coefficients came out negative (moreover, the first one was strongly and significantly negative), which might be perceived as going against the non-negativity constraints on the parameters. However, there is no misunderstanding: EGARCH's parameters can be negative, meaning that EGARCH does not impose any non-negativity constraints, since it models the logarithmic variance and so the variance itself is always positive.

5.1.4 TGARCH(q,p)/ GJR Model(q,p)

This second extension of the GARCH model also accounts for asymmetries; it was likewise designed in a way that captures leverage effects between returns and volatility, although it does so differently, through a dummy variable, being specified by:

TGARCH (q, p)
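The TGARCH/GJR equation is likewise missing; the standard form (our notation) is:

σ²t = a0 + Σ(i=1..q) ( ai + gi*It-i )*u²t-i + Σ(j=1..p) bj*σ²t-j, with It-i = 1 if ut-i < 0 and It-i = 0 otherwise.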

Basically, according to a Journal of Finance and Economics article about the EGARCH and TGARCH models, "The leverage coefficients of the EGARCH model are directly applied to the actual innovations while the leverage coefficients of the GJR model can connect to the model through an indicator variable. For this case, if the asymmetric effect occurs, the leverage coefficients should be negative for the EGARCH model and positive for the GJR model", conferring robustness on each other's conclusions.

In this specification, the dummy is an indicator variable: it equals one if the shock that occurred in the market was negative (the lagged residual is below zero) and equals zero otherwise (thus annulling the last term of the equation above, when the shock that occurred in the market was positive, i.e. the lagged residual is above zero). This way, a negative shock weighs more, which is in accordance with financial theory: negative shocks have a higher impact on the economy than positive ones, even if they have the same magnitude – the overreaction concept explained for the EGARCH.

Running the same procedures from the last two models all over again, the best one (as we could conclude from the Likelihood Ratio and the Akaike Information Criterion in Table 12 and Figure XXIX) was the TGARCH(4,3):

TGARCH (4,3)

It has the same orders as the GARCH, so we can attribute the same reasons to it.

In order to apply the GARCH-in-Mean framework, it was necessary to define the best of the previously presented models to serve as its base model. Thus, the one with the highest likelihood ratio and the lowest Akaike Information Criterion – the EGARCH(5,5) – was used to test the GARCH-M.

5.1.5 EGARCH-M(q,p)

The GARCH-in-Mean type of model, initially proposed by Engle et al. (1987), differs from the previous ones not by changing the specification of the conditional variance, but by including it as an explanatory variable in our initial structural model. The idea underlying the use of the EGARCH-M is that investors should be rewarded with higher returns whenever they take additional risks. This financial reasoning resulted in the GARCH-M, as risk relies strongly on the volatility existing in the market, which is represented by:
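The in-mean equation did not survive extraction. A reconstruction of the GARCH-M mean equation in our own notation (the authors' version also contains the ARMA terms of the Base Model) is:

rt = c + b*d_i10t + δ*g(σ²t) + ut,

where g(σ²t) is the conditional variance itself, its square root (the conditional standard deviation) or its logarithm, depending on the in-mean specification chosen.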

If δ, the coefficient on the conditional variance term (assuming that the error term follows a normal distribution with zero mean and conditional variance), is positive and statistically significant, it means that an increase in the conditional variance increases the risk and therefore the return demanded by investors, raising the index's returns. Although intuitively this seems a good idea, in practice these types of models have revealed controversial findings. In our literature review we found several cases in which the risk-return relationship turned out to be negative – examples are Lie et al. (2005) and Guedhami and Sy (2005) – while others found no significant relation at all, such as Shin (2005) and Baillie and DeGennaro (1990). Consequently, at this point an economic rationale behind this particular finding must be drawn. Some authors argue that this negative risk-return relationship is explained by a simple understanding of the business cycle: namely, when the economy is at the peak of a business cycle, when typically expected returns are low, the better-than-habit consumption levels make investors more risk tolerant and thus they require a lower reward-to-risk ratio.

Despite the fact that the prior justification seems reasonable, another point of view regarding the negative risk-return trade-off exists, namely the one presented by Lanne and Saikkonen, who state that the negative parameter on the conditional variance (or on any other specification) is due to the non-exclusion of the constant term from the base model when the in-mean term is included. Accordingly, when applying the GARCH-M framework one should impose that the constant term is equal to zero. Theoretically this makes sense, since we do not expect returns to be explained by a constant; rather, for the next period one expects the previous value (referring to past values) plus some structure, in which the conditional variance can be included (in our case together with the yields' differential), and a random shock.

In this sense, there are various practical aspects to bear in mind when including the conditional variance modelled by an Autoregressive Conditional Heteroscedasticity framework as a regressor. Fundamentally, the model specification can be linear or logarithmic in the conditional variance. Therefore, three different regressors can be introduced into the base model: first, the conditional variance itself; second, the conditional standard deviation; and third, the logarithm of the conditional variance. In addition, there is the need to specify the underlying model, which in our case is given by the EGARCH, corresponding precisely to our best model, selected previously.

As previously stated, one has to choose the best EGARCH specification; in this sense, we ran the three specifications just described, which are presented in Tables 13, 14 and 15 - Attachments.

From the previous tables we are able to exclude the conditional variance approach at once (Table 13), because it is the only one that is not individually significant. A possible explanation could be that the level of the variance per se has no financial meaning – it is not very informative – as if we had a problem of scale between the variables. By performing the remaining regressions, we found that the conditional standard deviation and the logarithm of the conditional variance are indeed statistically significant. However, the parameters on these regressors are slightly negative, even though strongly statistically significant, meaning that we are back to the scenario presented above.

Again, in order to choose the best model, we applied some comparison criteria as before: namely, we looked at the log-likelihood, at the Akaike Information Criterion and at the number of insignificant variables in the structural model. In particular, given the wide range of tests performed, there was one fundamental condition that had to be met for the model to be accepted: that the parameter on the conditional variance be statistically significant. Otherwise, it would not make sense to use the model in the forecasts.

With this in mind, the conclusions are straightforward. For the inclusion of the conditional variance, the best model is the EGARCH-M(4,3), essentially because all the base model's parameters are significant and it scores best on the other criteria. As regards the logarithm of the conditional variance, the best fit was obtained by the EGARCH-M(3,4), the reasons being the best log-likelihood and AIC together with the lowest number of insignificant parameters. Finally, the best conditional standard deviation model is the EGARCH-M(3,3), because it had the best AIC and none of its parameters was statistically insignificant.

In Table 16 - Attachments we present a comparative analysis of the best fitted models, in order to arrive at the best in-mean specification. Accordingly, the overall best model and specification is the EGARCH-M(4,3), fundamentally because it had the best AIC of the whole analysis as well as the best log-likelihood, simultaneously with no insignificant parameters.

6. Forecasting Volatility

6.1. Out-of-Sample Analysis

As previously observed, the models we found that best fit the volatility are the ARCH(2), GARCH(4,3), EGARCH(5,5), TGARCH(4,3) and EGARCH-M(4,3). In order to see which one of them forecasts the volatility of the US FTSE Index most accurately, we performed an out-of-sample analysis concerning the period between January 2009 and November 2011. We chose this particular sample split because, since the estimation sample incorporates the 2008 mortgage crisis, it accounts for the crisis effects on volatility (as mentioned before, there is empirical financial research finding that "(…) As shown, stock market volatility displays a strong countercyclical pattern – peaking just before or during recessions and falling sharply late in recessions or early in recovery periods. (…)" [7]). Since we are currently living through a crisis again (the Euro crisis), we wanted to have a crisis effect in our model, so that it would better forecast the volatility for the period 2009–2011 and, most importantly, so that it could accurately forecast the volatility of this index for the next periods ahead.

[7] Stock Market Volatility: Reading the Meter, Hui Guo (2002), Economic Synopses No. 6

Out-of-sample forecast analysis is used by forecasters to determine whether a proposed leading indicator is potentially useful for forecasting a target variable. The steps for conducting an out-of-sample forecasting experiment are as follows:

1) Divide the available data on the target variable and the proposed leading indicator (both stationary) into two parts: the in-sample data set (roughly 80% of the data – in our particular case, and for the reason explained above, it amounted to ≈93%) and the out-of-sample data set (the remaining 20% of the entire data set – in our case ≈7%). As noted, this first step was already done.

2) Once the in-sample data set has been chosen, the competing forecasting models should be selected – these were already explained in Section 5. It is with these competing models (EGARCH, GARCH, TGARCH, GARCH-M and ARCH) that we are going to run an out-of-sample "horserace".

3) To run a horserace (i.e. a forecasting competition) between these models, we must "roll" each model through the out-of-sample data set one observation at a time, each time forecasting the target variable the chosen h periods ahead.

4) Now, to decide the winner of the horserace between the models, we must calculate the average loss associated with these various models, using the "standard" average loss functions: the Mean Squared Error (MSE), the Mean Absolute Error (MAE) and the Mean Absolute Percentage Error (MAPE):

In statistics, the mean squared error (MSE) of an estimator is one of many ways to quantify the difference between an estimator and the true value of the quantity being estimated. The MSE is a risk function, corresponding to the expected value of the squared error loss or quadratic loss. It measures the average of the squares of the "errors", the error being the amount by which the estimator differs from the quantity to be estimated. The difference occurs because of randomness or because the estimator does not account for information that could produce a more accurate estimate. The MSE is the second moment (about the origin) of the error, and thus incorporates both the variance of the estimator and its bias. For an unbiased estimator, the MSE is the variance. Like the variance, the MSE has the same units of measurement as the square of the quantity being estimated. In an analogy to the standard deviation, taking the square root of the MSE yields the root mean squared error (RMSE), which has the same units as the quantity being estimated; for an unbiased estimator, the RMSE is the square root of the variance, known as the standard error.

The MAE measures the average magnitude of the errors in a set of forecasts, without considering their direction; it measures accuracy for continuous variables. Expressed in words, the MAE is the average, over the verification sample, of the absolute values of the differences between the forecasts and the corresponding observations. The MAE is a linear score, which means that all the individual differences are weighted equally in the average.
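The loss-function formulas are missing from this version of the document; their standard definitions, in our notation (with σ²t the observed volatility proxy and σ̂²t the model forecast over the N out-of-sample months), are:

MSE = (1/N)·Σt (σ²t − σ̂²t)², MAE = (1/N)·Σt |σ²t − σ̂²t|, MAPE = (1/N)·Σt |(σ²t − σ̂²t)/σ²t|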

where N is the number of observations in the out-of-sample data, the observed ("realized") volatility is calculated through the squared differences of the returns, and the forecast volatility is the one produced by the model under analysis.

5) The forecasting method that has the smallest MAE and MSE average losses in the out-of-sample forecasting experiment is the superior forecasting method. If one forecasting method has a better MAE measure while the other has a better MSE measure, then you have a split decision. In that case, the only way to determine a winner between the competing forecasting models is to pick one of the average loss functions on which to base your choice, either the MAE or the MSE.
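A hedged sketch of such a rolling "horserace" for one of the candidate models, written with the Python `arch` package rather than the EViews procedure actually used in the paper (the split date and scaling are illustrative):

```python
import numpy as np
from arch import arch_model

# last in-sample observation: the final month up to December 2008
split = returns.index.get_loc(returns.index[returns.index <= "2008-12-31"][-1]) + 1
realized = (100 * returns) ** 2          # squared returns as the realized-volatility proxy

forecasts = []
for t in range(split, len(returns)):
    # re-estimate on the expanding window and forecast one month ahead
    res = arch_model(100 * returns.iloc[:t], mean="Constant",
                     vol="EGARCH", p=5, q=5).fit(disp="off")
    forecasts.append(res.forecast(horizon=1).variance.iloc[-1, 0])

forecasts = np.asarray(forecasts)
actual = realized.iloc[split:].to_numpy()
print("MSE:", np.mean((actual - forecasts) ** 2))
print("MAE:", np.mean(np.abs(actual - forecasts)))
```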

After performing the steps described above, we were able to conclude which of these models has the best and strongest forecasting power (see Table 17 in the Appendix). The one with both the lowest MSE and the lowest MAE is the EGARCH model, which is in line with our expectations and with other papers on the same subject of volatility forecasting.

6.2. After October 2011

Having chosen the model that most accurately forecasts the volatility of the FTSE NAREIT US Real Estate Index – that is, the E-GARCH(5,5) – and taking into account the forecasts of the future 10-year US Treasury bond yields and of the returns of the index under analysis for the next three months (November 2011, December 2011 and January 2012), we were able to forecast the volatility for the same period, as seen in the table below.

E-GARCH(5,5) volatility forecasts after October 2011:

2011M10  0.013876
2011M11  0.014090
2011M12  0.011810
2012M01  0.010485

As we can see, the volatility is expected to remain at around 1%. This low volatility can be explained by the intervention of the US Government in the mortgage agencies over the last two to three years, and by the fact that real estate investments still represent a lower risk than other investments such as stocks or options. (Real estate investments have a strong history of total return: over the 30-year period from 1977 to 2007, almost 80% of the total U.S. real estate return came from income flows. This helps bring down volatility, as investments that rely on income returns end up being less volatile than those relying heavily on capital value returns.) [8]

[8] http://www.comparebroker.com/blog/2011/10/20/is-it-wise-to-invest-in-real-estate-amid-economic-uncertainties/

7. Conclusion

We have compared the forecasting ability of several volatility models – the ARCH, GARCH, EGARCH, T-GARCH and EGARCH-M – focusing on four issues: (1) the proper weighting of older versus recent observations, (2) the relevance of the parameter estimation procedure, (3) the proper weighting of large return surprises, and (4) the effect of a recession on returns and volatility.

On the face of it, one could argue that the empirical evidence provided in this paper suggests that it is possible to produce a volatility model that can be used to forecast the volatility of the FTSE NAREIT US Real Estate Index. Moreover, we have found empirical evidence that the best model for this forecast is in fact the E-GARCH(5,5) volatility model, since it takes into consideration the negative shocks of crises such as the 2008 mortgage bubble and the current Euro crisis we are facing now – which, even though it is felt most strongly in Europe, has some effects on US real estate returns and volatility.

Our evidence is coherent with other papers, such as Chang Su (2010) [9], which also tested models to forecast volatility and concluded that the EGARCH model better accommodates the leverage effect, volatility persistence, fat tails and skewness.

[9] Application of EGARCH Model to Estimate Financial Volatility of Daily Returns: The Empirical Case of China; University of Gothenburg


APPENDIX

Companies Listed in the FTSE NAREIT US Real Estate Index – TABLE 1

A Acadia Realty Trust

Alexandria Real Estate Equities, Inc.

American Campus Communities, Inc.

American Realty Capital Healthcare Trust

American Realty Capital Properties, Inc.

American Tower Corporation

Anderson-Tully Company

Apartment Investment & Management Company

Apollo Residential Mortgage Inc.

Apple REIT Nine, Inc.

Apple REIT Six, Inc.

Archstone

Ashford Hospitality Trust, Inc.

AvalonBay Communities, Inc.


Agree Realty Corporation

American Assets Trust

American Capital Agency Corp.

American Realty Capital New York Recovery REIT, Inc.

American Realty Capital Trust

Americold Realty Trust

Annaly Capital Management, Inc.

Apollo Commercial RE Finance, Inc.

Apple REIT Eight, Inc.

Apple REIT Seven, Inc.

Arbor Realty Trust, Inc.

ARMOUR Residential REIT

Associated Estates Realty Corporation

B Behringer Harvard Multifamily REIT I

Behringer Harvard Opportunity REIT II

Berkshire Income Realty

Blackstone Real Estate Advisors

Boston Properties, Inc.

BRE Properties, Inc.

Brookfield Office Properties

Behringer Harvard Opportunity REIT I

Behringer Harvard REIT I, Inc.

BioMed Realty Trust, Inc.

Boardwalk REIT

Brandywine Realty Trust

Broadstone Net Lease, Inc.

C Camden Property Trust

Capital Trust, Inc.

Capstead Mortgage Corporation

Carey Watermark Investors Incorporated

Cedar Realty Trust, Inc.

Chesapeake Lodging Trust

CNL Lifestyle Properties, Inc.

Cole Credit Property Trust II, Inc.

Cole Credit Property Trust, Inc.

Colony Financial, Inc.

CoreSite Realty Corporation

Corporate Property Associates 15

Corporate Property Associates 17 - Global, Inc.

CREXUS Investment Corp.

CYS Investments, Inc.

Campus Crest Communities

CapLease, Inc.

Care Investment Trust, Inc.

CBL & Associates Properties, Inc.

Chatham Lodging Trust

Chimera Investment Corporation

Cogdell Spencer Inc.

Cole Credit Property Trust III, Inc.

Colonial Properties Trust

CommonWealth REIT

Corporate Office Properties Trust

Corporate Property Associates 16 - Global, Inc.

Cousins Properties Incorporated

CubeSmart L.P.

D DCT Industrial Trust Inc.

Derwent London Plc

Digital Realty

Duke Realty Corporation

Dynex Capital, Inc.

DDR Corp.

DiamondRock Hospitality Company

Dividend Capital Total Realty Trust Inc.

DuPont Fabros Technology, Inc.

E EastGroup Properties, Inc.


Electric Infrastructure Alliance of America, LLC

Equity Lifestyle Properties, Inc.

Equity Residential

Excel Trust, Inc.

Education Realty Trust, Inc.

Entertainment Properties Trust

Equity One, Inc.

Essex Property Trust, Inc.

Extra Space Storage, Inc.

F Fair Value REIT-AG

Federal Realty Investment Trust

First Industrial Realty Trust, Inc.

First REIT of New Jersey

Forest City Enterprises, Inc.

Federal Capital Partners

FelCor Lodging Trust Incorporated

First Potomac Realty Trust

Forest Capital Partners LLC

Franklin Street Properties Corp.

G Gables Residential Trust

Getty Realty Corp.

Glimcher Realty Trust

Global Income Trust, Inc.

Government Properties Income Trust

General Growth Properties, Inc.

Gladstone Commercial Corporation

Global Growth Trust, Inc.

Global Logistic Properties

Gramercy Capital Corp.

H Hammerson PLC

HCP, Inc.

Hersha Hospitality Trust

Hines Global REIT, Inc

Home Properties, Inc.

Host Hotels & Resorts, Inc.

Hatteras Financial Corp

Health Care REIT, Inc.

Highwoods Properties, Inc.

Hines Real Estate Investment Trust, Inc.

Hospitality Properties Trust

Hudson Pacific Properties, Inc.

I Independence Realty Trust

Inland American Real Estate Trust, Inc.

Inland Real Estate Corporation

INREIT Real Estate Investment Trust

Investors Real Estate Trust

Industrial Income Trust, Inc.

Inland Diversified Real Estate Trust, Inc.

Inland Western Retail Real Estate Trust, Inc.

Invesco Mortgage Capital Inc.

IStar Financial Inc.

J Japan Retail Fund Investment Corporation

K KBS Legacy Partners Apartment REIT, Inc.

KBS Real Estate Investment Trust II, Inc.

Kenedix Realty Investment Corporation

Kimco Realty Corporation

KBS Real Estate Investment Trust I, Inc.

KBS Strategic Opportunity REIT, Inc.

Kilroy Realty Corporation

Kite Realty Group Trust

L Land Securities Group PLC

Lexington Realty Trust

LTC Properties, Inc.

LaSalle Hotel Properties

Liberty Property Trust

M

MAA

Mack-Cali Realty Corporation

Medical Properties Trust Inc.

MHI Hospitality Corporation

MPG Office Trust, Inc.

Macerich

MCR Development LLC

MFA Financial, Inc.

Monmouth Real Estate Investment Corporation

N National Retail Properties, Inc.

Northstar Real Estate Income Trust, Inc.

Newcastle Investment Corporation

NorthStar Realty Finance Corporation

O Omega Healthcare Investors, Inc.

One Liberty Properties, Inc.

P Parkway Properties, Inc.

Pennsylvania Real Estate Investment Trust

Piedmont Office Realty Trust, Inc.

Post Properties, Inc.

Prologis, Inc.


Public Storage

Pebblebrook Hotel Trust

Phillips Edison - ARC Shopping Center REIT

Plum Creek Timber Company, Inc.

Potlatch Corporation

PS Business Parks, Inc.

R RAIT Financial Trust

Rayonier Inc.

Regency Centers Corporation

RioCan

RREEF America REIT II, Inc.

Ramco-Gershenson Properties Trust

Realty Income Corporation

Resource Capital Corp.

RLJ Lodging Trust

RREEF America REIT III, Inc.

S Sabra Health Care REIT, Inc

SEGRO PLC

Shaftesbury PLC

SL Green Realty Corp.

Sovran Self Storage, Inc.

Stag Industrial, Inc.

Steadfast Income REIT

Summit Hotel Properties Inc.

Sunstone Hotel Investors, Inc.

Saul Centers, Inc.

Senior Housing Properties Trust

Simon Property Group, Inc.

Societe De La Tour Eiffel

Spirit Finance Corporation

Starwood Property Trust, Inc.

Strategic Hotels & Resorts, Inc.

Sun Communities, Inc.

Supertel Hospitality, Inc.

T

Tanger Factory Outlet Centers, Inc.

The Community Development Trust

Two Harbors Investment Corp.

Taubman Centers, Inc.

Thomas Properties Group Inc.

U UDR, Inc.

Urstadt Biddle Properties, Inc.

UMH Properties, Inc.

V Ventas, Inc.

Vornado Realty Trust

Verde Realty

W W. P. Carey & Co. LLC

Watson Land Company

Wells Real Estate Investment Trust II, Inc.

Wereldhave USA, Inc.

Weyerhaeuser

Washington Real Estate Investment Trust

Weingarten Realty Investors

Wells Timberland REIT, Inc.

Westfield, LLC

Winthrop Realty Trust

Literature Review – TABLE 2

Title | Author | Year
Forecasting Stock Market Volatility and the Application of Volatility Trading Models | Jason Laws and Andrew Gidman | 2000
A Measure of Fundamental Volatility in the Commercial Property Market | Shaun Bond and Soosung Hwang | 2001
Forecasting Volatility in Financial Markets: A Review | Ser-Huang Poon and Clive W. J. Granger | 2003
Forecasting Volatility | Louis H. Ederington and Wei Guan | 2004
Forecasting Volatility in the Financial Markets | John Knight and Stephen Satchell | 2007
Forecasting World Stock Markets Volatility | Abdullah Yalama and Guven Sevil | 2008
Volatility forecast comparison using imperfect volatility proxies | Andrew Patton | 2010

[Decision-rule diagram: regions where H0 is not rejected / rejected relative to the critical values]

Figure I – Natural Logarithm of US FTSE Prices

(Level Dependent Variable)

Figure II. – Stationarity in Level

a) Augmented Dickey-Fuller Unit Root Test (ADF)

H0: Unit Root / Non-Stationarity

H1: ~ I(0) / Stationarity

b) Kwiatkowski-Phillips-Schmidt-Shin Unit Root Test (KPSS)

H0: Stationarity

H1: Non-Stationarity

Both tests include a trend and an intercept, since the graph of the series shows a clear trend pattern.
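The unit-root analysis in this report was carried out in EViews. As a minimal sketch of the same pair of tests in Python (assuming `ln_ftse` is a pandas Series holding the log price index; the name is ours, for illustration only):

```python
# Minimal sketch (not the authors' EViews workflow): ADF and KPSS tests,
# both with constant and linear trend, on an assumed pandas Series `ln_ftse`.
from statsmodels.tsa.stattools import adfuller, kpss

def stationarity_tests(series):
    series = series.dropna()
    # ADF: H0 = unit root; regression="ct" includes a constant and a trend
    adf_stat, adf_pvalue, *_ = adfuller(series, regression="ct", autolag="AIC")
    # KPSS: H0 = (trend-)stationarity; regression="ct" matches the same choice
    kpss_stat, kpss_pvalue, *_ = kpss(series, regression="ct", nlags="auto")
    print(f"ADF  statistic {adf_stat:8.4f}   p-value {adf_pvalue:.4f}")
    print(f"KPSS statistic {kpss_stat:8.4f}   p-value {kpss_pvalue:.4f}")

# stationarity_tests(ln_ftse)           # level of the log index
# stationarity_tests(ln_ftse.diff())    # first differences (log returns)
```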

Figure III. – Serial Correlation

Figure IV. – Natural Logarithm of US FTSE Returns

(First Differences Dependent Variable)

Figure V. – Stationarity in First Differences

a) ADF

b) KPSS

Figure VI. – Serial Autocorrelation

[Time-series plot of annual_Yields_10yearUSGovernmentBonds, 1972M01–2008M12, and histogram of the series]

Descriptive statistics – Series: I10, Sample 1972M01 2008M12, Observations 444:
Mean 7.428336; Median 7.175500; Maximum 15.84200; Minimum 2.212300; Std. Dev. 2.631976; Skewness 0.811579; Kurtosis 3.295220; Jarque-Bera 50.35327 (Probability 0.000000)

TABLE 3 – Test-Statistics for Stationarity Tests

Variable (test specification) | ADF test statistic | KPSS test statistic | Conclusion
i_10 (Constant & Trend) | -2.628751 | 0.341894 | I(1) (Robust)
d_i10 (Constant) | -18.16305 | 0.251500 | I(0) (Robust)
ln_indprod (Constant & Trend) | -2.289439 | 0.226043 | I(1) (Robust)
d_ln_indprod (Constant) | -7.092998 | 0.121693 | I(0) (Robust)
Unem Growth (Constant) | -2.704203* | 0.998462 | I(0) (Robust)

* Test valid only at the 10% significance level. ** The remaining tests are valid at the 10%, 5% and 1% significance levels.

*** "Robust" means that the KPSS and ADF tests agree on the stationarity conclusion.

ANALYSIS OF THE SERIES YIELDS ON US GOVERNMENT BONDS

FIGURE VII - Graph and Descriptive Statistics of Yields on 10-Year US Government Bonds

FIGURE VIII – Correlogram: Yields on 10-Year US Government Bonds

Null Hypothesis: I10 has a unit root

Exogenous: Constant, Linear Trend

Lag Length: 1 (Automatic based on SIC, MAXLAG=17) t-Statistic Prob.*

Augmented Dickey-Fuller test statistic -2.628751 0.2677

Test critical values: 1% level -3.978956

5% level -3.420022

10% level -3.132657

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation

Dependent Variable: D(I10)

Method: Least Squares

Date: 12/04/11 Time: 22:56

Sample (adjusted): 1972M03 2008M12

Included observations: 442 after adjustments Coefficient Std. Error t-Statistic Prob.

I10(-1) -0.022864 0.008698 -2.628751 0.0089

D(I10(-1)) 0.143401 0.047254 3.034718 0.0026

C 0.270099 0.096968 2.785460 0.0056

@TREND(1972M01) -0.000483 0.000178 -2.710343 0.0070

R-squared 0.038146 Mean dependent var -0.008664

Adjusted R-squared 0.031558 S.D. dependent var 0.369062

S.E. of regression 0.363192 Akaike info criterion 0.821239

Sum squared resid 57.77596 Schwarz criterion 0.858265

Log likelihood -177.4939 Hannan-Quinn criter. 0.835843

F-statistic 5.790183 Durbin-Watson stat 1.969930

Prob(F-statistic) 0.000689
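For reference, the ADF test equations reported in this appendix follow the standard form (written here for the constant-and-trend case; the reported t-statistic is the one on the lagged level):

$$
\Delta y_t \;=\; \alpha + \beta t + \gamma\, y_{t-1} + \sum_{i=1}^{p} \delta_i\, \Delta y_{t-i} + \varepsilon_t,
\qquad H_0:\ \gamma = 0 \ \text{(unit root)},\quad H_1:\ \gamma < 0 .
$$

In the output above, the estimate of the coefficient on I10(-1) plays the role of gamma: -0.022864, with t-statistic -2.628751.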

Null Hypothesis: I10 is stationary

Exogenous: Constant, Linear Trend

Bandwidth: 16 (Newey-West using Bartlett kernel) LM-Stat.

Kwiatkowski-Phillips-Schmidt-Shin test statistic 0.341894

Asymptotic critical values*: 1% level 0.216000

5% level 0.146000

10% level 0.119000

*Kwiatkowski-Phillips-Schmidt-Shin (1992, Table 1)

Residual variance (no correction) 3.998481

HAC corrected variance (Bartlett kernel) 58.95914

KPSS Test Equation

Dependent Variable: I10

Method: Least Squares

Date: 12/04/11 Time: 22:57

Sample: 1972M01 2008M12

Included observations: 444 Coefficient Std. Error t-Statistic Prob.

C 10.37798 0.189904 54.64865 0.0000

@TREND(1972M01) -0.013317 0.000742 -17.94527 0.0000

R-squared 0.421491 Mean dependent var 7.428336

Adjusted R-squared 0.420182 S.D. dependent var 2.631976

S.E. of regression 2.004139 Akaike info criterion 4.232801

Sum squared resid 1775.326 Schwarz criterion 4.251250

Log likelihood -937.6817 Hannan-Quinn criter. 4.240076

F-statistic 322.0326 Durbin-Watson stat 0.033841

Prob(F-statistic) 0.000000

[Time-series plot of D_I10 (first difference of the 10-year yield), 1972M01–2008M12, and histogram of the series]

Descriptive statistics – Series: D_I10, Sample 1972M01 2008M12, Observations 443:
Mean -0.008758; Median -0.015000; Maximum 1.590000; Minimum -1.880000; Std. Dev. 0.368650; Skewness -0.266906; Kurtosis 6.103445; Jarque-Bera 183.0388 (Probability 0.000000)

FIGURE IX - 10-Year US Government Bonds – ADF Test and KPSS test with Constant and Trend

FIGURE X - Graph and Descriptive Statistics of Yields on 10-Year US Government Bonds’ Differential

FIGURE XI - Correlogram – on Yields on 10-Year US Government Bonds’ Differential

FIGURE XII – Yields on 10-Year US Government Bonds’ Differential - ADF Test with Constant

Null Hypothesis: D_I10 has a unit root

Exogenous: Constant

Lag Length: 0 (Automatic based on SIC, MAXLAG=17) t-Statistic Prob.*

Augmented Dickey-Fuller test statistic -18.16305 0.0000

Test critical values: 1% level -3.444923

5% level -2.867859

10% level -2.570200

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation

Dependent Variable: D(D_I10)

Method: Least Squares

Date: 12/04/11 Time: 22:59

Sample (adjusted): 1972M03 2008M12

Included observations: 442 after adjustments Coefficient Std. Error t-Statistic Prob.

D_I10(-1) -0.861064 0.047407 -18.16305 0.0000

C -0.007667 0.017409 -0.440435 0.6598

R-squared 0.428494 Mean dependent var -0.001488

Adjusted R-squared 0.427196 S.D. dependent var 0.483495

S.E. of regression 0.365927 Akaike info criterion 0.831750

Sum squared resid 58.91721 Schwarz criterion 0.850263

Log likelihood -181.8168 Hannan-Quinn criter. 0.839052

F-statistic 329.8962 Durbin-Watson stat 1.968687

Prob(F-statistic) 0.000000

[Time-series plot of LN_INDPROD (log of the Industrial Production Index), 1972M01–2008M12, and histogram of the series]

Descriptive statistics – Series: LN_INDPROD, Sample 1972M01 2008M12, Observations 444:
Mean 4.161030; Median 4.120093; Maximum 4.612385; Minimum 3.685012; Std. Dev. 0.281439; Skewness 0.132013; Kurtosis 1.658962; Jarque-Bera 34.55972 (Probability 0.000000)

FIGURE XIII – Yields on 10-Year US Government Bonds’ Differential – KPSS Test with Constant

ANALYSIS OF THE SERIES INDUSTRIAL PRODUCTION (PROXY FOR GDP)

FIGURE XIV – Graph and Descriptive Statistics of the Industrial Production Index

Null Hypothesis: D_I10 is stationary

Exogenous: Constant

Bandwidth: 0 (Newey-West using Bartlett kernel) LM-Stat.

Kwiatkowski-Phillips-Schmidt-Shin test statistic 0.251500

Asymptotic critical values*: 1% level 0.739000

5% level 0.463000

10% level 0.347000

*Kwiatkowski-Phillips-Schmidt-Shin (1992, Table 1)

Residual variance (no correction) 0.135596

HAC corrected variance (Bartlett kernel) 0.135596

KPSS Test Equation

Dependent Variable: D_I10

Method: Least Squares

Date: 12/04/11 Time: 23:00

Sample (adjusted): 1972M02 2008M12

Included observations: 443 after adjustments Coefficient Std. Error t-Statistic Prob.

C -0.008758 0.017515 -0.500014 0.6173

R-squared 0.000000 Mean dependent var -0.008758

Adjusted R-squared 0.000000 S.D. dependent var 0.368650

S.E. of regression 0.368650 Akaike info criterion 0.844316

Sum squared resid 60.06899 Schwarz criterion 0.853556

Log likelihood -186.0159 Hannan-Quinn criter. 0.847960

Durbin-Watson stat 1.716230

Null Hypothesis: LN_INDPROD has a unit root

Exogenous: Constant, Linear Trend

Lag Length: 3 (Automatic based on SIC, MAXLAG=12) t-Statistic Prob.*

Augmented Dickey-Fuller test statistic -2.289439 0.4383

Test critical values: 1% level -3.979052

5% level -3.420068

10% level -3.132684

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation

Dependent Variable: D(LN_INDPROD)

Method: Least Squares

Date: 12/04/11 Time: 22:21

Sample (adjusted): 1972M05 2008M12

Included observations: 440 after adjustments Coefficient Std. Error t-Statistic Prob.

LN_INDPROD(-1) -0.014756 0.006445 -2.289439 0.0225

D(LN_INDPROD(-1)) 0.237454 0.047774 4.970377 0.0000

D(LN_INDPROD(-2)) 0.181603 0.048457 3.747748 0.0002

D(LN_INDPROD(-3)) 0.186976 0.048650 3.843322 0.0001

C 0.055354 0.023697 2.335970 0.0199

@TREND(1972M01) 3.00E-05 1.42E-05 2.113942 0.0351

R-squared 0.192672 Mean dependent var 0.001768

Adjusted R-squared 0.183371 S.D. dependent var 0.007473

S.E. of regression 0.006753 Akaike info criterion -7.144160

Sum squared resid 0.019791 Schwarz criterion -7.088431

Log likelihood 1577.715 Hannan-Quinn criter. -7.122175

F-statistic 20.71512 Durbin-Watson stat 1.984700

Prob(F-statistic) 0.000000

Null Hypothesis: LN_INDPROD is stationary

Exogenous: Constant, Linear Trend

Bandwidth: 16 (Newey-West using Bartlett kernel) LM-Stat.

Kwiatkowski-Phillips-Schmidt-Shin test statistic 0.226043

Asymptotic critical values*: 1% level 0.216000

5% level 0.146000

10% level 0.119000

*Kwiatkowski-Phillips-Schmidt-Shin (1992, Table 1)

Residual variance (no correction) 0.002653

HAC corrected variance (Bartlett kernel) 0.037709

KPSS Test Equation

Dependent Variable: LN_INDPROD

Method: Least Squares

Date: 12/04/11 Time: 22:22

Sample: 1972M01 2008M12

Included observations: 444 Coefficient Std. Error t-Statistic Prob.

C 3.683433 0.004892 752.9661 0.0000

@TREND(1972M01) 0.002156 1.91E-05 112.7973 0.0000

R-squared 0.966427 Mean dependent var 4.161030

Adjusted R-squared 0.966351 S.D. dependent var 0.281439

S.E. of regression 0.051626 Akaike info criterion -3.085072

Sum squared resid 1.178057 Schwarz criterion -3.066622

Log likelihood 686.8860 Hannan-Quinn criter. -3.077796

F-statistic 12723.23 Durbin-Watson stat 0.020988

Prob(F-statistic) 0.000000

FIGURE XV – Correlogram: Industrial Production Index

FIGURE XVI – Industrial Production – ADF Test and KPSS Test with Constant and Trend

[Time-series plot of D_LN_INDPROD (first difference of log industrial production), 1972M01–2008M12, and histogram of the series]

Descriptive statistics – Series: D_LN_INDPROD, Sample 1972M01 2008M12, Observations 443:
Mean 0.001817; Median 0.002378; Maximum 0.021455; Minimum -0.042261; Std. Dev. 0.007471; Skewness -1.286933; Kurtosis 8.517067; Jarque-Bera 684.1177 (Probability 0.000000)

FIGURE XVII – Graph and Descriptive Statistics of the Industrial Production Differential

FIGURE XVIII – Correlogram: Industrial Production Differential

Null Hypothesis: D_LN_INDPROD has a unit root

Exogenous: Constant

Lag Length: 2 (Automatic based on SIC, MAXLAG=12) t-Statistic Prob.*

Augmented Dickey-Fuller test statistic -7.092998 0.0000

Test critical values: 1% level -3.444991

5% level -2.867889

10% level -2.570216

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation

Dependent Variable: D(D_LN_INDPROD)

Method: Least Squares

Date: 12/04/11 Time: 22:25

Sample (adjusted): 1972M05 2008M12

Included observations: 440 after adjustments Coefficient Std. Error t-Statistic Prob.

D_LN_INDPROD(-1) -0.419935 0.059204 -7.092998 0.0000

D(D_LN_INDPROD(-1)) -0.344922 0.058782 -5.867807 0.0000

D(D_LN_INDPROD(-2)) -0.171270 0.048262 -3.548775 0.0004

C 0.000678 0.000342 1.983102 0.0480

R-squared 0.368945 Mean dependent var -8.21E-05

Adjusted R-squared 0.364603 S.D. dependent var 0.008509

S.E. of regression 0.006783 Akaike info criterion -7.139882

Sum squared resid 0.020057 Schwarz criterion -7.102729

Log likelihood 1574.774 Hannan-Quinn criter. -7.125225

F-statistic 84.96888 Durbin-Watson stat 1.980792

Prob(F-statistic) 0.000000

Null Hypothesis: D_LN_INDPROD is stationary

Exogenous: Constant

Bandwidth: 12 (Newey-West using Bartlett kernel) LM-Stat.

Kwiatkowski-Phillips-Schmidt-Shin test statistic 0.121693

Asymptotic critical values*: 1% level 0.739000

5% level 0.463000

10% level 0.347000

*Kwiatkowski-Phillips-Schmidt-Shin (1992, Table 1)

Residual variance (no correction) 5.57E-05

HAC corrected variance (Bartlett kernel) 0.000178

KPSS Test Equation

Dependent Variable: D_LN_INDPROD

Method: Least Squares

Date: 12/04/11 Time: 22:25

Sample (adjusted): 1972M02 2008M12

Included observations: 443 after adjustments Coefficient Std. Error t-Statistic Prob.

C 0.001817 0.000355 5.117352 0.0000

R-squared 0.000000 Mean dependent var 0.001817

Adjusted R-squared 0.000000 S.D. dependent var 0.007471

S.E. of regression 0.007471 Akaike info criterion -6.953199

Sum squared resid 0.024674 Schwarz criterion -6.943958

Log likelihood 1541.133 Hannan-Quinn criter. -6.949554

Durbin-Watson stat 1.288710

[Time-series plot of the monthly US unemployment rate, 1972M01–2008M12, and histogram of the series]

Descriptive statistics – Series: UNEM, Sample 1972M01 2008M12, Observations 444:
Mean 0.061547; Median 0.058000; Maximum 0.108000; Minimum 0.038000; Std. Dev. 0.014032; Skewness 0.832793; Kurtosis 3.547594; Jarque-Bera 56.86972 (Probability 0.000000)

FIGURE XIX – Industrial Production Differential – ADF Test and KPSS Test with Constant

ANALYSIS OF THE SERIES US UNEMPLOYMENT

FIGURE XX – Graph and Descriptive Statistics – US Unemployment Rate

Null Hypothesis: UNEM has a unit root

Exogenous: Constant

Lag Length: 4 (Automatic based on SIC, MAXLAG=12) t-Statistic Prob.*

Augmented Dickey-Fuller test statistic -2.704203 0.0741

Test critical values: 1% level -3.445025

5% level -2.867904

10% level -2.570224

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation

Dependent Variable: D(UNEM)

Method: Least Squares

Date: 12/04/11 Time: 22:28

Sample (adjusted): 1972M06 2008M12

Included observations: 439 after adjustments Coefficient Std. Error t-Statistic Prob.

UNEM(-1) -0.015537 0.005746 -2.704203 0.0071

D(UNEM(-1)) 0.017935 0.047516 0.377457 0.7060

D(UNEM(-2)) 0.229892 0.046646 4.928450 0.0000

D(UNEM(-3)) 0.208211 0.047038 4.426463 0.0000

D(UNEM(-4)) 0.145538 0.048087 3.026576 0.0026

C 0.000985 0.000362 2.717447 0.0068

R-squared 0.153888 Mean dependent var 3.64E-05

Adjusted R-squared 0.144117 S.D. dependent var 0.001804

S.E. of regression 0.001669 Akaike info criterion -9.939682

Sum squared resid 0.001206 Schwarz criterion -9.883857

Log likelihood 2187.760 Hannan-Quinn criter. -9.917657

F-statistic 15.75047 Durbin-Watson stat 2.009683

Prob(F-statistic) 0.000000

Null Hypothesis: UNEM is stationary

Exogenous: Constant

Bandwidth: 16 (Newey-West using Bartlett kernel) LM-Stat.

Kwiatkowski-Phillips-Schmidt-Shin test statistic 0.998462

Asymptotic critical values*: 1% level 0.739000

5% level 0.463000

10% level 0.347000

*Kwiatkowski-Phillips-Schmidt-Shin (1992, Table 1)

Residual variance (no correction) 0.000196

HAC corrected variance (Bartlett kernel) 0.002992

KPSS Test Equation

Dependent Variable: UNEM

Method: Least Squares

Date: 12/04/11 Time: 22:29

Sample: 1972M01 2008M12

Included observations: 444 Coefficient Std. Error t-Statistic Prob.

C 0.061547 0.000666 92.42497 0.0000

R-squared 0.000000 Mean dependent var 0.061547

Adjusted R-squared 0.000000 S.D. dependent var 0.014032

S.E. of regression 0.014032 Akaike info criterion -5.692742

Sum squared resid 0.087222 Schwarz criterion -5.683517

Log likelihood 1264.789 Hannan-Quinn criter. -5.689104

Durbin-Watson stat 0.016383

FIGURE XXI – Correlogram: US Unemployment Rate

FIGURE XXII – US Unemployment Rate – ADF Test and KPSS Test with Constant

FIGURE XXIII – Correlogram of the Regression of US FTSE Returns on the 10-Year Yields’ Differential

TABLE 4 – Regression of US FTSE Returns on the Explanatory Variables

Dependent Variable: LN_FTSE_R

Method: Least Squares

Date: 12/03/11 Time: 10:19

Sample (adjusted): 1972M02 2008M12

Included observations: 443 after adjustments

White Heteroskedasticity-Consistent Standard Errors & Covariance Coefficient Std. Error t-Statistic Prob.

C 0.004857 0.003430 1.416232 0.1574

D_I10 -0.032386 0.006813 -4.753683 0.0000

D_LN_INDPROD 0.715542 0.929921 0.769465 0.4420

UNEM_GR -0.487517 2.065967 -0.235975 0.8136

R-squared 0.058292 Mean dependent var 0.006424

Adjusted R-squared 0.051856 S.D. dependent var 0.050559

S.E. of regression 0.049231 Akaike info criterion -3.175614

Sum squared resid 1.063983 Schwarz criterion -3.138652

Log likelihood 707.3986 Hannan-Quinn criter. -3.161037

F-statistic 9.058045 Durbin-Watson stat 1.853949

Prob(F-statistic) 0.000008

TABLE 5 – ARMA Model Order Selection

*All regressions have been estimated using White Heteroskedasticity-Consistent Standard Errors & Covariance. ** The significance of the terms was tested at the 10% significance level.

TABLE 6 – Base-Model Regression

Model Orders Adjusted R-Squared AIC SIC Insignificant Variables
ARMA(1;1) 0.043428 -3.164493 -3.127467 AR(1); AR(2)
ARMA(2;2) 0.066977 -3.182659 -3.127025 C; AR(1); MA(1)
ARMA(3;3) 0.084728 -3.195504 -3.121199 C
ARMA(4;4) 0.114840 -3.201451 -3.108410 C; AR(1); AR(2); AR(3); MA(2); MA(3)
ARMA(1;2) 0.041247 -3.159976 -3.113694 AR(1); MA(1); MA(2)
ARMA(2;1) 0.054006 -3.171091 -3.124730 AR(2)
ARMA(3;2) 0.059831 -3.170899 -3.105882 C; AR(1); AR(2); AR(3); MA(1); MA(2)
ARMA(2;3) 0.059142 -3.172062 -3.107157 C; AR(1); AR(2); MA(1); MA(2); MA(3)
ARMA(4;3) 0.084020 -3.190214 -3.106477 C; AR(4)
ARMA(3;4) 0.071418 -3.178839 -3.095246 C; MA(4)

Dependent Variable: LN_FTSE_R

Method: Least Squares

Date: 12/03/11 Time: 12:15

Sample (adjusted): 1972M05 2008M12

Included observations: 440 after adjustments

Convergence achieved after 51 iterations

White Heteroskedasticity-Consistent Standard Errors & Covariance

MA Backcast: 1972M02 1972M04 Coefficient Std. Error t-Statistic Prob.

C 0.004780 0.006646 0.719184 0.4724

D_I10 -0.030036 0.005976 -5.026283 0.0000

AR(1) 0.819383 0.077238 10.60856 0.0000

AR(2) -0.859308 0.033825 -25.40427 0.0000

AR(3) 0.919593 0.074527 12.33906 0.0000

MA(1) -0.752423 0.101634 -7.403264 0.0000

MA(2) 0.831333 0.062602 13.27965 0.0000

MA(3) -0.816908 0.086958 -9.394252 0.0000

R-squared 0.099322 Mean dependent var 0.006491

Adjusted R-squared 0.084728 S.D. dependent var 0.050720

S.E. of regression 0.048524 Akaike info criterion -3.195504

Sum squared resid 1.017176 Schwarz criterion -3.121199

Log likelihood 711.0109 Hannan-Quinn criter. -3.166191

F-statistic 6.805529 Durbin-Watson stat 1.999248

Prob(F-statistic) 0.000000

Inverted AR Roots .94 -.06+.99i -.06-.99i

Inverted MA Roots .87 -.06+.97i -.06-.97i
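Read as a regression with ARMA(3,3) errors (the way EViews treats AR and MA terms added to a least-squares equation), the base model above can be written compactly as:

$$
r_t = c + \beta\,\Delta i10_t + u_t,
\qquad
u_t = \sum_{i=1}^{3}\phi_i\, u_{t-i} + \varepsilon_t + \sum_{j=1}^{3}\theta_j\, \varepsilon_{t-j},
$$

with the reported point estimates (rounded) of beta about -0.030, phi about (0.819, -0.859, 0.920) and theta about (-0.752, 0.831, -0.817).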

Ramsey RESET Test:

F-statistic 5.490878 Prob. F(3,429) 0.0010

Log likelihood ratio 16.57872 Prob. Chi-Square(3) 0.0009

Test Equation:

Dependent Variable: LN_FTSE_R

Method: Least Squares

Date: 12/03/11 Time: 12:37

Sample: 1972M05 2008M12

Included observations: 440

Convergence achieved after 36 iterations

White Heteroskedasticity-Consistent Standard Errors & Covariance

MA Backcast: 1972M02 1972M04 Coefficient Std. Error t-Statistic Prob.

C 0.010110 0.005309 1.904307 0.0575

D_I10 -0.024872 0.009314 -2.670377 0.0079

FITTED^2 -27.45377 13.42400 -2.045125 0.0415

FITTED^3 650.9897 394.0615 1.652000 0.0993

FITTED^4 60.43975 8399.448 0.007196 0.9943

AR(1) -0.018031 0.089570 -0.201311 0.8406

AR(2) -0.049965 0.086833 -0.575417 0.5653

AR(3) 0.904243 0.085577 10.56639 0.0000

MA(1) 0.055564 0.114006 0.487382 0.6262

MA(2) 0.126752 0.109234 1.160375 0.2465

MA(3) -0.882567 0.114596 -7.701520 0.0000

R-squared 0.132627 Mean dependent var 0.006491

Adjusted R-squared 0.112409 S.D. dependent var 0.050720

S.E. of regression 0.047785 Akaike info criterion -3.219547

Sum squared resid 0.979563 Schwarz criterion -3.117377

Log likelihood 719.3002 Hannan-Quinn criter. -3.179241

F-statistic 6.559703 Durbin-Watson stat 1.982911

Prob(F-statistic) 0.000000

Inverted AR Roots .94 -.48-.85i -.48+.85i

Inverted MA Roots .90 -.48-.87i -.48+.87i

TABLE 7 – Breusch-Godfrey Serial Correlation Test on the Base Model

TABLE 8 – Ramsey RESET Test for Non-Linearity

Breusch-Godfrey Serial Correlation LM Test:

F-statistic 0.267999 Prob. F(3,429) 0.8485

Obs*R-squared 0.822584 Prob. Chi-Square(3) 0.8441

Test Equation:

Dependent Variable: RESID

Method: Least Squares

Date: 12/03/11 Time: 12:30

Sample: 1972M05 2008M12

Included observations: 440

Presample missing value lagged residuals set to zero. Coefficient Std. Error t-Statistic Prob.

C 6.82E-05 0.005373 0.012697 0.9899

D_I10 -0.000501 0.006397 -0.078243 0.9377

AR(1) -0.014282 0.070016 -0.203983 0.8385

AR(2) -0.002178 0.031989 -0.068079 0.9458

AR(3) -0.009724 0.067217 -0.144665 0.8850

MA(1) 0.040730 0.115341 0.353127 0.7242

MA(2) 0.000334 0.053924 0.006194 0.9951

MA(3) 0.016791 0.105979 0.158440 0.8742

RESID(-1) -0.041834 0.078712 -0.531481 0.5954

RESID(-2) -0.031036 0.074404 -0.417124 0.6768

RESID(-3) 0.030455 0.069235 0.439880 0.6602

R-squared 0.001870 Mean dependent var -5.05E-05

Adjusted R-squared -0.021397 S.D. dependent var 0.048136

S.E. of regression 0.048648 Akaike info criterion -3.183740

Sum squared resid 1.015273 Schwarz criterion -3.081571

Log likelihood 711.4228 Hannan-Quinn criter. -3.143434

F-statistic 0.080352 Durbin-Watson stat 1.968838

Prob(F-statistic) 0.999935

[Residual, Actual and Fitted plot of the base model, 1975–2005]

FIGURE XXIV – Histogram of the Residuals of the Base Model

FIGURE XXV – Graph of the Base Model

FIGURE XXVI – Conditional Variance

TABLE 9 – ARCH Models

Model’s Orders Log Likelihood AIC Insignificant Variables

ARCH (1) 730.2019 -3.2736 AR(1); AR(2); MA(1); MA(2); MA(3)

ARCH (2) 743.9987 -3.3318 NONE

ARCH(3) 757.0368 -3.3865 AR(1); AR(3); MA(1); MA(3)

ARCH (4) 759.4547 -3.3930 AR(1); AR(2); MA(1); MA(2); MA(3)

Likelihood ratio test [relative to the simplest model, ARCH(1)]: H0: the added variables are not significant / the models lead to the same conclusions

H1: the added variables are significant / the difference between the two models is significant

Likelihood Ratio (2) (3) (4)

Lr 730.2019 730.2019 730.2019

Lu 743.9987 757.0368 759.4547

LR 27.5936 53.6698 58.5056

m 1 2 3

Chi-sq (1%) 6.64 9.21 11.35

Chi-sq (5%) 3.84 5.99 7.82

Chi-sq (10%) 2.71 4.61 6.25
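The likelihood-ratio tests in this and the following tables use the standard statistic, with Lu and Lr the unrestricted and restricted log-likelihoods and m the number of added parameters:

$$
LR \;=\; 2\,\bigl(L_U - L_R\bigr) \;\sim\; \chi^2_{m} \ \text{under } H_0 .
$$

For example, ARCH(2) against ARCH(1): LR = 2(743.9987 - 730.2019) = 27.5936, which exceeds the 1% critical value of 6.64, so the added parameter is significant.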

TABLE 10 – GARCH Models

Model’s Order LR Statistic AIC SIC Insignificant Variables

GARCH (1,2) 21.869 -3.451024 -3.339567 ALL (except d_i10)

GARCH (2,1) 13.0614 -3.431007 -3.319549 AR(1); AR(3); MA(1); MA(3)

GARCH (3,3) 30.6978 -3.457453 -3.318131 ALL (except d_i10)

GARCH (4,3) 35.0162 -3.462722 -3.314112 AR(1); AR(2); MA(1); MA(2); MA(3)

GARCH (5,4) 34.1554 -3.451675 -3.284489 AR(1); AR(2); MA(1); MA(2); MA(3)

Likelihood ratio test [relative to the simplest model, GARCH(1,1)]: H0: the added variables are not significant / the models lead to the same conclusions

H1: the added variables are significant / the difference between the two models is significant

Likelihood Ratio (1,2) (2,1) (3,3) (4,3) (5,4)

Lr 760.2908 760.2908 760.2908 760.2908 760.2908

Lu 771.2253 766.8215 775.6397 777.7989 777.3685

LR 21.869 13.0614 30.6978 35.0162 34.1554

m 1 1 4 5 7

Chi-sq (1%) 6.64 6.64 13.28 15.09 18.48

Chi-sq (5%) 3.84 3.84 9.49 11.07 14.07

Chi-sq (10%) 2.71 2.71 7.78 9.24 12.02

FIGURE XXVII – GARCH Best Model Output

TABLE 11 – EGARCH Models

Model’s Orders LR Statistic AIC SIC Insignificant Variables

EGARCH (1,1) 0 -3.423291 -3.311833 NONE (except C)

EGARCH (3,3) 31.482 -3.476659 -3.328049 NONE

EGARCH (4,3) 73.6716 -3.567999 -3.410101 NONE

EGARCH (3,4) 41.209 -3.494220 -3.336322 NONE

EGARCH (5,5) 98.7922 -3.611455 -3.425692 NONE

EGARCH (1,1) (3,3) (4,3) (3,4) (5,5)

Lr 765.124 765.124 765.124 765.124 765.124

Lu 765.124 780.865 801.9598 785.7285 814.5201

LR 0 31.482 73.6716 41.209 98.7922

m 0 4 5 5 8

Chi-sq (1%) - 13.28 15.09 15.09 20.09

Chi-sq (5%) - 9.49 11.07 11.07 15.51

Chi-sq (10%) - 7.78 9.24 9.24 13.36

FIGURE XXVIII – EGARCH Best Model Output

TABLE 12 – TGARCH Models

Model’s Orders LR Statistic AIC Insignificant Variables

TGARCH (1,1) 0 -3.457813 AR(1); AR(2); MA(1); MA(2)

TGARCH (2,2) 2.7508 -3.454974 NONE

TGARCH (3,2) 3.2274 -3.451511 AR(2); MA(2)

TGARCH (4,3) 23.4628 -3.488410 AR(1); AR(2); MA(2); MA(3)

TGARCH (4,4) 17.2008 -3.469633 NONE

TGARCH (1,1) (2,2) (3,2) (4,3) (4,4)

Lr 772.7188 772.7188 772.7188 772.7188 772.7188

Lu 772.7188 774.0942 774.3325 784.4502 781.3192

LR 0 2.7508 3.2274 23.4628 17.2008

m 0 2 3 5 6

Chi-sq (1%) - 9.21 11.34 15.09 16.81

Chi-sq (5%) - 5.99 7.82 11.07 12.59

Chi-sq (10%) - 4.6 6.25 9.24 10.64

FIGURE XXIX – Best TGARCH Model Output

TABLE 13 – EGARCH (p, q)-M with Conditional Variance

Log-Likelihood AIC Insignificant Parameters
EGARCH-M(1,1) 765.992 -3.422691 VAR; C; AR(2); MA(2)
EGARCH-M(2,2) 764.8471 -3.408396 VAR; C; AR(1); AR(2); MA(1); MA(2)
EGARCH-M(3,3) 813.7686 -3.621675 None
EGARCH-M(4,3) 816.2783 -3.628538 None
EGARCH-M(4,4) 809.5717 -3.593508 VAR; AR(2); MA(2)

TABLE 14 - EGARCH (p, q)-M with Logarithm of the Conditional Variance

Log-Likelihood AIC Insignificant Parameters
EGARCH-M(2,2) 763.7074 -3.403215 Log-VAR; C; AR(1); AR(3); MA(1); MA(3)
EGARCH-M(3,3) 783.7448 -3.485204 None
EGARCH-M(4,3) 782.5716 -3.475326 Log-VAR; C; AR(2); AR(3); MA(2); MA(3)
EGARCH-M(4,4) 803.3738 -3.565335 C; AR(2); AR(3); MA(2); MA(3)
EGARCH-M(3,4) 796.0647 -3.536658 AR(2); MA(2)

TABLE 15 – EGARCH (p, q)-M with Conditional Standard Deviation

Log-Likelihood AIC Insignificant Parameters
EGARCH-M(2,3) 806.1241 -3.591473 AR(1); AR(2); MA(1); MA(2)
EGARCH-M(3,3) 811.8345 -3.612884 None
EGARCH-M(4,3) 798.7763 -3.548983 AR(1); AR(3); MA(1); MA(3)
EGARCH-M(4,4) 814.562 -3.616191 AR(2)
EGARCH-M(5,5) 813.6175 -3.602807 None

TABLE 16 – EGARCH (p, q)-M

Log-Likelihood AIC Insignificant Parameters
EGARCH-M-VAR (4,3) 816.2783 -3.628538 None
EGARCH-M-Log-VAR (3,4) 796.0647 -3.536658 AR(2); MA(2)
EGARCH-M-STD (3,3) 811.8345 -3.612884 None
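Tables 13 to 15 differ only in how the risk term enters the mean equation of the EGARCH-M specification. Writing the regressors (constant and yield differential) as x'beta, the EGARCH conditional variance as sigma squared and the in-mean coefficient as lambda, the three variants compared are:

$$
r_t = x_t'\beta + \lambda\,\sigma_t^{2} + \varepsilon_t,
\qquad
r_t = x_t'\beta + \lambda\,\ln\sigma_t^{2} + \varepsilon_t,
\qquad
r_t = x_t'\beta + \lambda\,\sigma_t + \varepsilon_t .
$$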

TABLE 17 – Forecasting Criteria for the Volatility Models

Criteria ARCH(2) GARCH(4;3) EGARCH(5;5) TGARCH(4;3) EGARCH-M(4;3)

MSE 0.0189 0.0183 0.0148 0.0185 0.0184

MAE 0.0834 0.0821 0.0773 0.0825 0.0825

MAPE 139.5539 106.7275 283.1547 92.8891 105.4968
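As a minimal sketch of how these criteria can be computed (the arrays `realised` and `forecast` are placeholders for the realised volatility proxy and a model's out-of-sample variance forecast; the names are ours):

```python
import numpy as np

def forecast_criteria(realised: np.ndarray, forecast: np.ndarray) -> dict:
    """Mean squared error, mean absolute error and mean absolute percentage error."""
    err = realised - forecast
    return {
        "MSE": float(np.mean(err ** 2)),
        "MAE": float(np.mean(np.abs(err))),
        "MAPE": float(np.mean(np.abs(err / realised)) * 100.0),  # in percent
    }
```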