University of New South Wales

Bayesian Filtering on Realised, Bipower and Option Implied Volatility

Honours Student: Nelson Qu

Supervisors: Dr Chris Carter, Dr Valentyn Panchenko


1 Declaration

I hereby declare that this submission is my own work and to the best of my knowledge it contains no material previously written by another person, or material which to a substantive extent has been accepted for the award of any other degree or diploma of a university or other institute of higher learning, except where referenced in the text.

I also declare that the intellectual content of this thesis is the product of my own work, and any assistance that I have received in preparing the project, writing the program as well as presenting the thesis, has been duly acknowledged.

_______________________
Nelson Qu


Contents

1 Declaration
2 Acknowledgements
3 Abbreviations
4 Abstract
5 Introduction
6 Literature Review
  6.1 Stochastic Volatility Models
  6.2 Volatility Stylised Facts
  6.3 Kalman Filters
  6.4 Markov Chain Monte Carlo Methods (MCMC)
  6.5 Sequential Monte Carlo Methods / Particle Filtering
7 Models
  7.1 Stochastic Volatility Model (SVM)
  7.2 Realized Volatility
  7.3 Bipower Volatility
  7.4 VIX
8 Data
  8.1 Data transformation
9 Method
  9.1 MCMC Method
    9.1.1 Outline of MCMC Method
  9.2 Particle Filter
    9.2.1 Outline of Particle Filter Methods
    9.2.2 Liu and West Filter
    9.2.3 Particle Learning Filter
  9.3 Models
    9.3.1 How observation errors are distributed
10 Results
  10.1 Marginal Log Likelihoods
  10.2 Sequential Bayes Factor
  10.3 Parameter Estimation
  10.4 Forecast
11 Robustness and Analysis
  11.1 Prior Sensitivity
  11.2 Prior distribution difficulty for Option Implied Volatility
  11.3 Number of iterations
  11.4 Models
12 Conclusion
13 Appendix
  13.1 Prior Specifications
  13.2 Particle Learning Filter
  13.3 Liu and West Filter

List of Figures

1 Difference in Realised and Bipower volatility of S&P500 Index
2 Graph of Data
3 QQ plots of Data and Log Data
4 Sequential Bayes Factor
5 Sequential Bayes Factor
6 Sequential Bayes Factor
7 Liu and West: Latent State of Realised Volatility Model B
8 Particle Learning: Latent State of Realised Volatility Model B
9 MCMC: Latent State of Realised Volatility Model B
10 Liu and West: Latent State of Bipower Volatility Model B
11 Particle Learning: Latent State of Realised Volatility Model B
12 MCMC: Latent State of Bipower Volatility Model B
13 Liu and West: Latent State of Option Volatility Model B
14 Particle Learning: Latent State of Realised Volatility Model B
15 MCMC: Latent State of Option Volatility Model B
16 Probabilistic Forecast of Realised Volatility
17 Probabilistic Forecast of Bipower Volatility
18 Probabilistic Forecast of Option Implied Volatility
19 Particle Learning: Parameter history of Realised Volatility Model B
20 Particle Learning: Parameter history of Bipower Volatility Model B
21 Particle Learning: Parameter history of Option Volatility Model B
22 Liu and West: Parameter history of Realised Volatility Model B
23 Liu and West: Parameter history of Bipower Volatility Model B
24 Liu and West: Parameter history of Option Volatility Model B

List of Tables

1 Summary of S&P 500 data
2 Transformed data
3 Marginal Log Likelihood for Realised Volatility
4 Marginal Log Likelihood for Bipower Volatility
5 Marginal Log Likelihood for Option Implied Volatility
6 Realised volatility: Posterior estimates of parameters
7 Bipower volatility: Posterior estimates of parameters
8 Option Implied volatility: Posterior estimates of parameters
9 Forecast deviation measures
10 Marginal Likelihood sensitivity to changes in Prior


2 Acknowledgements

My sincere gratitude goes to both Dr Valentyn Panchenko and Dr Chris Carter for their supervision and for helping me understand the Bayesian filtering literature; without their guidance I do not think I would have been able to finish my honours year. I thank Dr Gael Martin and Worapree Maneesoonthorn from Monash University for their willingness to share their dataset and for their helpful suggestions on modelling volatility, and Dr Christopher Strickland from Queensland University of Technology for his help with Python. I would also like to thank my parents for their continued support.

3 Abbreviations

Legend:

LW = Liu and West filter
PL = Particle Learning filter
MCMC = Markov chain Monte Carlo
t = Student t-distributed observation errors
A, B, C, D, E and F = models A, B, C, D, E and F respectively

Examples:

PL-C t = Particle Learning filter for model C with Student t-distributed observation errors
LW-B = Liu and West filter for model B with normally distributed observation errors


4 Abstract

Volatility plays an important role in the finance sector, where it is used for pricing securities and managing portfolio risk. We explore three Bayesian filtering methods developed over the last two decades: the forward filtering backward sampling (MCMC) algorithm (1994), the Liu and West filter (2001) and the particle learning algorithm (2010). We apply these methods to three variance measures of the S&P 500 index, the realised, bipower and option implied volatility, to see how the Bayesian filters fare in filtering real-world variance measures to determine the integrated variation (the unobserved, true volatility state of the S&P 500). Stochastic volatility, however, is only one important topic in economics; another is model selection and forecasting, so at the same time we explore stochastic volatility model selection and how the particle filter performs when forecasting into the latest financial crisis (2007).


5 Introduction

Bayesian filtering is a recursive method for solving a dynamic model: observation and state transition noise is filtered out to determine an unobserved latent state variable, which may be, for example, the true state of volatility or the Non-Accelerating Inflation Rate of Unemployment (NAIRU). Current Bayesian filtering methods fall into the categories of the Kalman filter, the Markov chain Monte Carlo method and the particle filter, and they are applied to many real-world problems such as tracking, GPS, robotics and stochastic volatility modelling. The last of these, stochastic volatility modelling, is explored in this paper, with particular attention to model selection and to the effectiveness and efficiency of Markov chain Monte Carlo compared with the particle filter. Another difference from previous papers such as Kim et al (1998) [24], Johannes and Polson (2002) [21] and Nakajima (2010) is that we investigate filtering three different variance measures, the realised volatility, bipower volatility and option implied volatility of the S&P 500 index, to determine the integrated variance of the S&P 500 index.

Why are we interested in the latent state when we already have non-parametric measures of volatility such as the standard variance formula or the realised variance formula? The motivation for studying the latent volatility state is that real data collection contains observation errors, which carry over into the non-parametric measures of variation, so we can either underestimate or overestimate volatility in a given period. This becomes a problem particularly in the business sector, where volatility is used to account for portfolio risk, for business forecasting and to price financial derivatives such as futures and options. The latent volatility state can therefore be described as the true volatility state underlying the variation measure of the time series once the observation and state transition errors have been filtered out via a Bayesian filtering method.

The literature on stochastic volatility modelling began as early as 1986 with Taylor [31], who first suggested solving the stochastic volatility problem by adjusting the dynamic model to be linear, so that the Kalman filter could approximately filter the latent volatility state from the returns of security prices. Because the model was adjusted to be linear, the estimates were not the best maximised likelihood estimates. The Kalman filter requires both the dynamic model to be linear and the observation and state transition errors to be normally distributed in order to find the best linear estimator, so it is a restrictive method for data filtering.

Jacquier, Polson and Rossi (2004) [19] and Kim, Shephard and Chib (1998) [24] were among the first papers to apply Markov chain Monte Carlo (MCMC) to recursively solve for the latent volatility state in stochastic volatility models; both papers showed that the MCMC method can handle more general dynamic models that are non-linear and not normally distributed. With the ability to handle more general models, the literature developed towards filtering volatility jumps and leverage. Shephard and Pitt (1999) later applied stochastic volatility modelling through the particle filter developed by Gordon, Salmond and Smith (1993); as with the MCMC method, the problem can be adjusted so that the particle filter can solve it. Over the last two decades most papers in stochastic volatility filtering treated the observation errors as normally distributed; only recently has work such as Nakajima (2010) looked at fat-tailed distributions for the observation errors, like the Student t-distribution, using the Markov chain Monte Carlo method.

A variety of models is available in the stochastic volatility modelling literature, which can make the modelling process complicated if we are unsure which model to apply in a given situation. Model selection has not been explored thoroughly in this area of research, since the filtering literature is relatively recent. It is therefore important to look at model selection: current papers in stochastic volatility modelling typically do not justify their selected model, so in this paper we explore the reasons behind the different volatility models and the method for selecting between them. Since we apply a Bayesian filter to determine the latent volatility state of the S&P 500 index, we have the benefit of many Bayesian statistical tools for model comparison, such as the marginal log-likelihood and the sequential Bayes factor.

The Markov chain Monte Carlo method is a recursive method for solving dynamic model problems; Carter and Kohn (1994) [8] and Frühwirth-Schnatter (1994) [14] developed the forward filtering backward sampling algorithm. The algorithm can be described as follows: we start with fixed parameters, which we use to recursively solve the system. This creates one path of the behaviour of the latent state, but the parameters could be wrong, so the parameters are then resampled according to either the Gibbs sampling algorithm or the Metropolis-Hastings algorithm to get new parameters, and the procedure is repeated to solve for the latent state.

The sequential Monte Carlo method, or particle filter, as stated before, was developed by Gordon, Salmond and Smith (1993) [16]. It is similar to the MCMC method, except that instead of estimating one path of the latent volatility state we estimate multiple paths of the latent state, but only one time step at a time. We then resample the paths, keeping the good paths and dropping the bad ones. This is where the name sequential Monte Carlo comes from: the states are learnt over each time step, so we sample the states and parameters for the next time step and then resample them when new observations arrive. In most cases the particle filter runs much faster than the MCMC method, but it may require a large number of particles (paths) to produce consistent and efficient estimates of the state and parameters. Sampling the posterior distribution of the system via a particle filter may therefore require a machine with a large amount of memory to obtain very consistent posterior estimates.

Investigating the S&P 500 from 2 July 1996 to 31 December 2008 means we will be modelling the latent state over multiple financial crises, such as the 1997 Asian Financial Crisis, the 2000 dot-com bubble burst and the recent 2007 Global Financial Crisis. We consider three different variance measures of the S&P 500: realised volatility, bipower volatility and option implied volatility. Realised volatility is described by the historical returns of a financial security over a period of time. Bipower volatility is defined from adjacent intraday returns of a financial security, and an interesting property is that the difference between the distributions of the two volatility measures represents volatility jumps. The option implied volatility comes from the spot-based volatility index (VIX) of the Chicago Board Options Exchange (CBOE). We should expect the latent volatility states obtained from realised volatility, bipower volatility and option implied volatility to be the same, since they all measure the same financial security. However this is not the case: realised and bipower volatility do share the same parameter estimates and latent state movement, but the option implied volatility behaves differently from the other two variance measures.

After modelling our three volatility measures we can say that a parsimonious model such as a local level model can be as effective as a more complicated non-linear Heston stochastic volatility model. According to the sequential Bayes factor and the marginal log-likelihood, the best model is model B, an autoregressive process of order one, which is able to capture the behaviour of the variance measures. This implies that the behaviour of the volatility measures is similar even though their levels differ; we can also say that the autoregressive-of-order-one volatility measures are very similar to the true integrated variation, which supports the findings of Anderson et al [2].

Using the particle filter we generate probabilistic forecasts of the three variance measures over the volatile period in 2008 (see section 10). It is fair to say that the predictive probabilistic forecast covers almost all of the real-world variance movement.

With the limited number of comparison tools available for model selection with particle filters, we made further checks to see whether our results are consistent. Our robustness check illustrates that the priors on the parameter distributions can play a big part in model selection, and at the same time the priors can affect the model's fit. Although changing the priors did cause a loss of fit during our robustness check, it did not affect the ranking of the preferred models, so we believe our results are consistent.


6 Literature Review

6.1 Stochastic Volatility Models

In finance, volatility plays an important role in managing portfolio risk, pricing derivative securities and analysing asset risks. Development in volatility modelling over the last three decades has accordingly been rich, with models capturing stochastic (changing) variances by way of conditional variance and stochastic variance.

There are two main approaches to volatility modelling. The first popular approach was the autoregressive conditional heteroskedasticity (ARCH) model and the generalised autoregressive conditional heteroskedasticity (GARCH) model developed by Engle [12], which model conditional variance. Conditional variance is the variance calculated given information on past returns. The ARCH/GARCH models were successful in modelling time-varying volatility clustering, a stylised fact of volatility. However, volatility has several other stylised facts that were not successfully captured by ARCH/GARCH models, so further extensions were created to take into account leverage effects, long memory, persistence and Engle's other volatility stylised facts.

The other approach to volatility modelling is the Bayesian approach explored in this paper. A Bayesian approach evaluates the likelihood (fit) for unknown parameters; this may sound simple, but it can be difficult because there is no exact form of the likelihood function and we would need to evaluate a t-dimensional integral (for t periods of observations). To evaluate this difficult integral, methods for approximating the sampling distribution of the likelihood were developed, including the sequential Monte Carlo and Markov chain Monte Carlo methods. Another option is to apply quasi-maximum likelihood estimation via a Kalman filter to solve the stochastic volatility model; however, since the Kalman filter can only handle linear models with Gaussian errors, the results in Harvey et al (1994) [17] show that it only yields the minimum mean square linear estimator (MMSLE) of the underlying volatility, not the minimum mean square estimator (MMSE). As a result, the Kalman filter works well when the filtered series is linear, but otherwise it is not the best method for non-linear series.

In volatility modelling we generally use terms such as realised volatility, bipower volatility and quadratic variation. Realised volatility is defined as the actual historical volatility of the security price, usually represented over a period of time; it is generally calculated from intraday squared returns over short intervals such as 5 or 15 minutes. Bipower volatility is the historical volatility calculated as the product of the absolute values of returns over two adjacent intraday periods. Anderson et al (2007) [2] showed how this can be used to detect a large jump between two periods of returns: if returns were small for two periods the bipower volatility would be much smaller, but if returns over two periods were large the bipower volatility would be larger. An interesting property linking realised volatility and bipower volatility, noted by Barndorff-Nielsen and Shephard (2004) [4], is that in the presence of volatility jumps the difference between the two measures represents the jump component.

The earliest proposal for adjusting the stochastic volatility model was by Taylor (1986), where the natural logarithm of volatility is modelled as a linear autoregressive process of order one; in this form it is known as the ARSV (Autoregressive Stochastic Volatility) model. An example of an ARSV(1) model is shown below:

$$y_t = \sigma_* \,\sigma_t\, \varepsilon_t$$

$$\log(\sigma_t^2) = \phi \log(\sigma_{t-1}^2) + \eta_t, \quad t = 1, 2, \ldots, T-1$$
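To make the ARSV(1) dynamics concrete, the short Python sketch below simulates the model above; the parameter values (scale, persistence, noise standard deviation) are arbitrary placeholders rather than estimates from this thesis.

    import numpy as np

    def simulate_arsv1(T=1000, sigma_star=0.01, phi=0.95, sigma_eta=0.25, seed=0):
        """Simulate returns y_t from an ARSV(1) model:
        y_t = sigma_* * sigma_t * eps_t,  log(sigma_t^2) = phi*log(sigma_{t-1}^2) + eta_t."""
        rng = np.random.default_rng(seed)
        log_sigma2 = np.zeros(T)   # log-volatility state, started at its mean of zero
        y = np.zeros(T)
        for t in range(1, T):
            log_sigma2[t] = phi * log_sigma2[t - 1] + sigma_eta * rng.standard_normal()
            y[t] = sigma_star * np.exp(0.5 * log_sigma2[t]) * rng.standard_normal()
        return y, log_sigma2

    returns, states = simulate_arsv1()
    print(returns[:5])

Persistence values of phi close to one reproduce the volatility clustering discussed in the next subsection.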


6.2 Volatility Stylised Facts

In their paper, Engle and Patton (2001) [12] explain that a good volatility model should capture the stylised facts of volatility. The first fact is volatility persistence: volatility generally clusters and moves in the same direction for extended periods, so large movements in volatility lead to more large movements in the future. The second stylised fact is mean reversion: after periods of large movements in volatility, the volatility state eventually returns towards its usual (mean) level. The third fact is the leverage effect: when returns are negative, volatility tends to be larger, whereas when returns are positive, volatility tends to be smaller.

Over the years, developments in stochastic volatility models have suggested including a jump parameter, as empirical evidence points to periods with positive jumps in volatility. Eraker et al (2003) [13] report evidence of both volatility jumps and jumps in returns. Return jumps can cause large movements leading to a market crash, whereas volatility jumps can remain persistent and affect future returns. Eraker et al (2003) [13] suggest that incorporating jumps in the stochastic volatility model allows us to better model rapid volatility increases such as market crashes, and provide evidence supporting other work, such as Bates (2000) and Pan (2002), that including jumps removes model misspecification.

Recent papers such as Nakajima (2011) [29] and Malik and Pitt (2011) [26] explore GARCH models and filtering methods to capture leverage effects, jumps and volatility persistence. They show that, on average, for most of the stochastic models they used, the MCMC method performed better than the GARCH counterpart; Nakajima (2011) also found that stochastic models incorporating jumps, fat tails and leverage tend to do much better than any other specification.


6.3 Kalman Filters

Earlier methods for problems such as (i) prediction of random signals and (ii) separation of random signals from random noise (Kalman, 1960) [22] were pioneered by Wiener (1949). However, there were limitations: the filter was determined by impulse response functions, so if the problem is complex (a large number of observations or a non-stationary series) it becomes difficult to derive the impulse response function. Another limitation is that non-stationary series cannot be filtered, because the method assumes that the noise and signal of the series are stationary.

In 1960, R. Kalman published his first extension of the Wiener filter for solving dynamic models in discrete time, allowing for non-stationary time series, and in the following year Kalman and Bucy (1961) published another version for continuous time. The Kalman filter is a recursive method for solving linear data problems with observation noise, measurement noise or both. Welch (1995) [32] states that Ho and Lee (1964) determined that the Kalman filter is the optimal Bayesian filter under three assumptions for the time series, i.e. linear, quadratic and Gaussian.

The restrictive assumptions required for an optimal Kalman filter matter here because in this paper we filter time series for the underlying volatility, and stochastic volatility for large data series can be a multi-dimensional series. As a result, it took many years before a viable adaptation of a likelihood function for the filter equation appeared; in 1994, Harvey et al suggested the use of quasi-maximum likelihood functions.

Further extensions of the Kalman filter were developed for non-linearity or non-Gaussian data series, such as the extended Kalman filter, the Gaussian sum filter and the unscented Kalman filter. However, each new method still suffers when there is severe non-linearity and/or non-Gaussianity, and so a different class of filters was developed using Monte Carlo (sampling) methods.

6.4 Markov Chain Monte Carlo Methods (MCMC)

The concept of applying a Monte Carlo approximation to a difficult integral is as follows: suppose we want to evaluate $\int_x f(x)\,dP(x)$. If we can draw $N$ independent and identically distributed random samples $\{x_1, x_2, \ldots, x_N\}$ from the probability distribution $P(x)$, then we can approximate the integral by $f_N = \frac{1}{N}\sum_{i=1}^{N} f(x_i)$. By the strong law of large numbers, the estimate $f_N$ converges almost surely to the expected value $E[f(x)]$. Two main problems arise, however: first, how do we draw random samples from the probability distribution, and second, what method do we use to determine the expectation of the function.

A Markov chain is a type of stochastic process in which only the current state affects the outcome of a future state. Markov chains have a few properties that must be satisfied for MCMC to be a viable filtering option: the distribution we want to sample from must be homogeneous, reversible and ergodic. For a Markov chain to be homogeneous, the transition probabilities must depend on the elapsed time of the process but not on the absolute time; so if we calculate the probability of moving from state i to state j over one day, that transition probability does not change over time. Reversibility requires that if it is possible to move from state i to state j, it is also possible to move from state j to state i. For a Markov chain to be ergodic, it must be possible to reach any other state from any state, though not necessarily in one jump. What this means for Monte Carlo methods is that if our problem exhibits a Markov process, then we can use Monte Carlo sampling to draw samples; these samples will also form a Markov process, and the chain can be constructed so that its samples converge to the distribution we want to sample from.


Markov chain Monte Carlo filters represent a general family of filtering algorithms built on the idea that if a multivariate distribution is difficult to integrate, we can draw random samples forming a homogeneous, reversible and ergodic Markov chain whose invariant distribution matches our target density (the problem). MCMC filters are also known as offline or batch sampling methods, since in most versions of the MCMC method, given an initial draw of the parameters and state variables, each subsequent run of the filter draws new parameters and state variables conditional on the previous draw. Jacquier et al (1994) [20] provided one of the earlier implementations of the MCMC method for stochastic volatility models, and reported that it gave very accurate results using the Gibbs sampling and Metropolis-Hastings algorithms.

Geweke and Tanizaki (2001) [15] explain the benefits of using the Metropolis-Hastings algorithm and Gibbs sampling together, but first we need to define the two algorithms. The Metropolis-Hastings algorithm is a version of the MCMC algorithm in which, if it is difficult to draw samples from the exact distribution, we instead draw from a proposal distribution and then accept or reject the draws according to an acceptance/rejection criterion; if we reject, we redraw the samples. Gibbs sampling can be described as a special case of the Metropolis-Hastings algorithm in which, under certain conditions, the MCMC problem becomes easier to solve. Casella and George (1992) [9] explain that normally, for a Metropolis-Hastings algorithm, we would need to approximate the probability density function by making random draws. With Gibbs sampling we do not need to approximate the density: we can sample from the conditional distributions, which we know from our problem, and under suitable conditions, if we generate a very large sample from the conditional distributions (e.g. f(x|y) and f(y|x)), the results converge to the true marginal density f(x). Carlin et al (1992) [7] and Carter and Kohn (1994) [8] illustrated how Gibbs sampling can be used to solve non-linear and non-Gaussian state space models, which was important for later work applying MCMC methods to stochastic volatility models, since problems can be simplified further by not requiring draws from the true marginal density or from a proposal density.
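To make the accept/reject mechanics concrete, below is a minimal random-walk Metropolis-Hastings sketch in Python for a generic one-dimensional log target. It is an illustration only, not the sampler used later in this thesis, and the step size and iteration count are arbitrary choices.

    import numpy as np

    def metropolis_hastings(log_target, theta0, n_iter=5000, step=0.5, seed=0):
        """Random-walk Metropolis-Hastings: propose from a normal centred on the
        current value, accept with probability min(1, target(new)/target(old))."""
        rng = np.random.default_rng(seed)
        theta = theta0
        draws = np.empty(n_iter)
        for i in range(n_iter):
            proposal = theta + step * rng.standard_normal()
            log_accept = log_target(proposal) - log_target(theta)
            if np.log(rng.uniform()) < log_accept:   # accept; otherwise keep the current draw
                theta = proposal
            draws[i] = theta
        return draws

    # Example: sample from a standard normal target density.
    samples = metropolis_hastings(lambda x: -0.5 * x ** 2, theta0=0.0)
    print(samples.mean(), samples.std())

Gibbs sampling replaces the proposal/acceptance step with direct draws from the full conditional distributions when these are available in closed form.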

6.5 Sequential Monte Carlo Methods / Particle Filtering

Recent developments in Bayesian estimation of stochastic volatility led to the use of particle filters. The first published paper, by Gordon, Salmond and Smith (1993) [16], illustrates a method commonly called the bootstrap filter; the idea was first mentioned in the 1970s, but owing to constraints on computer memory and power, MCMC methods remained the popular choice for non-linear recursive filtering. The particle filter is a recursive approximation method that filters random variables using particles: a continuous variable is approximated by discrete sampling points. The method has become popular recently due to the increase in computing power.

The bootstrap filter (Gordon et al (1993) [16]) is a sequential importance sampling method in which we eliminate particles with low importance weights and multiply particles with high importance, to avoid the weight degeneracy problem that occurs in plain sequential importance sampling. A benefit of the bootstrap filter is that it is generally quick and easy to implement for a large variety of problems, as we only need to change the importance weight distribution to adapt it to a new problem.
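To illustrate the propagate-weight-resample cycle just described, here is a minimal bootstrap-filter sketch in Python for a simple local-level state space model with normally distributed observation errors. The model, parameter values and particle count are illustrative assumptions, not the specifications estimated later in this thesis.

    import numpy as np

    def bootstrap_filter(y, alpha, beta, tau, sigma_obs, n_particles=2000, seed=0):
        """Bootstrap filter for x_t = alpha + beta*x_{t-1} + tau*eta_t,
        y_t = x_t + sigma_obs*eps_t. Returns the filtered state means."""
        rng = np.random.default_rng(seed)
        particles = rng.normal(np.mean(y), np.std(y), n_particles)  # rough initial cloud
        means = np.empty(len(y))
        for t, obs in enumerate(y):
            # Propagate each particle through the state transition equation.
            particles = alpha + beta * particles + tau * rng.standard_normal(n_particles)
            # Weight by the observation density and normalise the weights.
            weights = np.exp(-0.5 * ((obs - particles) / sigma_obs) ** 2)
            weights /= weights.sum()
            # Resample: keep high-weight particles, drop low-weight ones.
            particles = rng.choice(particles, size=n_particles, p=weights)
            means[t] = particles.mean()
        return means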

The auxiliary particle filter was developed by Pitt and Shephard (1999) [30] as an improvement over the bootstrap filter, allowing us to better approximate the tails of the distribution. It works by selecting a proposal distribution with fatter tails than the true posterior distribution; by resampling from the proposal distribution we obtain a larger variety of particles than by resampling from the posterior distribution. This reduces the impact of the convergence rate on reaching the true posterior estimate, gives a better picture of the posterior distribution, and therefore helps reduce the particle decay problem of the bootstrap filter.

Particle filters have the advantage over MCMC methods that they are generally much faster to apply and to obtain results from. One disadvantage, however, is that particle filters with unknown parameters have difficulty obtaining optimal estimates. Several classes of methods appear in the particle filter literature; one is the maximum likelihood method, in which we start with initial estimates of the parameters, run a version of the particle filter such as the bootstrap or auxiliary filter, and then change the parameters slightly until we obtain the largest likelihood value found in the search. The problem with the maximum likelihood method is that it can be slow if the number of parameters to estimate is large or the data series is long, so it can take a long time to converge to the maximum likelihood value for a complicated problem. Another problem is that initial estimates far from the true parameter values can hold the search back, so it may take longer to reach the results or the true parameters may never be found. Malik and Pitt (2011) [26] developed a method that improves the speed of finding the parameters by smoothing the importance weights and sorting the particles at each time step, which in the long run improves the efficiency of the algorithm for determining the unknown parameters.

Another method for determining the unknown parameters is the MCMC particle filter, where the Metropolis-Hastings or Gibbs sampling algorithm is used to sample the unknown parameters, these parameters are passed through the auxiliary particle filter to obtain the latent state, and the two steps are repeated for many iterations until the parameters converge. The MCMC particle filter, explained in depth in Andrieu et al [3] and discussed in Kantas et al (2009) [23], can also be a slow way to determine unknown parameters, as the particle filter must be run repeatedly until the parameters converge. Although the MCMC particle filter can be an effective method, it does not exploit the idea underlying the particle filter, namely that the latent states are learnt sequentially over time, so it should also be possible for the parameters to be learnt sequentially over time.

The last class of particle filter methods, which appears in this paper, is the class of sequential parameter learning particle filters, in which the parameters and latent states are learnt sequentially at each time step of the algorithm. Developments in sequential parameter estimation with particle filters include papers by Liu and West (2001), Storvik (2002) and Carvalho, Johannes, Lopes and Polson (2010). Sequential parameter learning has been difficult because, since the development of the bootstrap filter, there was no successful way to sample a parameter evolution: using the parameter distribution conditional on the observations results in particle decay. With no parameter evolution we are left with particle decay in the parameter particles (in general the particle filter includes an importance weighting step that keeps only the good particles, so the number of distinct parameter particles decreases until only one is left) within a few time steps of the particle filter algorithm, which does not help in estimating unknown parameters.

One of the first successful sequential parameter learning methods is the Liu and West filter (2001), which models the parameter distribution as a mixture of multivariate normals. Using this mixture to generate parameter samples avoids the particle decay problem, because the mixture distribution keeps dispersion in the parameter particles. The steps of the Liu and West filter are discussed in a later section. The second popular sequential parameter learning method is the Storvik filter (2002), in which the parameter distribution depends on recursively updated sufficient statistics. Updating the sufficient statistics is an independent process, which solves the particle decay problem since the parameter distribution does not shrink during the resampling step that selects the good particles. The latest sequential parameter learning method is the particle learning filter developed by Carvalho et al (2010), which is similar to the Storvik filter in that sufficient statistics are used to sample from the parameter distribution, but it improves on the Storvik filter's efficiency by incorporating a state sufficient statistic, which allows faster convergence by minimising the variance of the importance weights.
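As a sketch of the idea behind the Liu and West mixture-of-normals step (the full filter, including the auxiliary look-ahead, is outlined later in this thesis), the Python fragment below jitters a cloud of parameter particles with the kernel-shrinkage rule; the discount factor used here is an illustrative choice.

    import numpy as np

    def liu_west_refresh(theta_particles, delta=0.99, rng=None):
        """Jitter parameter particles with the Liu and West (2001) kernel:
        theta_i -> N(a*theta_i + (1-a)*theta_bar, (1-a^2)*V), which keeps the first
        two moments of the particle cloud while avoiding particle decay.
        theta_particles: (N, d) array of parameter particles."""
        rng = rng or np.random.default_rng()
        a = (3 * delta - 1) / (2 * delta)                          # shrinkage coefficient
        theta_bar = theta_particles.mean(axis=0)                   # particle mean
        V = np.atleast_2d(np.cov(theta_particles, rowvar=False))   # particle covariance
        means = a * theta_particles + (1 - a) * theta_bar
        return np.array([rng.multivariate_normal(m, (1 - a ** 2) * V) for m in means])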


7 Models

In this section we define latent volatility. First, let the spot asset price at time $t$ be $S_t$. If we assume a continuous diffusion process then, similarly to Harvey et al (1998) and Taylor (1986):

$$dS_t = \mu_t\,dt + \sqrt{V_t}\,dB_t^p \qquad (1)$$

$$dV_t = \kappa(\theta - V_t)\,dt + \tau\,dB_t^V \qquad (2)$$

where the parameters $(\mu_t, \kappa, \theta, \tau)$ evolve with volatility and $B_t^p$, $B_t^V$ are Brownian motions that may be correlated.

If there are jumps in the model we can add another component to equation (1) to capture jumps in asset prices, resulting in the same equation found in Maneesoonthorn et al (2012) [27] and Eraker et al (2003) [13]:

$$dS_t = \mu_t\,dt + \sqrt{V_t}\,dB_t^p + dJ_t^p \qquad (3)$$

The jump component $dJ_t^p$ is a random jump process such that $dJ_t^p = Z_t^p\,dN_t^p$, where the jump size $Z_t^p$ follows an exponential distribution and the occurrence of a jump, $dN_t^p$, follows a Bernoulli distribution.

To incorporate the volatility jumps that Eraker et al (2003) [13] suggest improve model fit, we modify equation (2) by incorporating a jump component:

$$d\log(V_t) = \kappa(\theta - \log(V_t))\,dt + \tau\,dB_t^V + dJ_t^V \qquad (4)$$


Similarly to the jump component for the asset price, the volatility jump is defined as $dJ_t^V = Z_t^V\,dN_t^V$, where $Z_t^V$ is exponentially distributed (the size of the volatility jump) and $dN_t^V$ is a Bernoulli variable modelling the occurrence of a volatility jump.

7.1 Stochastic Volatility Model (SVM)

Data collection is discrete, but the model described by equations (1) (observation equation) and (2) (state equation) is in continuous time; following Kim et al (1998) and Maneesoonthorn et al (2012), an Euler discretisation is applied to equations (1) and (2):

$$S_t = S_{t-\Delta t} + \mu_t\,\Delta t + \sqrt{V_{t-\Delta t}}\,\xi_{1t} \qquad (5)$$

$$V_t = V_{t-\Delta t} + \kappa\theta\,\Delta t - \kappa V_{t-\Delta t}\,\Delta t + \tau\xi_{2t} \qquad (6)$$

where $(\xi_{1t}, \xi_{2t}) \sim N(0_2, \Sigma)$, $\Sigma = \begin{bmatrix} 1 & \rho \\ \rho & \sigma_v^2 \end{bmatrix}$.

If we set $\Delta t = 1$ the equations above describe the daily change in latent variance. Equation (6) becomes $V_t = V_{t-1} + \kappa\theta - \kappa V_{t-1} + \tau\xi_{2t} = \kappa\theta + (1-\kappa)V_{t-1} + \tau\xi_{2t}$. If we let $\alpha_v = \kappa\theta$ and $\beta_v = 1 - \kappa$ then equation (6) can be transformed into a more simplified version:

$$V_t = \alpha_v + \beta_v V_{t-1} + \tau\xi_{2t} \qquad (7)$$

The model allows for leverage effects between the stock price and the volatility; however, Eraker et al (2003) [13] suggest that models without jumps in both the asset price and the volatility are generally misspecified. Euler discretisation is therefore also applied to equations (3) and (4):


$$S_t = S_{t-\Delta t} + \mu_t\,\Delta t + \sqrt{V_{t-\Delta t}}\,\xi_{1t} + Z_t^S\,\Delta N_t^S \qquad (8)$$

$$V_t = V_{t-\Delta t} + \kappa\theta\,\Delta t - \kappa V_{t-\Delta t}\,\Delta t + \tau\xi_{2t} + Z_t^V\,\Delta N_t^V \qquad (9)$$

where $Z_t^V \overset{iid}{\sim} \exp(\mu_v)$, $Z_t^S \overset{iid}{\sim} \exp(\mu_s)$, $\Delta N_t^V \overset{iid}{\sim} \text{Bernoulli}(\delta_v \Delta t)$ and $\Delta N_t^S \overset{iid}{\sim} \text{Bernoulli}(\delta_S \Delta t)$.

If we let $\Delta t = 1$ and reparameterise with $\alpha_v = \kappa\theta$ and $\beta_v = 1 - \kappa$, equation (9) can be rewritten as:

$$V_t = \alpha_v + \beta_v V_{t-1} + \tau\xi_{2t} + Z_t^V\,\Delta N_t^V \qquad (10)$$
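As an illustration of equation (10), the Python sketch below simulates a discretised variance path with jumps; the parameter values are arbitrary placeholders, not estimates reported in this thesis.

    import numpy as np

    def simulate_variance_with_jumps(T=500, alpha_v=0.02, beta_v=0.95, tau=0.05,
                                     mu_v=0.2, delta_v=0.01, seed=0):
        """Simulate V_t = alpha_v + beta_v*V_{t-1} + tau*xi_t + Z_t*dN_t (equation (10)),
        with exponential jump sizes Z_t and Bernoulli jump indicators dN_t."""
        rng = np.random.default_rng(seed)
        V = np.empty(T)
        V[0] = alpha_v / (1 - beta_v)              # start at the no-jump mean level
        for t in range(1, T):
            jump = rng.exponential(mu_v) * rng.binomial(1, delta_v)
            V[t] = alpha_v + beta_v * V[t - 1] + tau * rng.standard_normal() + jump
            V[t] = max(V[t], 1e-8)                 # keep the variance strictly positive
        return V

    print(simulate_variance_with_jumps()[:5])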

7.2 Realized Volatility

Extensive studies of the behaviour of realised variance have been undertaken since the mid 1990s, with many papers modelling volatility measures by a generalised autoregressive conditional heteroskedasticity (GARCH) model or by latent stochastic volatility models. Survey papers on realised volatility include McAleer and Medeiros (2008) [28], which reviews the last decade of work on realised volatility, and Corsi et al (2008) [11], which proposes different models (HAR-GARCH and ARFIMA) to forecast and model realised volatility. Realised variance is formally defined as the sum of squared returns, and in this paper it is defined as the daily sum of squared returns over five-minute intervals.

Anderson et al (2003) found that, in the absence of microstructure noise, the realised variance converges in probability to the integrated variance, $RV_t \overset{p}{\to} IV_t$, where $IV_t = \int_{s_{i-1}}^{s_i} \sigma^2(t+s-1)\,ds$; realised variation can therefore be a consistent estimator of integrated variation. We are interested in integrated variation because it represents the true underlying volatility state. Barndorff-Nielsen and Shephard (2002) showed that, under the no-microstructure-noise assumption, realised variation is asymptotically normally distributed, of the form:

$$\frac{\sqrt{n_t}}{\sqrt{2\,IQ_t}}\,(RV_t - IV_t) \overset{d}{\to} N(0, 1)$$

where $IQ_t = \int_0^1 \sigma^4(t+s-1)\,ds$ is the integrated quarticity.

Many different papers offer different models for realised volatility. Christoffersen et al (2007) [10] define realised variance to have the form $RV_{t+1} = E[RV_{t+1}|V_t] + u_t$; using the results of Ait-Sahalia (2007) [1], we can write the observation equation in the form:

$$RV_{t+1} = \theta + \left[\frac{\exp(-\kappa/252) - 1}{-\kappa/252}\right](V_t - \theta) + u_t$$

By substituting $\beta = 1 - \kappa$ and $\alpha = \kappa\theta$ we can rewrite the above equation with the same parameters as equations (7) and (10):

$$RV_{t+1} = \frac{\alpha}{1-\beta} + \left[\frac{\exp\!\left(\frac{\beta-1}{252}\right) - 1}{\frac{\beta-1}{252}}\right](V_t - \theta) + u_t \qquad (11)$$

Another model found in the literature on modelling realised volatility is

$$RV_{t+1} = V_t + \sigma_{rv}\,u_t \qquad (12)$$

a form shown in Lopes and Tsay (2011) [25], where they give an example of modelling the realised volatility of Alcoa.

Many papers apply variants of Heston's stochastic volatility models, such as the square root model, the standard stochastic volatility model and the three-halves stochastic volatility model, to fit realised volatility. Below are simplified examples of the three stochastic volatility forms:

$$V_{t+1} = \alpha + \beta V_t + \tau_{rv}\sqrt{V_t}\,u_t \qquad (13)$$

$$V_{t+1} = \alpha + \beta V_t + \tau_{rv} V_t\,u_t \qquad (14)$$

$$V_{t+1} = \alpha + \beta V_t + \tau_{rv} V_t^{3/2}\,u_t \qquad (15)$$

7.3 Bipower Volatility

Bipower volatility is described by Anderson et al (2007) as the variation measure in the absence of jumps in volatility. The bipower volatility has the mathematical form

$$BV_t = \frac{\pi}{2} \sum_{t-1 < t_i \le t} |r_{t_i}|\,|r_{t_{i-1}}|$$

where $\sqrt{2}/\sqrt{\pi}$ is the expected value of the absolute value of a standard normal random variable.
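For concreteness, the Python sketch below computes the daily realised variance and bipower variation from a single day of 5-minute returns, following the two formulas above; the simulated returns are purely illustrative, and the thesis dataset itself was constructed and cleaned as described in the Data section.

    import numpy as np

    def realised_and_bipower(intraday_returns):
        """Daily realised variance RV_t = sum(r_i^2) and bipower variation
        BV_t = (pi/2) * sum(|r_i| * |r_{i-1}|) from one day of 5-minute returns."""
        r = np.asarray(intraday_returns)
        rv = np.sum(r ** 2)
        bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
        return rv, bv

    # Example with simulated 5-minute returns for one trading day (78 intervals).
    rng = np.random.default_rng(1)
    rv, bv = realised_and_bipower(rng.normal(0, 0.001, 78))
    print(rv, bv, max(rv - bv, 0.0))   # the difference proxies the jump component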

The difference between the realised variation and the bipower variation is shown in Figure 1. Anderson et al (2007) suggest that only when the difference between realised volatility and bipower volatility is significantly positive is there a volatility jump in that period; otherwise, small differences may simply indicate noise in the data collection. From Figure 1 we can see a large spike on 11 January 2001, which coincides with the U.S. Federal Trade Commission approving the merger between AOL and Time Warner. Another spike occurs on 18 September 2007, which coincides with the Bank of England injecting 4.4 billion pounds into the U.K. financial system and the U.S. Federal Reserve cutting interest rates. Eraker et al (2003) suggest that if there are volatility jumps in the dataset then, to avoid model misspecification, it is better to take into account both price and volatility jumps.

Figure 1: Difference in Realised and Bipower volatility of S&P500 Index

7.4 VIX

To price options we need to add a volatility risk premium to the latent volatility equation, giving the risk-neutral stochastic volatility process. Using equation (2) for the risk-neutral dynamics and expressing the volatility risk premium as $\lambda_{rn} V_t$:

$$dV_t = \left(\kappa(\theta - V_t) + \lambda_{rn} V_t\right)dt + \tau\,dB_t^*$$

$$dV_t = (\kappa - \lambda_{rn})\left(\kappa\theta/(\kappa - \lambda_{rn}) - V_t\right)dt + \tau\,dB_t^*$$

$$dV_t = \kappa^*(\theta^* - V_t)\,dt + \tau\,dB_t^* \qquad (16)$$


where $dB_t^*$ is a Brownian motion for the risk-neutral process, $\kappa^* = \kappa - \lambda_{rn}$ and $\theta^* = \kappa\theta/\kappa^*$. Taking the Euler discretisation of equation (16):

$$V_{t+1} = V_t + \kappa^*\theta^* - \kappa^* V_t + \tau\xi_t^*$$

$$V_{t+1} = \alpha^* + \beta^* V_t + \tau\xi_t^* \qquad (17)$$

where we let $\alpha^* = \kappa^*\theta^* = \kappa\theta$ and $\beta^* = 1 - \kappa^* = 1 + \lambda_{rn} - \kappa$ be the risk-neutral parameters of the stochastic volatility model. The $\lambda_{rn}$ parameter represents the size of the risk premium placed when pricing options, so the volatility risk premium adjusts with the size of the latent volatility.

As with realised volatility, different papers use different models, so in this paper we explore the different VIX models to see which performs better.


8 Data

To model realised volatility, bipower volatility and option implied volatility we use data on the S&P 500 spot and option index. The observation range of the collected data is July 1997 to August 2007. It covers many significant financial events, such as the 1997 Asian Financial Crisis, the 2001 dot-com bubble burst and the start of the recent 2007 Global Financial Crisis.

The data consist of three measures of variance:

1. Realised variance: $RV_t = \sum_{t-1 < t_i \le t} r_{t_i}^2$

2. Bipower variation: $BV_t = \frac{\pi}{2} \sum_{t-1 < t_i \le t} |r_{t_i}|\,|r_{t_{i-1}}|$

3. Option implied variation, denoted $MF_t = (VIX_t/100)^2$, a variance measure constructed from the prices of options, where VIX is the volatility index available from the Chicago Board Options Exchange.

Microstructure noise can be inherent in high-frequency data, which is a problem when modelling such data. Microstructure noise is any trading activity that can bias the collected data and the modelling phase; it arises from infrequent trading, price discreteness (rounding of prices) and bounces between bid and ask prices. The data have been cleaned using the method described in Maneesoonthorn et al (2012) [27] and Brownlees and Gallo (2006) [6], which should remove most microstructure noise problems. In this cleaning approach, each day's time series is filtered so that outliers are trimmed according to a trimming parameter, preserving the behaviour of the series for that day.

Realised variance is calculated from the annualised 5-minute intraday returns for each trading day. From Figure 2, we see that the first 1500 observations oscillate between 0 and 0.2, with occasional large spikes towards 0.3 around 1996, 1997, 1998 and 2000, which shows that during financial crises volatility tends to be more unstable and spikes above normal levels. After the first 1500 observations (around 2003) the realised variance fluctuates less, until August 2007 (when the defaults of sub-prime mortgages began to affect financial institutions), after which volatility began fluctuating more, with a large spike to 1.03 in October 2008.

The bipower variance is calculated from the annualised returns spaced 5 minutes apart, constructed to represent return movements. From Figure 2 we notice that the bipower variance closely follows the shape and size of the realised variance.

The VIX will from now on be described as the option implied volatility. From Figure 2, its annualised variance over stable financial periods is always higher than both the realised and bipower volatility. There is also a lag in its movement compared with realised and bipower volatility: it takes a few days before a movement in realised/bipower volatility causes the option implied volatility to move. This makes sense, as the option volatility is used to price securities over three or more months, so one-time volatility shocks do not enter the option volatility until a few days later.

Figure 3 shows the quantile-quantile plots of the data and of the natural logarithm of the data. The non-transformed variation measures are not normally distributed, as they do not fit well against the straight line of the normal quantiles. Interestingly, however, if we transform the variation measures by the natural logarithm, they fit the straight line well, so it is plausible to assume that the natural logarithm of the high-frequency variation measures is normally distributed or Student t-distributed.
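A minimal Python sketch of the check behind Figure 3, using scipy's probplot; the lognormal placeholder series below stands in for the real variance data, which in practice would be loaded into a NumPy array.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    # rv: raw realised-variance series (placeholder data here for illustration).
    rv = np.random.default_rng(0).lognormal(mean=-4.5, sigma=1.0, size=2500)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    stats.probplot(rv, dist="norm", plot=ax1)          # raw data: clear departure from the line
    ax1.set_title("RV Q-Q Plot")
    stats.probplot(np.log(rv), dist="norm", plot=ax2)  # log data: roughly normal
    ax2.set_title("log RV Q-Q Plot")
    plt.tight_layout()
    plt.show()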


              Realised Volatility   Bipower Volatility   Volatility Index
Minimum       0.000667              0.000383             0.009781
1st Quartile  0.006112              0.005402             0.024930
Median        0.011660              0.010440             0.042560
Mean          0.022290              0.020770             0.054050
3rd Quartile  0.021750              0.019760             0.062250
Maximum       1.036000              1.033000             0.653800

Table 1: Summary of S&P 500 data

8.1 Data transformation

Due to the scale of the raw data series, taking the natural logarithm would make the majority of the observations negative. This can affect the estimation of the parameters and latent states, since the latent state must always remain positive, so we multiplied the raw series by ten thousand. The table below gives the summary statistics of the natural logarithm of ten thousand multiplied by the raw data series.

              Realised Volatility   Bipower Volatility   Option Implied Volatility
Minimum       6.672                 3.826                97.81
1st Quartile  59.898                53.092               243.20
Median        113.163               100.739              419.43
Mean          173.717               161.059              474.85
3rd Quartile  203.928               189.164              601.72
Maximum       2993.111              3459.148             2092.15

Table 2: Transformed data
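A short Python sketch of the rescaling just described, using a few illustrative values from Table 1 in place of the full raw series:

    import numpy as np

    raw = np.array([0.000667, 0.006112, 0.011660, 0.021750, 1.036000])  # illustrative values
    scaled = 10_000 * raw              # multiply the raw series by ten thousand
    log_scaled = np.log(scaled)        # natural logarithm of the rescaled series
    print(scaled.round(3))
    print(log_scaled.round(3))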


Figure 2: Graph of Data


[Figure 3 consists of six quantile-quantile panels: RV Q-Q Plot, BV Q-Q Plot, VIX Q-Q Plot, Log RV Q-Q Plot, Log BV Q-Q Plot and Log VIX Q-Q Plot, each plotting Sample Quantiles against Theoretical Quantiles.]

Figure 3: QQ plots of Data and Log Data



9 Method

9.1 MCMC Method

9.1.1 Outline of MCMC Method

Markov chain Monte Carlo methods are a family of algorithms that use the concept of Markov chains to propagate future samples of the latent variables. For all models we applied a burn-in of 2000 iterations followed by 5000 draws.

The Forward Filtering Backward Sampling (FFBS) method was applied to represent the Markov chain Monte Carlo filtering methods. As the name implies there are two steps in this application: the first is the forward filter, which is described in Hore et al. (2010) [18] as an evolution and update step; the second is the backward sampling, where we draw samples from the joint distribution of the states.

Forward Filter

Evolution step: Using Bayes' theorem and assuming the states follow a Markov chain, the joint distribution of the past, present and future state variables is

P(x_{t-1}, x_t, x_{t+1} | Y_t) = P(x_{t-1}, x_t | Y_t) P(x_{t+1} | x_t, x_{t-1}, Y_t) = P(x_{t-1}, x_t | Y_t) P(x_{t+1} | x_t)

Since the right hand side contains P(x_{t-1}, x_t | Y_t), which is known from the previous step, we can marginalise out x_{t-1}, so the distribution of the state variables can be written as

P(x_t, x_{t+1} | Y_t) = P(x_t | Y_t) P(x_{t+1} | x_t)



Update step: Applying Bayes' theorem to the above,

P(x_t, x_{t+1} | Y_{t+1}) ∝ P(x_t, x_{t+1}, Y_{t+1} | Y_t) = P(x_t, x_{t+1} | Y_t) P(Y_{t+1} | x_t, x_{t+1})

The first term P(x_t, x_{t+1} | Y_t) is available from the evolution step and the second term P(Y_{t+1} | x_t, x_{t+1}) is determined by the observation equation.

Backward sampling

As the name of the step implies, we sample each state from a distribution conditioned on the states already drawn, so the joint distribution has the form

P(x_1, x_2, ..., x_t | Y_t) = P(x_t, x_{t-1} | Y_t) \prod_{i=t-2}^{1} p(x_i | x_{i+1}, x_{i+2}, ..., x_t, Y_t)
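As a concrete illustration of the two passes described above, the sketch below applies forward filtering backward sampling to the simplest case, the local level model of Model A, with the variances treated as known. This is a minimal sketch under my own assumptions (known variances, illustrative values), not the thesis implementation.

    import numpy as np

    def ffbs_local_level(y, sigma2_obs, tau2_state, m0=0.0, C0=10.0, rng=None):
        """One FFBS draw of the states for y_t = x_t + eps_t, x_t = x_{t-1} + eta_t."""
        rng = np.random.default_rng() if rng is None else rng
        T = len(y)
        m = np.zeros(T)   # filtered means  E[x_t | y_1:t]
        C = np.zeros(T)   # filtered variances Var[x_t | y_1:t]

        # Forward filter (Kalman recursions for the local level model).
        a, R = m0, C0 + tau2_state
        for t in range(T):
            Q = R + sigma2_obs                 # predictive variance of y_t
            K = R / Q                          # Kalman gain
            m[t] = a + K * (y[t] - a)          # update step
            C[t] = R * sigma2_obs / Q
            a, R = m[t], C[t] + tau2_state     # evolution step for t+1

        # Backward sampling: draw x_T, then x_t | x_{t+1}, y_1:t for t = T-1,...,1.
        x = np.zeros(T)
        x[T - 1] = rng.normal(m[T - 1], np.sqrt(C[T - 1]))
        for t in range(T - 2, -1, -1):
            B = C[t] / (C[t] + tau2_state)
            h = m[t] + B * (x[t + 1] - m[t])
            H = C[t] * (1.0 - B)
            x[t] = rng.normal(h, np.sqrt(H))
        return x

    # Illustrative use on simulated data (not the thesis data).
    rng = np.random.default_rng(1)
    true_x = np.cumsum(rng.normal(0, 0.1, 500))
    y = true_x + rng.normal(0, 0.3, 500)
    draw = ffbs_local_level(y, sigma2_obs=0.09, tau2_state=0.01, rng=rng)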

9.2 Particle Filter

9.2.1 Outline of Particle Filter Methods

Two particle filter methods, the Liu and West (LW) filter and the particle learning (PL) filter, were used to model the realised variance, the bipower variance and the option implied variance. Both methods perform sequential parameter estimation, allowing the user to track how both the states and the parameters evolve over time.

Earlier approaches that could estimate both parameters and states either ran a particle filter and adjusted the initial parameters by maximum likelihood estimation until the best fit was found, or first applied an MCMC method to obtain draws from the parameter distribution and then plugged those parameters into a particle filter algorithm. These two general approaches either do not take advantage of the sequential nature of the particle filter (MCMC followed by a particle filter) or are computationally difficult and expensive (running a particle filter and applying MLE to obtain parameter estimates). N = 5000 particles were used for both the Liu and West and the particle learning filters; this allows us to compare the effectiveness of the particle filters and the MCMC filter, as the number of samples drawn is equal.

9.2.2 Liu and West Filter

The Liu and West particle filter (LW) is an extension of the auxiliary particle filter, adjusted to allow sequential parameter estimation. The auxiliary particle filter itself was developed to better model the tails of a distribution. The general algorithm for the Liu and West filter is:

1. Resample the latent state variables and parameters using importance weights (for t > 1 these are the weights generated in step 4).

2. Propagate the parameters by sampling from a normal distribution with mean m(θ_t^{(i)}) and variance V (a standard choice of m(·) and V is sketched after this list).

3. Propagate the latent state variables by sampling from the state transition distribution p(x_{t+1} | x_t^{(i)}, θ_{t+1}^{(i)}).

4. Compute the weights to be used in step 1; these are generated by

w_{t+1}^{(i)} ∝ w_t^{(i)} P(y_{t+1} | x_{t+1}^{(i)}, θ_{t+1}^{(i)}) / P(y_{t+1} | g(x_t^{(i)}), m(θ_t^{(i)}))

5. Repeat from step 1.
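The choice of m(·) and V in step 2 is not spelled out in the algorithm above. A standard choice, from Liu and West's original kernel-shrinkage construction, shrinks each parameter particle towards the Monte Carlo mean with a discount factor; the sketch below shows that propagation step only, assuming this is the variant used, with an illustrative discount value.

    import numpy as np

    def liu_west_propagate(theta, delta=0.98, rng=None):
        """Step 2 of the LW filter: kernel-shrinkage propagation of parameter particles.

        theta : array of shape (N, p) holding N particles of p parameters.
        delta : discount factor in (0, 1]; a = (3*delta - 1)/(2*delta), h^2 = 1 - a^2.
        """
        rng = np.random.default_rng() if rng is None else rng
        a = (3.0 * delta - 1.0) / (2.0 * delta)
        h2 = 1.0 - a ** 2
        theta_bar = theta.mean(axis=0)
        # Shrink each particle towards the Monte Carlo mean ...
        m = a * theta + (1.0 - a) * theta_bar
        # ... and add kernel noise with covariance h^2 * Var(theta).
        V = h2 * np.cov(theta, rowvar=False)
        noise = rng.multivariate_normal(np.zeros(theta.shape[1]), V, size=theta.shape[0])
        return m + noise

    # Illustrative call with 5000 particles of 4 parameters (alpha, beta, tau, sigma).
    rng = np.random.default_rng(0)
    particles = rng.normal(size=(5000, 4))
    new_particles = liu_west_propagate(particles, delta=0.98, rng=rng)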



9.2.3 Particle Learning Filter

Particle learning (PL) uses the idea behind the Kalman filter: a state sufficient statistic and parameter sufficient statistics are updated at each time step of the particle filter. This allows us to sample from an updated parameter distribution and to resample the parameter particles from a distribution determined by both the state sufficient statistics and the previous latent states. Lopes and Tsay (2011) showed that in general cases the particle learning filter can converge faster than the Liu and West filter; however, the particle learning method becomes more difficult to use for non-standard state space models.

The general particle learning algorithm is:

1. Resample the parameters (θ), the parameter statistics (s_t) and the state sufficient statistics (s^x_t) using weights proportional to the predictive density P(y_t | s_{t-1}, θ).

2. Sample the latent state variables (x_t) from the distribution P(x_t | s^x_{t-1}, θ, y_t).

3. Update the parameter statistics: s_t = (s_{t-1}, x_t, y_t).

4. Sample the parameters from p(θ | s_t).

5. Update the state sufficient statistics: s^x_t = (s^x_{t-1}, θ, y_t).

6. Repeat from step 1.



9.3 Models

Integrated variation is the unobserved state (V_t) that we estimate using the following models:

Model A: Local level model

Y_{t+1} = V_{t+1} + σ_y ε_t    (18)

V_{t+1} = V_t + τ_v η_t    (19)

The local level model is able to show whether the variation measures are approximately the true measures of integrated variation. The local level model is used in Carvalho et al. (2010) for realised volatility filtering.

Model B -AR(1) model:

Y_{t+1} = V_{t+1} + σ_y ε_t    (20)

V_{t+1} = α + β V_t + τ_v η_t    (21)

The autoregressive model is one of the earlier models used in stochastic volatility modelling to determine the latent volatility state, and as such it also appears in this paper for model comparison.

Model C -SV-AR(1) model:

Y_{t+1} = V_{t+1} + σ_y V_{t+1} ε_t    (22)

V_{t+1} = α + β V_t + τ_v η_t    (23)

The stochastic volatility autoregressive model is an extension of the autoregressive model: in the observation equation the observation noise is scaled by the size of the latent state. This allows us to account for the leverage effect of volatility, that is, the noise in the observation becomes larger in times of greater volatility and smaller when the latent volatility is smaller.

Model D- Heston’s Square Root Stochastic Volatility Model

Y_{t+1} = V_{t+1} + σ_y V_{t+1} ε_t    (24)

V_{t+1} = α + β V_t + τ_v √V_t η_t    (25)

Model E- Heston’s Stochastic Volatility Model

Y_{t+1} = V_{t+1} + σ_y V_{t+1} ε_t    (26)

V_{t+1} = α + β V_t + τ_v V_t η_t    (27)

Model F- Heston’s 3/2 Stochastic Volatility Model

Y_{t+1} = V_{t+1} + σ_y V_{t+1} ε_t    (28)

V_{t+1} = α + β V_t + τ_v V_t^{3/2} η_t    (29)

Models D, E and F are stochastic volatility models that appear in Christoffersen et al. (2010) [10]. The latent state equation uses the Heston stochastic volatility model; Heston showed in his 1993 paper on bond and currency options that these models represent the underlying volatility very well. By incorporating these models we have a larger variety of models to perform model selection from.
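To make the state-space structure of these models concrete, here is a minimal sketch that simulates Model B, the AR(1) model with additive observation noise. The parameter values are illustrative assumptions only, roughly of the magnitude reported later in the results, not the thesis estimates.

    import numpy as np

    def simulate_model_b(T, alpha, beta, tau, sigma, v0, rng=None):
        """Simulate V_{t+1} = alpha + beta*V_t + tau*eta_t, Y_{t+1} = V_{t+1} + sigma*eps_t."""
        rng = np.random.default_rng() if rng is None else rng
        V = np.empty(T)
        Y = np.empty(T)
        v = v0
        for t in range(T):
            v = alpha + beta * v + tau * rng.standard_normal()   # latent state transition
            V[t] = v
            Y[t] = v + sigma * rng.standard_normal()             # noisy observation
        return V, Y

    # Illustrative parameter values only.
    V, Y = simulate_model_b(T=3000, alpha=0.2, beta=0.95, tau=0.05, sigma=0.2, v0=4.0)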

9.3.1 How observation errors are distributed

According to our analysis of the QQ plots, the natural logarithm of the variance measures is either normally distributed or Student-t distributed. As such we will assume that the observation errors are either normally distributed or Student-t distributed; for simplicity we assume the state transition equations to be normally distributed.

An example of model D with normally distributed observation errors:

P(Y_{t+1} | V_{t+1}, θ) ∼ N(V_{t+1}, σ² V_{t+1}²)

P(V_{t+1} | V_t, θ) ∼ N(α + β V_t, τ² V_t)

An example of model E with Student-t distributed observation errors:

P(Y_{t+1} | V_{t+1}, θ) ∼ T(V_{t+1}, σ² V_{t+1}²)

P(V_{t+1} | V_t, θ) ∼ N(α + β V_t, τ² V_t²)

We will assume that the Student-t distribution has 10 degrees of freedom. The importance weights calculated for the particle filter are therefore Student-t densities.
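As a sketch of what the two error assumptions mean for the particle weights, the snippet below evaluates the observation density of model E under a normal and under a Student-t distribution with 10 degrees of freedom. Treating the t distribution in location-scale form is my own assumption, and the numerical values are illustrative.

    import numpy as np
    from scipy import stats

    def obs_logdensity_model_e(y, V, sigma, errors="normal"):
        """Log density of Y_{t+1} given V_{t+1} for model E: mean V, scale sigma*V."""
        scale = sigma * V
        if errors == "normal":
            return stats.norm.logpdf(y, loc=V, scale=scale)
        # Student-t observation errors with 10 degrees of freedom (location-scale form).
        return stats.t.logpdf(y, df=10, loc=V, scale=scale)

    # Illustrative evaluation for a grid of particle values of V_{t+1}.
    particles = np.linspace(3.0, 6.0, 5)
    print(obs_logdensity_model_e(4.5, particles, sigma=0.2, errors="normal"))
    print(obs_logdensity_model_e(4.5, particles, sigma=0.2, errors="t"))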



10 Results

10.1 Marginal Log Likelihoods

From table 3 we can see that the marginal log-likelihood values for all models are very close, implying that the models selected are reasonable models for comparing the various filter techniques; still, only models A to D are effective models for realised volatility, as their marginal log-likelihoods are close and fall within the range of -2500 to -2600. Further comparison of the marginal log-likelihoods suggests that model B and model C, in the case where both observation and state transition errors are normally distributed, perform best, as they have the largest marginal log-likelihood values. The Student-t distributed observation errors do not perform as well as the normally distributed errors, hence the distribution of realised volatility is not significantly affected by the tails of the distribution.

The strong performance of models B and C with normally distributed observation errors suggests that realised volatility follows an autoregressive process of order one, so realised volatility depends on the lag of integrated variation, and measurement noise is possibly a problem that we need to account for when modelling realised volatility. To further narrow down the set of models we consider the sequential Bayes factor of the two models in the next section.

For the bipower volatility of the S&P 500 we should expect the results from the realised variation to also hold, since the two series behave similarly. As with realised volatility, it seems all models can be effective in predicting the latent volatility state, as the marginal log-likelihood values are relatively close, but closer examination shows that the preferred models are models B and C, as they have the highest marginal log-likelihood values.
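For reference, the marginal log-likelihoods of the kind reported in Tables 3 to 5 can be accumulated from a particle filter as the sum of the logs of the particle-averaged one-step-ahead predictive densities. This is my reading of the predictive-likelihood formula given in the next section, not code from the thesis, and the arrays below are illustrative.

    import numpy as np

    def marginal_log_likelihood(pred_densities):
        """Accumulate log p(y_1:T) from per-particle predictive densities.

        pred_densities : array of shape (T, N); entry [t, i] is p(y_t | x_{t-1}^(i), theta^(i)).
        Each time step contributes the log of the particle average; the terms are summed.
        """
        pred_densities = np.asarray(pred_densities)
        return np.sum(np.log(pred_densities.mean(axis=1)))

    # Illustrative use with fake densities for T=3000 steps and N=5000 particles.
    rng = np.random.default_rng(0)
    fake = rng.uniform(0.1, 2.0, size=(3000, 5000))
    print(marginal_log_likelihood(fake))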



When the model is non-linear, as for models C, D, E and F, the Markov chain Monte Carlo method does not perform as well as the particle filters for filtering both realised volatility and bipower volatility. For the same number of iterations/particles we see the benefit of the importance weights.



Model      Marginal Log-Likelihood   Ranking
LW-A       -2523.520                  7
PL-A       -2474.048                  5
MCMC-A     -2648.940                  12
LW-A t     -5042.359                  30
PL-A t     -3741.110                  27
LW-B       -2473.803                  4
PL-B       -2456.685                  2
MCMC-B     -2439.831                  1
LW-B t     -3653.631                  26
PL-B t     -3585.185                  25
LW-C       -2485.466                  6
PL-C       -2468.032                  3
MCMC-C     -2962.528                  23
LW-C t     -4015.245                  29
PL-C t     -3987.007                  28
LW-D       -2547.847                  9
PL-D       -2546.481                  8
MCMC-D     -2877.018                  16
LW-D t     -2948.965                  21
PL-D t     -2950.044                  22
LW-E       -2604.841                  10
PL-E       -2611.793                  11
MCMC-E     -2977.741                  24
LW-E t     -2905.482                  18
PL-E t     -2921.141                  19
LW-F       -2728.858                  14
PL-F       -2713.701                  13
MCMC-F     -2939.352                  20
LW-F t     -2899.811                  17
PL-F t     -2870.488                  15

Table 3: Marginal Log Likelihood for Realised Volatility



Model      Marginal Log-Likelihood   Ranking
LW-A       -2592.729                  7
PL-A       -2538.630                  5
MCMC-A     -2638.574                  10
LW-A t     -5033.072                  30
PL-A t     -3634.138                  28
LW-B       -2520.915                  4
PL-B       -2512.838                  3
MCMC-B     -2502.731                  1
LW-B t     -3719.315                  29
PL-B t     -3591.816                  27
LW-C       -2560.902                  6
PL-C       -2509.616                  2
MCMC-C     -2640.777                  11
LW-C t     -3005.172                  24
PL-C t     -3116.110                  14
LW-D       -2601.530                  9
PL-D       -2594.679                  8
MCMC-D     -2703.430                  12
LW-D t     -2964.297                  23
PL-D t     -2866.141                  18
LW-E       -2767.562                  15
PL-E       -2733.110                  13
MCMC-E     -2844.013                  17
LW-E t     -2931.104                  21
PL-E t     -2900.114                  20
LW-F       -2835.598                  16
PL-F       -2933.983                  22
MCMC-F     -3102.103                  24
LW-F t     -2889.599                  19
PL-F t     -2744.111                  14

Table 4: Marginal Log Likelihood for Bipower Volatility



Model      Marginal Log-Likelihood   Ranking
LW-A       -3182.191                  12
PL-A       -2612.673                  5
MCMC-A     -2574.111                  4
LW-A t     -3274.474                  16
PL-A t     -2848.225                  6
LW-B       -2398.484                  3
PL-B       -1167.031                  2
MCMC-B     -899.107                   1
LW-B t     -3338.770                  18
PL-B t     -3294.516                  17
LW-C       -4015.245                  21
PL-C       -3353.274                  19
MCMC-C     -3221.017                  14
LW-C t     -3273.411                  15
PL-C t     -3073.110                  8
LW-D       -6896.688                  24
PL-D       -5033.670                  22
MCMC-D     -5433.774                  23
LW-D t     -3122.523                  11
PL-D t     -3014.507                  7
LW-E       -8009.858                  27
PL-E       -7539.411                  25
MCMC-E     -7644.110                  26
LW-E t     -3415.092                  20
PL-E t     -3211.144                  13
LW-F       -9517.800                  30
PL-F       -8752.774                  28
MCMC-F     -9258.114                  29
LW-F t     -3081.448                  9
PL-F t     -3101.152                  10

Table 5: Marginal Log Likelihood for Option Implied Volatility



10.2 Sequential Bayes Factor

The sequential Bayes factor allows us to compare two models at each time step by looking at the ratio of their predictive likelihoods; it is a tool for assessing the predictive power of one model over another. We have already eliminated most of the models using the marginal log-likelihoods, but the remaining likelihood values are close together, so the sequential Bayes factor lets us make a better judgment on the forecast model. It is one of the benefits of the sequential Monte Carlo/particle filter approach that we can see the fit of a model over the observation periods and hence determine whether one model performs better in different parts of the dataset; for this reason the results from the particle learning method are used for the sequential Bayes factor.

To obtain the sequential Bayes factor we first need the one-step-ahead marginal predictive likelihood of each model,

p(y_{n+1} | y_{1:n}, model_1) = (1/N) Σ_{i=1}^{N} p(y_{n+1} | θ^{(i)}, x_n^{(i)})

and then we take the ratio of the two models:

Sequential Bayes Factor = p(y_{n+1} | model_1) / p(y_{n+1} | model_2)

Which model is better is determined by how the graph behaves: when the graph is rising or greater than one, it suggests that during that time period model one is performing better than model two.
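A minimal sketch of the two formulas above is given below, assuming the per-particle predictive densities for each model are stored at every time step. The per-step ratio of the particle averages is shown; whether the thesis plots the per-step or a cumulative ratio is not stated, so this is one plausible reading.

    import numpy as np

    def sequential_bayes_factor(pred_model1, pred_model2):
        """Per-step Bayes factor of model 1 over model 2 from particle predictive densities.

        pred_modelK : array of shape (T, N); entry [t, i] is p(y_t | theta^(i), x_{t-1}^(i))
        under model K. Each step's marginal predictive likelihood is the particle average.
        """
        p1 = np.asarray(pred_model1).mean(axis=1)
        p2 = np.asarray(pred_model2).mean(axis=1)
        return p1 / p2

    # Illustrative use with fake predictive densities.
    rng = np.random.default_rng(0)
    sbf = sequential_bayes_factor(rng.uniform(0.5, 1.5, (3000, 5000)),
                                  rng.uniform(0.5, 1.5, (3000, 5000)))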

Realised Volatility

Figure 4 contains the sequential Bayes factor for model B over model C. Since the line is greater than one, it suggests that model B's predictive ability is better than model C's for almost all time periods.

Figure 4: Sequential Bayes Factor for Realised Volatility

Bipower Volatility

Figure 5 contains the sequential Bayes factor for model B over model C. We see an upward sloping line, which suggests that model B gradually performs better than model C. After 1000 observations the sequential Bayes factor is always greater than one, hence we can say that model B has better predictive power than model C.

Figure 5: Sequential Bayes Factor for Bipower Volatility

Option Implied Volatility

Figure 6 shows the sequential Bayes factor of model B over model A. It shows an upward sloping line which is always greater than one, implying that model B has better predictive power than model A.

Figure 6: Sequential Bayes Factor for Option Implied Volatility

10.3 Parameter Estimation

The tables below show the posterior estimates for the preferred model, model B, for the three different variance measures and the three different filter methods. The first row for each parameter is the posterior median and the second row is the interval given by the 2.5th and 97.5th quantiles.

        Liu and West       Particle Learning    MCMC
α       0.161              0.265                0.203
        (0.124, 0.213)     (0.220, 0.320)       (0.144, 0.263)
β       0.972              0.943                0.957
        (0.967, 0.979)     (0.929, 0.953)       (0.945, 0.970)
τ       0.048              0.071                0.053
        (0.045, 0.051)     (0.068, 0.075)       (0.044, 0.064)
σ       0.201              0.187                0.182
        (0.188, 0.214)     (0.176, 0.197)       (0.168, 0.196)

Table 6: Realised volatility: Posterior estimates of parameters
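The entries of Tables 6 to 8 are simple quantiles of the posterior draws; a minimal sketch, assuming the draws for one parameter are held in a one-dimensional array:

    import numpy as np

    def posterior_summary(draws):
        """Posterior median and the 2.5th / 97.5th percentiles of a vector of draws."""
        lo, med, hi = np.percentile(draws, [2.5, 50, 97.5])
        return med, (lo, hi)

    # Illustrative use on fake posterior draws of beta.
    rng = np.random.default_rng(0)
    print(posterior_summary(rng.normal(0.95, 0.01, size=5000)))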

The posterior parameter estimates from the particle learning and MCMC methods are closer to each other than to those of the Liu and West filter.




Figure 7: Liu and West: Latent State of Realised Volatility Model B


Figure 8: Particle Learning: Latent State of Realised Volatility Model B

At the same time, the confidence interval of the Liu and West filter is much larger than those of both the particle learning and the Markov chain Monte Carlo methods for model B.

However, for both the bipower and the option implied volatility, the Liu and West filter obtains parameter estimates closer to the Markov chain Monte Carlo ones than the particle learning filter does. As a result it is inconclusive which method produces the most consistent results, but for most parameter posterior estimates the confidence intervals overlap or are close to one another.




Figure 9: MCMC: Latent State of Realised Volatility Model B

        Liu and West       Particle Learning    MCMC
α       0.239              0.297                0.214
        (0.200, 0.277)     (0.231, 0.345)       (0.153, 0.276)
β       0.949              0.927                0.954
        (0.941, 0.956)     (0.914, 0.942)       (0.941, 0.967)
τ       0.059              0.09                 0.06
        (0.056, 0.063)     (0.081, 0.098)       (0.050, 0.071)
σ       0.171              0.182                0.184
        (0.162, 0.179)     (0.172, 0.194)       (0.170, 0.200)

Table 7: Bipower volatility: Posterior estimates of parameters


Figure 10: Liu and West: Latent State of Bipower Volatility Model B




Figure 11: Particle Learning: Latent State of Bipower Volatility Model B


Figure 12: MCMC: Latent State of Bipower Volatility Model B

        Liu and West       Particle Learning    MCMC
α       0.231              0.312                0.245
        (0.150, 0.327)     (0.231, 0.387)       (0.162, 0.323)
β       0.945              0.942                0.959
        (0.832, 1.07)      (0.929, 0.954)       (0.946, 0.973)
τ       0.066              0.055                0.058
        (0.053, 0.075)     (0.053, 0.058)       (0.055, 0.061)
σ       0.04               0.083                0.052
        (0.031, 0.051)     (0.081, 0.087)       (0.0486, 0.055)

Table 8: Option Implied volatility: Posterior estimates of parameters




Figure 13: Liu and West: Latent State of Option Volatility Model B


Figure 14: Particle Learning: Latent State of Option Volatility Model B


Figure 15: MCMC: Latent State of Option Volatility Model B



10.4 Forecast

According to our results from the marginal log-likelihood comparisons, we forecast using the preferred models identified above. The priors for the forecast are the posterior distributions estimated from the first 3000 observations of the S&P 500 variance measures. The table below reports the average absolute deviations and squared deviations of the predicted volatility measures over the period 13th August 2008 to 31st December 2008. The forecasted values are generated from particle learning draws for the next business days, so the result is a probabilistic forecast of realised, bipower and option implied volatility.

Model                       Average Absolute Deviation   Average Squared Deviation
Realised volatility         0.1140                       0.0372
Bipower volatility          0.1109                       0.0363
Option implied volatility   0.1237                       0.0354

Table 9: Forecast deviation measures
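A minimal sketch of the two deviation measures in Table 9, assuming point forecasts (e.g. the forecast medians) and the realised observations are available as arrays; the data used here are fabricated for illustration only.

    import numpy as np

    def forecast_deviations(forecast, realised):
        """Average absolute deviation and average squared deviation of a point forecast."""
        err = np.asarray(forecast) - np.asarray(realised)
        return np.mean(np.abs(err)), np.mean(err ** 2)

    # Illustrative use with fake forecasts over roughly 100 business days.
    rng = np.random.default_rng(0)
    truth = rng.uniform(0.0, 1.0, 100)
    print(forecast_deviations(truth + rng.normal(0, 0.1, 100), truth))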


Figure 16: Probabilistic Forecast of Realised Volatility

A probabilistic forecast is a better way to forecast real world events, as suggested by Gneiting (2007), since nothing is certain. In the probabilistic forecasts the upper solid line is the 97.5th percentile, the lower solid line is the 2.5th percentile and the middle solid line is the median of the predicted observation states. During the volatile period of the 2007 financial crisis the forecast range is large, spanning from 0.2 to 1.2 at the 45th period.



Figure 17: Probabilistic Forecast of Bipower Volatility


Figure 18: Probabilistic Forecast of Option Implied Volatility

The 45th period is two days after the 10th of October 2008, when news of a market crash caused panic selling of securities, pushing the realised and bipower volatility up to 1.03 at the 43rd period.

The probabilistic forecast of the option implied volatility covers all the real observations; there are no cases where the option implied volatility jumps out of the forecast range, which suggests the model is adequate. We can see that over the first 30 observations the market was still stable; however, leading up to more bad news about the global market the forecast range grew, and by the 35th observation in October the market continued to stay volatile, which is depicted in the predicted forecast range.



However, all our forecasts have a large probabilistic range, which is not a desirable attribute as it suggests inaccuracy in the forecast model used. As a result there is a possibility that a better model could improve forecast accuracy (see the discussion in the Robustness section).



11 Robustness and Analysis

To determine the best model to forecast from we applied three Bayesian filter methods to recursively solve for the integrated variation (the latent state). By basing our conclusion on multiple Bayesian filter methods we are able to pick the model that best fits our data and perform a consistent and efficient forecast of volatility; however, several parts of the filtering process can affect our conclusions, such as the priors, the number of iterations or the models.

11.1 Prior Sensitivity

As stated in the previous section, our model selection tools were the marginal log-likelihoods and the sequential Bayes factor; however, this does not necessarily mean that our results are robust. Since we are comparing marginal log-likelihoods to determine the best model, the results are highly dependent on the prior distributions used for the initial latent state and the parameters, so the priors can affect our judgment on the best model to use. Accordingly, the table below reports the marginal log-likelihoods for our two most preferred models for each variance measure when we double the variance of the priors.

Table 10 shows that by doubling the variance or scale parameters of both the parameter and the initial latent state priors we lose some fit, as the marginal log-likelihood values fall in all cases. Though not all models are shown in the table, the rankings of the top two preferred models were the same under the original and the new priors, and model B remains the preferred model as it has the highest marginal log-likelihood. The marginal log-likelihoods for the realised volatility and the bipower volatility did not change much compared to the option implied volatility.



Particle Learning Marginal Log-Likelihood values
Model        Original Priors   Doubled variance/scale prior parameters

Realised volatility
Model B      -2456.685         -2513.594
Model C      -2468.032         -2673.339

Bipower volatility
Model B      -2512.838         -2578.327
Model C      -2509.616         -2710.641

Option Implied Volatility
Model A      -2612.673         -3642.057
Model B      -1167.031         -2060.145

Table 10: Marginal Likelihood sensitivity to changes in Prior

For the option implied volatility, in contrast, the marginal log-likelihood values almost doubled when the variance/scale of the prior distributions was doubled.

11.2 Prior distribution difficulty for Option Implied

Volatility

As shown in table 10, the option implied volatility can be affected significantly by the prior distributions used, unlike the realised and bipower volatility, which are relatively stable. Similar to the exercise performed in the prior sensitivity section above, if we halve the variance and scale priors for the option implied volatility we obtain positive marginal log-likelihood values, which suggests model misspecification, as the model is very sensitive to our prior selections.

From the S&P 500 data series we see that the realised and bipower volatility are nearly identical, whereas the option implied volatility reacts to changes in realised and bipower volatility with a lag. We also noted in the data section that the option implied volatility is relatively smoother than the realised and bipower volatility, which suggests that lags of the latent states have an important impact on the option implied volatility. A more suitable model might have been a p-th order autoregressive model or a GARCH-type model, as these can account for heteroskedasticity as well as for lags of the latent state. This could be a direction for future research into the modelling of option implied volatility.

11.3 Number of iterations

Time efficiency is an advantage of particle filters over Markov chain Monte Carlo methods. We used 5000 particles (more particles would lead to more consistent results, but memory limitations prevented the use of more than 5000) for both the particle learning and the Liu and West filters over 3000 observations, and both particle filter methods take approximately 2-3 minutes to complete. With the Markov chain Monte Carlo method, 8000 iterations (of which 3000 are used as burn-in) over the same observations take approximately an hour to complete. All simulations were performed on a 3.1GHz Intel Core i3 CPU with 4GB of RAM. To compare the consistency of our results we applied multiple filter methods, and keeping the number of iterations and particles the same makes the comparison between the filter methods reliable, as the marginal log-likelihoods then show how the same number of iterations affects the convergence of each method. We can conclude that for non-linear models the Markov chain Monte Carlo method converges more slowly than the particle filter methods, whereas for linear models all three methods converge well with the number of iterations used.

11.4 Models

In this section we discuss the realised and bipower volatility models. A variety of models was investigated; however, there are limitations, since the models we looked at do not have the long memory properties that Engle suggested. Although our models do not account for some of the stylised facts of volatility, we can see from our results that the more complicated non-linear models did not perform as well as the simple local level model, and the autoregressive model of order one performs slightly better than the local level model. This is helpful as it implies that the two variance measures are good non-parametric estimates of the true volatility (integrated variance), which is consistent with the results of Andersen et al. (2007).

From the S&P 500 index we observe that the series is very volatile; nevertheless, the marginal log-likelihoods for the first 3000 observations sit around -2500 for the majority of the models and methods applied to realised volatility and bipower volatility. Together these two arguments imply that the models investigated are a valid selection; however, further improvements could be considered, such as the SV-GARCH models found in Pitt (2010), which capture most of the stylised facts of volatility presented by Engle.



12 Conclusion

In this paper we have reviewed three Bayesian filter methods: the Liu and West filter, the particle learning filter and the forward filtering backward sampling algorithm. The particle filters solve a dynamic model recursively more efficiently than the Markov chain Monte Carlo method; as reported in the robustness section, they are more time efficient. The Markov chain Monte Carlo method converges faster (in terms of iterations) than the particle filters if the model is relatively simple and linear; on the other hand, if the model is non-linear and more complicated, the particle filter has the benefits of speed and the general practicality of being adapted to another problem.

Current research in Bayesian filtering with stochastic volatility models suggests investigating stochastic volatility models via the particle filter approach to handle fat-tailed distributions, where it is possible at the same time to determine the degrees of freedom of the t-distribution, estimate the parameters and solve the system. Other academics, such as Hedibert Lopes and Carlos Carvalho (University of Chicago), are researching multivariate stochastic volatility models and how to incorporate Markov switching processes into particle filtering. The literature in the particle filter field focuses mainly on parameter estimation or on new particle filter algorithms that solve a problem of an earlier filter, such as the particle degeneracy problem, which was addressed by the auxiliary particle filter (a resample-sample filter). Currently the authors of the particle learning algorithm (Lopes, Carvalho, Johannes and Polson) are working on ways to reduce the Monte Carlo error and the sample impoverishment that particle learning has.

Particle filters were developed in 1993 by Gordon, Salmond and Smith, primarily for solving non-linear, non-Gaussian state estimation problems, and as a result have become extremely popular in engineering and machine learning. It has taken many years for academics in economics to begin applying the particle filter. The


literature on Bayesian filtering in econometrics is vast, with academics looking into incorporating particle filters to solve macroeconomic problems via dynamic stochastic general equilibrium models, such as Flury and Shephard (2008) and Fernandez-Villaverde and Rubio-Ramirez (2007), or to solve stochastic volatility models in finance. There is a bright future for research in Bayesian filtering, as there are still new particle filter algorithms to develop and new applications of the particle filter to explore.


13 Appendix

13.1 Prior Specifications

The priors used were not determined by previous sampling of the series; they are simply guesses of approximately the size and magnitude the parameters should take. Note that both alpha and beta are normally distributed, while the state and observation error parameters (tau and sigma) are inverse-gamma distributed, as this keeps them positive. As stated earlier, in the case of normally distributed observation errors we assume that the initial latent variable is also normally distributed. A small sketch of how these priors could be sampled follows the lists below.

Realised Volatility (RV)

Alpha ∼ Normal(0.1, 0.05)
Beta ∼ Normal(1, 0.1)
Tau ∼ Inverse Gamma(20, 2.2)
Sigma ∼ Inverse Gamma(20, 1.1)
x0 ∼ Normal(4, 2)

Bipower Volatility (BV)

Alpha ∼ Normal(0.1, 0.05)
Beta ∼ Normal(1, 0.1)
Tau ∼ Inverse Gamma(20, 2.2)
Sigma ∼ Inverse Gamma(20, 1.1)
x0 ∼ Normal(4, 2)

Option Implied Volatility (VIX)

Alpha ∼ Normal(0.1, 0.05)
Beta ∼ Normal(1, 0.1)
Tau ∼ Inverse Gamma(400, 220)
Sigma ∼ Inverse Gamma(400, 220)
x0 ∼ Normal(4, 2)
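As an illustration only, the following is a minimal Python sketch of how a particle cloud could be initialised from these priors. The function name, the dictionary layout, and the reading of the hyperparameters (normal priors as (mean, standard deviation); inverse-gamma priors as (shape, scale)) are assumptions made for the sketch, not the implementation used in this thesis.

```python
import numpy as np

# Reproducible random number generator for the sketch.
rng = np.random.default_rng(0)


def draw_prior_particles(n_particles, spec):
    """Draw an initial particle cloud from a prior specification.

    Assumed conventions (not taken from the thesis): normal priors are
    given as (mean, standard deviation); inverse-gamma priors are given
    as (shape, scale) and sampled as the reciprocal of a gamma draw.
    """
    alpha = rng.normal(spec["alpha"][0], spec["alpha"][1], n_particles)
    beta = rng.normal(spec["beta"][0], spec["beta"][1], n_particles)
    tau = 1.0 / rng.gamma(spec["tau"][0], 1.0 / spec["tau"][1], n_particles)
    sigma = 1.0 / rng.gamma(spec["sigma"][0], 1.0 / spec["sigma"][1], n_particles)
    x0 = rng.normal(spec["x0"][0], spec["x0"][1], n_particles)
    return {"alpha": alpha, "beta": beta, "tau": tau, "sigma": sigma, "x0": x0}


# Hyperparameters for the realised volatility (RV) model, copied from the list above.
rv_spec = {
    "alpha": (0.1, 0.05),
    "beta": (1.0, 0.1),
    "tau": (20.0, 2.2),
    "sigma": (20.0, 1.1),
    "x0": (4.0, 2.0),
}

particles = draw_prior_particles(10_000, rv_spec)
print({name: round(float(draws.mean()), 3) for name, draws in particles.items()})
```

Swapping in the BV or VIX hyperparameters from the lists above would initialise the corresponding models in the same way.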


13.2 Particle Learning Filter

Figure 19: Particle Learning: Parameter history of Realised Volatility Model B (panels: α, β, τ, σ against time).

Figure 20: Particle Learning: Parameter history of Bipower Volatility Model B (panels: α, β, τ, σ against time).


Figure 21: Particle Learning: Parameter history of Option Volatility Model B (panels: α, β, τ, σ against time).


13.3 Liu and West Filter

Figure 22: Liu and West: Parameter history of Realised Volatility Model B (panels: α, β, τ, σ against time).

Figure 23: Liu and West: Parameter history of Bipower Volatility Model B (panels: α, β, τ, σ against time).


Figure 24: Liu and West: Parameter history of Option Volatility Model B (panels: α, β, τ, σ against time).


References

[1] Aït-Sahalia, Y., and Kimmel, R. Maximum likelihood estimation of stochastic volatility models. Journal of Financial Econometrics.

[2] Andersen, T., Bollerslev, T., and Diebold, F. Roughing it up: Including jump components in the measurement, modelling and forecasting of return volatility (2007).

[3] Andrieu, C., Doucet, A., and Holenstein, R. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72, 3 (2010), 269–342.

[4] Barndorff-Nielsen, O., and Shephard, N. Power and bipower variation with stochastic volatility and jumps. Journal of Financial Econometrics 2, 1 (2004), 1–37.

[5] BBC News. Fear grips global stock market, Oct. 2008.

[6] Brownlees, C., and Gallo, G. Financial econometric analysis at ultra-high frequency: Data handling concerns. Computational Statistics and Data Analysis 51 (2006).

[7] Carlin, B., Polson, N., and Stoffer, D. A Monte Carlo approach to nonnormal and nonlinear state-space modeling. Journal of the American Statistical Association (1992), 493–500.

[8] Carter, C., and Kohn, R. On Gibbs sampling for state space models. Biometrika 81, 3 (1994), 541–553.

[9] Casella, G., and George, E. Explaining the Gibbs sampler. American Statistician (1992), 167–174.

[10] Christoffersen, P., Jacobs, K., and Mimouni, K. Volatility dynamics for the S&P500: Evidence from realized volatility, daily returns, and option prices. Review of Financial Studies 23, 8 (2010), 3141–3189.

[11] Corsi, F., Mittnik, S., Pigorsch, C., and Pigorsch, U. The volatility of realized volatility. Econometric Reviews 27 (2008), 46–78.

[12] Engle, R., and Patton, A. What good is a volatility model? NYU working paper (2001).

[13] Eraker, B., Johannes, M., and Polson, N. The impact of jumps in volatility and returns. The Journal of Finance 58, 3 (2003), 1269–1300.

[14] Frühwirth-Schnatter, S. Applied state space modelling of non-Gaussian time series using integration-based Kalman filtering. Statistics and Computing 4, 4 (1994), 259–269.

[15] Geweke, J., and Tanizaki, H. Bayesian estimation of state-space models using the Metropolis–Hastings algorithm within Gibbs sampling. Computational Statistics & Data Analysis 37, 2 (2001), 151–170.

[16] Gordon, N. J., Salmond, D. J., and Smith, A. F. M. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In Radar and Signal Processing, IEE Proceedings F (1993), vol. 140, IET, pp. 107–113.

[17] Harvey, A., Ruiz, E., and Shephard, N. Multivariate stochastic variance models. The Review of Economic Studies 61, 2 (1994), 247–264.

[18] Hore, S., Johannes, M., Lopes, H., McCulloch, R., and Polson, N. Bayesian computation in finance. Frontiers of Statistical Decision Making and Bayesian Analysis (2010).

[19] Jacquier, E., Polson, N., and Rossi, P. Bayesian analysis of stochastic volatility models with fat-tails and correlated errors. Journal of Econometrics 122, 1 (2004), 185–212.

[20] Jacquier, E., Polson, N., and Rossi, P. Bayesian analysis of stochastic volatility models. Journal of Business and Economic Statistics 12, 4 (1994).

[21] Johannes, M., and Polson, N. MCMC methods for financial econometrics. Working paper (2002).

[22] Kalman, R. A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82, 1 (1960), 35–45.

[23] Kantas, N., Doucet, A., Singh, S., and Maciejowski, J. An overview of sequential Monte Carlo methods for parameter estimation in general state-space models. In Proc. IFAC Symposium on System Identification (SYSID) (2009).

[24] Kim, S., Shephard, N., and Chib, S. Stochastic volatility: Likelihood inference and comparison with ARCH models. The Review of Economic Studies 65, 3 (1998), 361–393.

[25] Lopes, H., and Tsay, R. Particle filters and Bayesian inference in financial econometrics. Journal of Forecasting 30 (2011), 168–209.

[26] Malik, S., and Pitt, M. Modelling stochastic volatility with leverage and jumps: A simulated maximum likelihood approach via particle filtering. Available at SSRN 1763783 (2011).

[27] Maneesoonthorn, W., Martin, G., Forbes, C., and Grose, S. Probabilistic forecasts of volatility and its risk premia. Working paper, forthcoming in the Journal of Econometrics (2012).

[28] McAleer, M., and Medeiros, M. Realized volatility: A review. Econometric Reviews 27, 1–3 (2008), 10–45.

[29] Nakajima, J. Bayesian analysis of generalized autoregressive conditional heteroskedasticity and stochastic volatility: Modeling leverage, jumps and heavy-tails for financial time series. Japanese Economic Review 63, 1 (2011), 81–103.

[30] Pitt, M., and Shephard, N. Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association 94, 446 (1999), 590–599.

[31] Taylor, S. Modeling stochastic volatility: A review and comparative study. Mathematical Finance 4, 2 (1994), 183–204.

[32] Welch, G., and Bishop, G. An introduction to the Kalman filter. University of North Carolina at Chapel Hill, Chapel Hill, NC 7, 1 (1995).
