
  • Unit Roots & Forecasting
    Methods of Economic Investigation, Lecture 20

  • Last Time
    Descriptive time series processes
    Estimating with exogenous serial correlation
    Estimating with endogenous processes

  • Today's Class
    Non-stationary time series: unit roots and spurious regressions; orders of integration
    Returning to causal effects: impulse response functions; forecasting

  • Random Walk Processes
    Definition: Et[xt+1] = xt, that is, today's value of x is the best predictor of tomorrow's value. This looks very similar to our AR(1) process xt = ρxt-1 + εt, but with ρ = 1. The autocovariances of a random walk are not well defined in a technical sense, but imagine an AR(1) process with ρ → 1: we have nearly perfect autocorrelation between any two time periods. Persistence dies out so slowly that most of the variance is due to very low-frequency shocks.
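    A minimal numpy sketch of this persistence (series length, lag, and the ρ values are illustrative choices, not from the slides): as ρ approaches 1, the sample autocorrelation at a long lag stays near 1 instead of dying out.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1_000
eps = rng.standard_normal(T)

def ar1(rho, eps):
    """Simulate x_t = rho * x_{t-1} + eps_t starting from x_0 = 0."""
    x = np.zeros(len(eps))
    for t in range(1, len(eps)):
        x[t] = rho * x[t - 1] + eps[t]
    return x

for rho in (0.5, 0.9, 0.99, 1.0):   # rho = 1.0 is the random walk
    x = ar1(rho, eps)
    # sample autocorrelation at lag 50: near 0 for small rho,
    # near 1 as rho -> 1 (persistence dies out more and more slowly)
    r = np.corrcoef(x[:-50], x[50:])[0, 1]
    print(f"rho = {rho:4.2f}  lag-50 autocorr = {r:5.2f}")
```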

  • Permanence of Shocks in a Unit Root
    An innovation (a shock at time t) to a stationary AR process dies out eventually (the autocorrelation function declines to zero). A shock to a random walk is permanent.

    The variance is increasing over time: Var(xt) = Var(x0) + tσ²
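    A quick Monte Carlo check of the variance formula (the path count, horizon, and σ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, T, sigma = 100_000, 200, 1.0

# simulate many random-walk paths x_t = x_{t-1} + eps_t with x_0 = 0
shocks = sigma * rng.standard_normal((n_paths, T))
paths = shocks.cumsum(axis=1)

for t in (10, 50, 200):
    # cross-sectional variance at date t should be close to t * sigma^2
    print(f"t = {t:3d}  Var(x_t) = {paths[:, t - 1].var():7.2f}  (theory: {t * sigma**2})")
```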

  • Drifts and Trends
    Deterministic trend: yt = μ + βt + xt, where xt is some stationary process; yt is then trend stationary.

    It's easy to add a deterministic trend to a random walk as well: yt = μ + yt-1 + εt is a random walk with drift μ, and its mean grows linearly in t.
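    A short sketch contrasting the two kinds of trend (the parameter values are illustrative): removing a fitted linear trend leaves a stationary residual for the trend-stationary series, but the detrended random walk still wanders.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
t = np.arange(T)
mu, beta = 0.1, 0.1

# trend stationary: y_t = beta * t + x_t with x_t a stationary AR(1)
x = np.zeros(T)
for i in range(1, T):
    x[i] = 0.5 * x[i - 1] + rng.standard_normal()
trend_stat = beta * t + x

# random walk with drift: y_t = mu + y_{t-1} + eps_t
rw_drift = (mu + rng.standard_normal(T)).cumsum()

# detrend both by OLS on a linear time trend and compare residual variances
for name, y in [("trend stationary", trend_stat), ("rw with drift", rw_drift)]:
    resid = y - np.polyval(np.polyfit(t, y, 1), t)
    print(f"{name:16s}  detrended variance = {resid.var():8.2f}")
```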

  • Orders of Integration
    A series is integrated of order p if p differences render it stationary. If a time series is integrated and differencing once renders it stationary, then it is integrated of order 1, or I(1). If it is necessary to difference twice before a time series is stationary, then it is I(2), and so forth.
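    For example, cumulating white noise once gives an I(1) series and cumulating twice gives an I(2) series; np.diff undoes one order of integration at a time (a minimal sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
eps = rng.standard_normal(1_000)   # white noise: I(0)

i1 = eps.cumsum()                  # I(1): one difference restores stationarity
i2 = i1.cumsum()                   # I(2): two differences are needed

print(np.allclose(np.diff(i1), eps[1:]))        # True: diff of I(1) is the I(0) noise
print(np.allclose(np.diff(i2, n=2), eps[2:]))   # True: double diff of I(2) is I(0)
```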

  • Integrated Series
    If a time series has a unit root, it is said to be integrated. First differencing the time series removes the unit root. E.g. in the case of a random walk
    yt = yt-1 + ut,  ut ~ N(0, σ²)
    Δyt = ut
    the first difference is white noise, which is stationary.

    For an AR(p), a unit root means the lag polynomial equals zero at L = 1, so it factors as
    1 − φ1L − φ2L2 − ... − φpLp = (1 − L)(1 − ψ1L − ψ2L2 − ... − ψp-1Lp-1)
    and as a result first differencing also removes the unit root.
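    Equivalently, a unit root means the AR coefficients sum to one, so the lag polynomial vanishes at L = 1. A quick numpy check with hypothetical AR(2) coefficients:

```python
import numpy as np

phi1, phi2 = 1.3, -0.3                 # phi1 + phi2 = 1, so a unit root
# lag polynomial 1 - phi1*L - phi2*L^2, coefficients in increasing powers of L
poly = [1.0, -phi1, -phi2]
roots = np.roots(poly[::-1])           # np.roots expects decreasing powers
print(roots)                           # one root is exactly L = 1
print(np.polyval(poly[::-1], 1.0))     # the polynomial at L = 1 is 0
```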

  • Non-stationarity
    Non-stationarity can have important consequences for regression models and inference:
    Autoregressive coefficients are biased
    t-statistics have non-normal distributions even in large samples
    Spurious regression

  • Problem: Spurious Regression
    Imagine we now have two series generated by independent random walks.

    Suppose we regress yt on xt using OLS, that is, we estimate yt = α + βxt + εt.

    In this case, you tend to see a significant β because the low-frequency changes make it seem as if the two series are in some way associated.
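    A small Monte Carlo sketch of the problem (the sample size, replication count, and 1.96 cutoff are illustrative), using statsmodels OLS:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T, n_reps = 200, 1_000
rejections = 0

for _ in range(n_reps):
    # two completely independent random walks
    y = rng.standard_normal(T).cumsum()
    x = rng.standard_normal(T).cumsum()
    res = sm.OLS(y, sm.add_constant(x)).fit()
    if abs(res.tvalues[1]) > 1.96:     # nominal 5% two-sided test on beta
        rejections += 1

# the rejection rate comes out far above the nominal 5%
print(f"rejection rate: {rejections / n_reps:.0%}")
```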

  • Unit Root Tests
    The standard Dickey-Fuller test is appropriate for AR(1) processes. Many economic and financial time series have a more complicated dynamic structure than is captured by a simple AR(1) model. Said and Dickey (1984) augment the basic autoregressive unit root test to accommodate general ARMA(p, q) models with unknown orders p and q. This is called the augmented Dickey-Fuller (ADF) test.

  • ADF Test - 1
    The ADF test tests the null hypothesis that a time series yt is I(1) against the alternative that it is I(0), assuming that the dynamics in the data have an ARMA structure. The ADF test is based on estimating the test regression
    Δyt = β'Dt + πyt-1 + ψ1Δyt-1 + ... + ψp-1Δyt-p+1 + εt
    where Dt collects the deterministic variables (constant, trend), πyt-1 carries the potential unit root, and the lagged differences soak up other serial correlation.

  • ADF Test - 2
    To see why: start from the AR(p) model
    yt = φ1yt-1 + φ2yt-2 + ... + φpyt-p + εt

    Subtract yt-1 from both sides and define π = (φ1 + φ2 + ... + φp − 1), and we get
    Δyt = πyt-1 + ψ1Δyt-1 + ... + ψp-1Δyt-p+1 + εt

    Test π = 0 (a unit root) against the alternative π < 0 (stationarity).
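    A usage sketch with statsmodels' adfuller (the simulated data and AIC lag selection are illustrative choices): the test should fail to reject on a random walk in levels and reject strongly after differencing.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
rw = rng.standard_normal(500).cumsum()     # random walk: I(1) by construction

for name, series in [("levels", rw), ("first difference", np.diff(rw))]:
    stat, pvalue, usedlag, nobs, crit, _ = adfuller(series, autolag="AIC")
    print(f"{name:17s}  ADF stat = {stat:6.2f}  p-value = {pvalue:.3f}")
```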

  • Estimating in Time Series
    Non-stationary time series can lead to a lot of problems in econometric analysis. In order to work with time series, particularly in regression models, we should therefore transform our variables to stationary time series first. First differencing removes unit roots or trends, so difference a time series until it is I(0). Differencing too often is less of a problem, since a differenced stationary series is still stationary. A regression of one stationary variable on another is less problematic.

    Although observations may not be independent, we can expect the regression to have similar properties as with cross-sectional data.
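    Continuing the spurious-regression example above, a sketch of the fix (all values illustrative): regress first differences instead of levels.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T = 200
y = rng.standard_normal(T).cumsum()        # two independent random walks again
x = rng.standard_normal(T).cumsum()

levels = sm.OLS(y, sm.add_constant(x)).fit()
diffs = sm.OLS(np.diff(y), sm.add_constant(np.diff(x))).fit()

print(f"levels      t-stat on x: {levels.tvalues[1]:6.2f}")   # often spuriously large
print(f"differences t-stat on x: {diffs.tvalues[1]:6.2f}")    # close to 0 on average
```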

  • Impulse Response Function
    One of the most interesting things to do with an ARMA model is to form predictions of the variable given its past: we want to know Et(xt+j), and we can do inference with Vart(xt+j).

    The impulse response function is a simple way to do that. Follow the path that x takes if it is kicked by a unit shock. This is a characterization of the behavior of our models and allows us to start thinking about causes and effects.

  • Impulse Response and MA(∞)
    The MA(∞) representation is the same thing as the impulse response function: if xt = Σj θjεt-j, then the impulse response at horizon j is θj.

    The easiest way to calculate an MA(∞) representation is to simulate the impulse response function.

    The impulse response function is the same as Et(xt+j) − Et-1(xt+j).
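    A minimal sketch for a hypothetical ARMA(1,1): feed in a single unit shock, read off the path, and cross-check against statsmodels' arma2ma, which returns the MA(∞) weights.

```python
import numpy as np
from statsmodels.tsa.arima_process import arma2ma

phi, theta, horizon = 0.8, 0.4, 10     # illustrative ARMA(1,1) parameters

# simulate x_t = phi * x_{t-1} + eps_t + theta * eps_{t-1}
# with a single unit shock eps_0 = 1 and no other shocks
eps = np.zeros(horizon)
eps[0] = 1.0
irf = np.zeros(horizon)
for t in range(horizon):
    prev_x = irf[t - 1] if t > 0 else 0.0
    prev_e = eps[t - 1] if t > 0 else 0.0
    irf[t] = phi * prev_x + eps[t] + theta * prev_e

# arma2ma uses lag-polynomial conventions: ar = [1, -phi], ma = [1, theta]
psi = arma2ma(ar=[1, -phi], ma=[1, theta], lags=horizon)
print(np.allclose(irf, psi))           # True: the two are the same object
print(irf.round(3))
```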

  • Causality and Impulse Response
    We can either forecast or simulate the effect of a given shock: pick a shock time/level to simulate and try to replicate the observed data. There is an issue of whether that shock is what really happened: we know a shock happened at time t, and we check whether the observed change matches (more on this next time).

    Granger causality implies a correlation between the current value of one variable and the past values of others; it does not necessarily imply that changes in one variable cause changes in another. Use an F-test to jointly test for the significance of the lags on the explanatory variables; this in effect tests for Granger causality between these variables. The correlation can be seen visually in impulse response functions.
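    A sketch of the lag F-test using statsmodels' grangercausalitytests (the simulated data, in which x leads y by one period, is invented for illustration):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(7)
T = 500
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * x[t - 1] + rng.standard_normal()   # past x moves current y

# column order matters: this tests whether the 2nd column Granger-causes the 1st
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)      # prints F-tests for each lag
```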

  • Source: Cochrane, QJE (1994)

  • Next Time
    Estimating causality in time series
    Some additional forecasting material
    Testing for breaks
    Regression discontinuity / event study
