
A Regression Model for Ensemble Forecasts

David Unger, Climate Prediction Center

Summary

• A linear regression model can be designed specifically for ensemble prediction systems.

• It is best applied to direct model forecasts of the element in question.

• Ensemble regression is easy to implement and calibrate.

• This talk summarizes how it works.

Ensemble Forecasting

The ensemble forecasting approach is based on the following beliefs:
1) Individual solutions represent possible outcomes.
2) Each ensemble member is equally likely to best represent the observation.
3) The ensemble set behaves as a randomly selected sample from the expected distribution of observations.

6-10 Day Mean 500-hPa Heights

THEORY

Conventions

The Ensemble Regression Model: Assumptions

A schematic drawing of an ensemble regression line (Forecasts vs. Observations).

An individual case (Forecasts vs. Potential Observations): 5 potential solutions identified, one actual observation (ovals), and four others that “could” happen, each with a 20% chance. Red indicates the best (closest) member.

Ensemble Regression Principal Assumptions

• Statistics are gathered from the one actual observation.

• The math is applied with the assumption that each ensemble member could also be a solution.

How is it possible to derive?

• The “ensemble” (best-member) regression equation is the same as the regression on the ensemble mean.

• Its residual errors are (usually) much smaller.

What does it mean in English?

• Derive a regression equation relating the ensemble mean and the observation.

• Apply this equation to each individual member.

• Apply an error estimate to each individual regression-corrected forecast.

• This looks a lot like the “Gaussian Kernel” approach (kernel dressing); see the sketch below.
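A minimal sketch of this recipe in Python, assuming precomputed ensemble-mean regression coefficients a, b and a residual standard error sigma_e (the function and variable names are illustrative, not from the talk):

```python
import numpy as np

def ensemble_regression_pdf(members, a, b, sigma_e, x_grid):
    """Kernel-dressed forecast PDF from ensemble regression.

    members : raw ensemble forecasts for one case
    a, b    : intercept/slope of the regression of the observation on the
              ensemble MEAN, applied here to each individual member
    sigma_e : residual standard error, used as the Gaussian kernel width
    x_grid  : points at which to evaluate the forecast PDF
    """
    corrected = a + b * np.asarray(members, dtype=float)   # corrected members
    z = (x_grid[:, None] - corrected[None, :]) / sigma_e
    kernels = np.exp(-0.5 * z**2) / (sigma_e * np.sqrt(2.0 * np.pi))
    return kernels.mean(axis=1)                            # equal-weight average

# Example with made-up numbers
x = np.linspace(20.0, 32.0, 241)
pdf = ensemble_regression_pdf([25.1, 26.0, 26.4, 27.2, 28.0],
                              a=1.5, b=0.95, sigma_e=0.8, x_grid=x)
```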

Regression with error estimates applied

Derivation

• The regression is computed from statistics similar to those needed for standard linear regression, with only two additional array elements related to the ensemble size and spread (see the bookkeeping sketch below).
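As a rough sketch of that bookkeeping (the exact extra accumulators used at CPC are not given here, so the ensemble-size and spread terms below are assumed placeholders):

```python
import numpy as np

# Running sums for the regression of the observation on the ensemble mean,
# plus two ensemble-related accumulators (size and spread) -- assumed form.
sums = dict(n=0, sx=0.0, sy=0.0, sxx=0.0, sxy=0.0, n_members=0, spread=0.0)

def accumulate(sums, members, obs):
    xbar = float(np.mean(members))            # ensemble mean is the predictor
    sums["n"]   += 1
    sums["sx"]  += xbar
    sums["sy"]  += obs
    sums["sxx"] += xbar * xbar
    sums["sxy"] += xbar * obs
    sums["n_members"] += len(members)                      # ensemble size
    sums["spread"]    += float(np.std(members, ddof=1))    # per-case spread

def solve(sums):
    n = sums["n"]
    mx, my = sums["sx"] / n, sums["sy"] / n
    b = (sums["sxy"] - n * mx * my) / (sums["sxx"] - n * mx * mx)   # slope
    a = my - b * mx                                                 # intercept
    return a, b, sums["spread"] / n          # mean ensemble spread for kernels
```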

Multiple linear regression

• The theory (applying the ensemble-mean equation to individual members) also applies to multiple linear regression PROVIDED all predictors are linear. (Inclusion of binary predictors, interaction predictors, etc. will not be theoretically correct.)

• Ensemble regression may be easier to apply to the MOS forecasts in a second step (derive equations, apply them to get a series of forecasts, and then do a second-step processing of those forecasts).

CPC PRODUCTS BASED ON ENSEMBLE REGRESSION

NAEFS

• Combines GEFS and Canadian ensembles.

• Bias corrected by EMC (6-hourly).

• 2-meter temperatures processed by CPC into probabilities of above-, near-, and below-normal categories (5-day means); a sketch of this step follows.
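As an illustration of that last step, the kernel-dressed forecast can be integrated over the category boundaries; the tercile limits, kernel width, and member values below are placeholders, not CPC's operational settings:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def category_probs(corrected_members, sigma, lower, upper):
    """Below/near/above-normal probabilities from equal-weight Gaussian
    kernels centered on the regression-corrected members."""
    below = near = above = 0.0
    for m in corrected_members:
        p_lo = phi((lower - m) / sigma)     # P(obs below the lower tercile)
        p_hi = phi((upper - m) / sigma)     # P(obs below the upper tercile)
        below += p_lo
        near  += p_hi - p_lo
        above += 1.0 - p_hi
    n = len(corrected_members)
    return below / n, near / n, above / n

# Standardized-temperature terciles of a normal climatology sit near +/- 0.431
print(category_probs([0.2, 0.5, 0.9, -0.1], sigma=0.6, lower=-0.431, upper=0.431))
```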

NAEFS Kernel Density Example

Kernel density plot: Probability Density vs. Standardized Temperature (Z).

Long Lead Consolidation: Niño 3.4 SST Forecasts; Seasonal Forecast Consolidation

NAEFS PERFORMANCE

Reliability diagrams: 6-10 Day Forecast Reliability; 8-14 Day Forecast Reliability

NAEFS Performance

Official Forecast vs. NAEFS Guidance

CALIBRATION

Climate Forecast System Version 2 (CFSv2)

• 4 runs per day, 1 every 6 hours.

• Lagged ensemble: an ensemble formed from model forecasts from different initial times, all valid for the same target period.

• Hindcast data available only every 5th day from 1982 to present.

• Example forecast from Jan 26, 2010.

Forecast Situation

• El Niño conditions were observed in early 2010.

• CFS was the first to warn of a La Niña.

Calibration

• Most models have too little spread (overconfident). This is compensated for by wide kernels.

• If the mean ensemble spread is too large, adjustments must be made.
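A minimal sketch of one plausible form of such a spread (K-factor) adjustment, discussed in the following slides: scale the regression-corrected members' departures from their mean by K and shrink the Gaussian kernel so the total forecast variance is preserved. The function name, the variance-preserving form, and the cap on K are assumptions for illustration, not the operational CPC formulation.

```python
import numpy as np

def k_adjusted_members_and_kernel(members, a, b, sigma_e, k):
    """Scale member departures by K and adjust the kernel width.

    members : raw ensemble forecasts for one case
    a, b    : ensemble-mean regression intercept/slope
    sigma_e : residual standard error (kernel width at K = 1)
    k       : spread factor; K = 1 leaves the ensemble regression unaltered
    """
    corrected = a + b * np.asarray(members, dtype=float)
    center = corrected.mean()
    spread2 = corrected.var(ddof=1)                  # corrected-member variance
    new_members = center + k * (corrected - center)  # widen or narrow the spread
    # Hold total variance ~ spread^2 + kernel^2 fixed; the kernel variance
    # hits zero at some maximum K ("near max" in the slides).
    kernel_var = sigma_e**2 + (1.0 - k**2) * spread2
    if kernel_var <= 0.0:
        raise ValueError("K exceeds the maximum for this ensemble spread")
    return new_members, float(np.sqrt(kernel_var))
```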

Spread Calibration

Spread calibration plot: Density vs. SST (°C). Red = regression on the ensemble mean (standard regression); Green = individual members; Blue = combined envelope.

CFSv2 Niño 3.4 spread calibration panels: K = 0.2, 0.4, 0.6, 0.8; K = 1.0 (unaltered ensemble regression); K = 1.2, 1.4; and K = 1.6 (near maximum). Each panel plots Probability Density against SST (°C); Red = ensemble mean, Blue = kernel envelope, Green = individual members; the original forecast is compared with the regression-modified forecast.

Spread vs. Skill

Adjustments

An information tidbit

• Generate N values taken randomly from a Gaussian distributed variable. Label them as the ensemble forecasts. N < 20.

• Take another value randomly from that same distribution and label it the observation.

• Do an ensemble regression on many such cases (but not so many that R = 0).

• Question: What happens?

Answer:

• Maintains a fixed ratio (on the average).
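A sketch of that experiment in Python; the slide does not say which ratio is meant, so the diagnostic printed below (corrected-member spread relative to the residual standard error, averaged over trials) is one plausible reading:

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_cases, n_trials = 10, 60, 2000   # N < 20; modest sample per trial

ratios = []
for _ in range(n_trials):
    # "Forecasts" and "observations" drawn from the SAME Gaussian: no real skill.
    members = rng.standard_normal((n_cases, n_members))
    obs = rng.standard_normal(n_cases)

    xbar = members.mean(axis=1)                        # ensemble-mean predictor
    b = np.cov(xbar, obs, ddof=1)[0, 1] / xbar.var(ddof=1)
    a = obs.mean() - b * xbar.mean()
    resid_std = (obs - (a + b * xbar)).std(ddof=1)     # kernel-width estimate

    corrected_spread = (abs(b) * members.std(axis=1, ddof=1)).mean()
    ratios.append(corrected_spread / resid_std)

print("average corrected-spread / residual-error ratio:", np.mean(ratios))
```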

Inflation

Unaltered Ensemble Regression, K = 1.0: very close to the maximum K for a 4-member ensemble.

Probability Density vs. SST (°C). Red = ensemble mean; Blue = kernel envelope; Green = individual members.

WEIGHTING OF ENSEMBLES

Weighting

Weighting (illustration): Two forecasts. Red = GFS hi-res ensemble mean with its standard regression error distribution; Blue = GFS ensembles (GEFS). The “best” forecast in this case is the one with the highest PDF.

GEFS is more likely to have the best member if Obs < 26.8 °C; otherwise the GFS hi-res is better.

Weighting (Continued)

• Group ensembles into sets of equal skill (GEFS, Canadian ensembles, ECMWF ensembles, hi-res GFS, hi-res ECMWF, etc.).

• Pass 1: Calculate the PDFs separately.

• Pass 2: Choose the highest PDF as best; keep track of the percentages.

• Pass 3: Enter WEIGHTED ensembles into an ensemble regression, with weights = P(Best)/N.

• An adaptive regression can do this in real time. (A sketch of the passes follows.)
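A rough sketch of passes 1 and 2 for two hypothetical ensemble groups (the data, group names, and kernel widths are made up; pass 3 simply feeds the members back into the ensemble regression with weights P(Best)/N):

```python
import numpy as np

def kernel_pdf_at(obs, corrected_members, sigma):
    """Equal-weight Gaussian-kernel PDF of one group, evaluated at the obs."""
    z = (obs - np.asarray(corrected_members, dtype=float)) / sigma
    return float(np.mean(np.exp(-0.5 * z**2)) / (sigma * np.sqrt(2.0 * np.pi)))

def group_weights(cases):
    """Pass 1: evaluate each group's PDF at the verifying observation.
    Pass 2: count how often each group scores highest; return P(Best)."""
    wins = {}
    for obs, groups in cases:
        scores = {g: kernel_pdf_at(obs, m, s) for g, (m, s) in groups.items()}
        best = max(scores, key=scores.get)
        wins[best] = wins.get(best, 0) + 1
    total = sum(wins.values())
    return {g: wins.get(g, 0) / total for g in cases[0][1]}

# Each case: (observation, {group: (regression-corrected members, kernel width)})
cases = [
    (26.5, {"GEFS": ([26.1, 26.7, 27.0], 0.5), "hires": ([25.9], 0.6)}),
    (27.2, {"GEFS": ([26.4, 26.9, 27.3], 0.5), "hires": ([27.1], 0.6)}),
]
print(group_weights(cases))   # pass 3 would use weights = P(Best) / N per group
```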

Weighted Ensemble CFSv2: Niño 3.4 SSTs, Lead 6 Months

Ensemble Group 1 – Jan 26, 2010, for August 2010. Wgt: .36

Ensemble Group 2 – Jan 21, 2010, for August 2010. Wgt: .36

Ensemble Group 4 – Jan 16, 2010, for August 2010. Wgt: .28

Conclusion

• It is theoretically sound to derive an equation from the ensemble mean and apply it to individual members.

• An ensemble regression forecast together with its error estimates resembles Gaussian kernel smoothing except members are first processed by the ensemble mean-based regression equation.

• Additional control can be achieved by adjusting the spread (K-factor). This capability is required for the case where the ensemble spread is too high.

• Ensemble regression does not require equally weighted members, only an estimate of the probability that each member will be closest to the observation.

• Weighting coefficients can be derived from the PDFs from component models in relation to the observations.

• The system delivers reliable probabilistic forecasts that are competitive in skill with manual forecasts (better in reliability).