
Probabilistic Prediction

Cliff Mass

University of Washington

Uncertainty in Forecasting

• Most numerical weather prediction (NWP) today and most forecast products reflect a deterministic approach.

• This means that we do the best job we can for a single forecast and do not consider uncertainties in the model, initial conditions, or the very nature of the atmosphere.

• However, the uncertainties are usually very significant and information on such uncertainty can be very useful.

This is really ridiculous!

• The work of Lorenz (1963, 1965, 1968) demonstrated that the atmosphere is a chaotic system, in which small differences in the initialization, well within observational error, can have large impacts on the forecasts, particularly for longer forecasts.

• In a series of experiments he found that small errors in initial conditions can grow so that all deterministic forecast skill is lost at about two weeks.

A Fundamental Issue

Butterfly Effect: a small change at one place in a complex system can have large effects elsewhere

Not unlike a pinball game

Uncertainty Extends Beyond Initial Conditions

• Also uncertainty in our model physics, such as microphysics and boundary layer parameterizations.

• And further uncertainty produced by our numerical methods.

Probabilistic NWP

• To deal with forecast uncertainty, Epstein (1969) suggested stochastic-dynamic forecasting, in which forecast errors are explicitly considered during model integration.

• Essentially, uncertainty estimates are added to each term in the primitive equations.

• This stochastic method was not and still is not computationally practical.

Probabilistic-Ensemble Numerical Weather Prediction (NWP)

• Another approach, ensemble prediction, was proposed by Leith (1974), who suggested that prediction centers run a collection (ensemble) of forecasts, each starting from a different initial state.

• The variations in the resulting forecasts could be used to estimate the uncertainty of the prediction.

• But even the ensemble approach was not possible at that time due to limited computer resources.

• Became practical in the late 1980s as computer power increased.

Ensemble Prediction

• Can use ensembles to estimate the probabilities that some weather feature will occur.

• The ensemble mean is more accurate on average than any individual ensemble member.

• Forecast skill of the ensemble mean is related to the spread of the ensemble members (see the sketch below).

• When ensemble forecasts are similar, ensemble mean skill tends to be higher.

• When forecasts differ greatly, ensemble mean forecast skill tends to be less.
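A minimal sketch (not from the talk) of how ensemble output can be turned into an ensemble mean, a spread estimate, and an event probability. The grid dimensions, member count, and the 50-kt wind threshold are illustrative assumptions using synthetic data.

```python
import numpy as np

members = 21                      # assumed ensemble size
ny, nx = 100, 120                 # hypothetical grid dimensions
rng = np.random.default_rng(0)
wind_kt = rng.gamma(shape=4.0, scale=6.0, size=(members, ny, nx))  # fake 10-m wind forecasts

ens_mean = wind_kt.mean(axis=0)               # ensemble mean field
ens_spread = wind_kt.std(axis=0, ddof=1)      # spread = std. dev. of members at each grid point
prob_gt_50kt = (wind_kt > 50.0).mean(axis=0)  # fraction of members exceeding 50 kt

print(ens_mean.shape, ens_spread.max(), prob_gt_50kt.max())
```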

Deterministic Forecasting

[Phase-space schematic: a single forecast trajectory (12 h, 24 h, 36 h, 48 h forecasts) diverging from the trajectory of the 12 h, 24 h, 36 h, and 48 h observations]

A point in phase space completely describes an instantaneous state of the atmosphere (pressure, temperature, etc. at all points at one time).

The true state of the atmosphere exists as a single point in phase space that we never know exactly.

An analysis produced to run an NWP model is somewhere in a cloud of likely states. Any point in the cloud is equally likely to be the truth.

Nonlinear error growth and model deficiencies drive apart the forecast and true trajectories (i.e., chaos theory).

Ensemble Forecasting, a Stochastic Approach

[Phase-space schematic: an analysis region (analysis PDF) of likely states spreading into a 48 h forecast region (forecast PDF)]

An ensemble of likely analyses leads to an ensemble of likely forecasts.

Ensemble forecasting:

• Encompasses truth

• Reveals flow-dependent uncertainty

• Yields an objective stochastic forecast

Probability Density Functions

• Usually we fit the distribution of ensemble members with a Gaussian or other reasonably smooth theoretical distribution as a first step (see the sketch below).
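A minimal sketch of that first step, assuming a Gaussian fit to the ensemble members at a single station; the member values are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

t2_members = np.array([-1.2, 0.4, -0.8, 1.1, -2.0, 0.0, -0.5, 0.9])  # 2-m temperature members (deg C)
mu, sigma = norm.fit(t2_members)                 # maximum-likelihood Gaussian fit
prob_below_freezing = norm.cdf(0.0, loc=mu, scale=sigma)

print(f"fitted N({mu:.2f}, {sigma:.2f}^2); P(T2 < 0 C) = {prob_below_freezing:.2f}")
```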

A critical issue is the development of ensemble systems that create probabilistic guidance that is both reliable and sharp.

We Need to Create Probability Density Functions (PDFs) of Each Variable That Have These Characteristics

Elements of a Good Probability Forecast:

• Sharpness (also known as resolution) – the width of the predicted distribution should be as small as possible.

[Schematic: probability density function (PDF) for some forecast quantity, comparing a sharp PDF with a less sharp PDF]

Elements of a Good Probability Forecast

• Reliability (also known as calibration) – a probability forecast p ought to verify with relative frequency p.

– Forecasts from climatology are reliable (by definition), so calibration alone is not enough.

[Reliability Diagram]

Verification Rank Histogram (a.k.a. Talagrand Diagram): Another Measure of Reliability

Over many trials, record the verification’s position (the “rank”) among the ordered EF members.

[Schematic: EF PDF (curve) and 8 sample members (bars) vs. the true PDF (curve) and verification value (bar), as a function of cumulative precipitation (mm); example rank histograms (probability vs. verification rank, ranks 1–9) for an over-spread EF, an under-spread EF, and a reliable EF]
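A minimal sketch of building a verification rank histogram: for each case, count how many of the ordered members fall below the verifying observation. Synthetic forecasts and observations stand in for real data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_members = 5000, 8
forecasts = rng.normal(size=(n_cases, n_members))   # ensemble forecasts
obs = rng.normal(size=n_cases)                      # verifying observations

# Rank of the observation among the ordered members: 0..n_members (n_members + 1 bins)
ranks = (forecasts < obs[:, None]).sum(axis=1)
hist = np.bincount(ranks, minlength=n_members + 1) / n_cases

print(hist)   # near-flat histogram -> well-dispersed (reliable) ensemble
```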

Brier Score

$$BS = \frac{1}{M}\sum_{j=1}^{M}\left(p_{e_j} - o_j\right)^2$$

M: number of forecast/observation pairs
$p_{e_j}$: forecast probability {0.0 … 1.0}
$o_j$: observation {0.0 = no, 1.0 = yes}

BS = 0 for perfect forecasts; BS = 1 for perfectly wrong forecasts. (This is the continuous, i.e., unbinned, form.)

Brier Skill Score

The Brier Skill Score (BSS) directly examines reliability, resolution, and overall skill.

$$BSS = \frac{BS_{clim} - BS_{fcst}}{BS_{clim} - BS_{perf}} = 1 - \frac{BS_{fcst}}{BS_{clim}} \quad (\text{since } BS_{perf} = 0)$$

BSS = 1 for perfect forecasts; BSS < 0 for forecasts worse than climatology.
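A minimal sketch of the Brier Score and Brier Skill Score for a handful of made-up probability forecasts of a binary event, using the sample climatology as the reference forecast.

```python
import numpy as np

p = np.array([0.9, 0.1, 0.7, 0.2, 0.4, 0.8])   # forecast probabilities p_e
o = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 1.0])   # observations (1 = event occurred)

bs_fcst = np.mean((p - o) ** 2)                 # Brier Score of the forecasts

o_clim = o.mean()                               # sample climatology
bs_clim = np.mean((o_clim - o) ** 2)            # Brier Score of the climatological forecast

bss = 1.0 - bs_fcst / bs_clim                   # BS_perf = 0, so BSS = 1 - BS_fcst/BS_clim
print(f"BS = {bs_fcst:.3f}, BSS = {bss:.3f}")
```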

Brier Skill Score (Decomposed Form)

ADVANTAGES:
1) No need for long-term climatology
2) Can compute and visualize in a reliability diagram

$$BSS = 1 - \frac{rel_{fcst} - res_{fcst} + unc_{fcst}}{rel_{clim} - res_{clim} + unc_{clim}} = \frac{res - rel}{unc}$$

since $rel_{clim} = 0$ and $res_{clim} = 0$ (reliability, rel; resolution, res; uncertainty, unc).

Decomposed Brier Score by Discrete, Contiguous Bins

$$BS = \underbrace{\frac{1}{M}\sum_{i=1}^{I} N_i\left(p'_{e_i} - \bar{o}_i\right)^2}_{rel} \;-\; \underbrace{\frac{1}{M}\sum_{i=1}^{I} N_i\left(\bar{o}_i - \bar{o}\right)^2}_{res} \;+\; \underbrace{\bar{o}\left(1 - \bar{o}\right)}_{unc}$$

I: number of probability bins (normally 11)
$N_i$: number of data pairs in bin i
$p'_{e_i}$: binned forecast probability (0.0, 0.1, …, 1.0 for 11 bins)
$\bar{o}_i$: observed relative frequency for bin i
$\bar{o}$: sample climatology (total occurrences / total forecasts)
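A minimal sketch of the binned rel − res + unc decomposition above, using 11 probability bins; the forecast/observation pairs are synthetic and constructed to be roughly reliable.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 2000
p = rng.integers(0, 11, size=M) / 10.0          # forecasts already binned at 0.0, 0.1, ..., 1.0
o = (rng.random(M) < p).astype(float)           # synthetic (reliable) binary observations

bins = np.arange(11) / 10.0                     # I = 11 bins
o_bar = o.mean()                                # sample climatology

rel = res = 0.0
for p_i in bins:
    in_bin = (p == p_i)
    N_i = in_bin.sum()
    if N_i == 0:
        continue
    o_bar_i = o[in_bin].mean()                  # observed relative frequency in bin i
    rel += N_i * (p_i - o_bar_i) ** 2
    res += N_i * (o_bar_i - o_bar) ** 2
rel /= M
res /= M
unc = o_bar * (1.0 - o_bar)

bs = rel - res + unc
bss = (res - rel) / unc
print(f"rel={rel:.4f} res={res:.4f} unc={unc:.4f} BS={bs:.4f} BSS={bss:.3f}")
```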

Probabilistic Information Can Produce Substantial Economic and Public Protection Benefits

There is a decision theory on using probabilistic information for economic savings:

C = cost of protection

L = loss if a damaging event occurs

Decision theory says you should protect if the probability of occurrence is greater than C/L.

Critical event: surface winds > 50 kt

Cost (of protecting): $150K

Loss (if damage): $1M

C/L = 0.15 (15%)

Decision Theory Example

                    Forecast: YES          Forecast: NO
Observed: YES       Hit ($150K)            Miss ($1000K)
Observed: NO        False alarm ($150K)    Correct rejection ($0K)

Optimal threshold = 15%
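A minimal sketch of the cost-loss decision rule from the example above: protection always costs C, while not protecting risks an expected loss of p × L, so you protect whenever p > C/L.

```python
C = 150_000     # cost of protection
L = 1_000_000   # loss if a damaging event occurs unprotected

def expected_cost(p, protect):
    """Expected cost given event probability p and the protect decision."""
    return C if protect else p * L

for p in (0.05, 0.15, 0.30):
    best = "protect" if p > C / L else "do not protect"
    print(f"p={p:.2f}: protect={expected_cost(p, True):>9,.0f} "
          f"no-protect={expected_cost(p, False):>9,.0f} -> {best}")
```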

History of Probabilistic Weather Prediction (in the U.S.)

Early Forecasting Started Probabilistically!!!

• Early forecasters, faced with large gaps in their young science, understood the uncertain nature of the weather prediction process and were comfortable with a probabilistic approach to forecasting.

• Cleveland Abbe, who organized the first forecast group in the United States as part of the U.S. Signal Corps, did not use the term “forecast” for his first prediction in 1871, but rather used the term “probabilities,” resulting in him being known as “Old Probabilities” or “Old Probs” to the public.

“Ol’ Probs”

• Professor Cleveland Abbe issued the first public “Weather Synopsis and Probabilities” on February 19, 1871.

• A few years later, the term “indications” was substituted for “probabilities,” and by 1889 the term “forecasts” received official approval (Murphy 1997).

History of Probabilistic Prediction

• The first modern operational probabilistic forecasts in the United States were produced in 1965. These forecasts, for the probability of precipitation, were produced by human weather forecasters and thus were subjective probabilistic predictions.

• The first objective probabilistic forecasts were produced as part of the Model Output Statistics (MOS) system that began in 1969.

NOTE: Model Output Statistics (MOS)

• Based on simple linear regression with 12 predictors.

• $Y = a_0 + a_1 X_1 + a_2 X_2 + a_3 X_3 + a_4 X_4 + \dots$ (a regression sketch follows below)
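A minimal sketch of a MOS-style regression: a least-squares fit of an observed quantity to a handful of model-output predictors. The predictors and data are illustrative, not the operational MOS predictor set.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
X = rng.normal(size=(n, 4))                      # model-output predictors X1..X4
y = 2.0 + 1.5 * X[:, 0] - 0.7 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.5, size=n)

A = np.column_stack([np.ones(n), X])             # design matrix [1, X1, X2, X3, X4]
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)   # a0, a1, ..., a4
print("fitted coefficients:", np.round(coeffs, 2))
```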

Ensemble Prediction

• Ensemble prediction began at NCEP in the early 1990s. ECMWF rapidly joined the club.

• During the past decades the size and sophistication of the NCEP and ECMWF ensemble systems have grown considerably, with the medium-range global ensemble system becoming an integral tool for many forecasters.

• Also during this period, NCEP has constructed a higher resolution, short-range ensemble system (SREF) that uses breeding to create initial condition variations.

Example: NCEP Global Ensemble System

• Begun in 1993 with the MRF (now GFS).

• First tried “lagged” ensembles as the basis, using runs of various initializations verifying at the same time.

• Then used the “breeding” method to find perturbations to the initial conditions of each ensemble member.

• Breeding adds random perturbations to an initial state, lets them grow, then reduces their amplitude down to a small level, lets them grow again, etc. (see the toy sketch below).

• Gives an idea of what type of perturbations are growing rapidly in the period BEFORE the forecast.

• Does not include physics uncertainty.

• Now replaced by the Ensemble Transform Filter approach.
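A toy sketch (not NCEP’s implementation) of the breeding cycle just described: add a small random perturbation, let the model grow it over a forecast cycle, rescale it back to a fixed small amplitude, and repeat. The Lorenz (1963) system stands in for the forecast model.

```python
import numpy as np

def lorenz_step(state, dt=0.01, steps=200):
    """Advance the Lorenz (1963) system one 'forecast cycle' with simple Euler steps."""
    x, y, z = state
    for _ in range(steps):
        dx = 10.0 * (y - x)
        dy = x * (28.0 - z) - y
        dz = x * y - (8.0 / 3.0) * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return np.array([x, y, z])

rng = np.random.default_rng(4)
control = np.array([1.0, 1.0, 20.0])
amplitude = 0.01
pert = amplitude * rng.standard_normal(3)              # initial random perturbation

for cycle in range(20):
    next_control = lorenz_step(control)
    grown = lorenz_step(control + pert) - next_control  # perturbation after one cycle
    pert = amplitude * grown / np.linalg.norm(grown)    # rescale back to small amplitude
    control = next_control                               # advance the control state

print("bred vector (points toward fast-growing errors):", np.round(pert, 4))
```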

NCEP Global Ensemble

• 20 members at 00, 06, 12, and 18 UTC, plus two control runs for each cycle

• 28 levels

• T190 resolution (roughly 80 km)

• Out to 384 hours

• Uses stochastic physics to get some physics diversity

ECMWF Global Ensemble

• 50 members and 1 control

• 60 levels

• T399 (roughly 40 km) through 240 hours, T255 afterwards

• Singular vector approach to creating perturbations

• Stochastic physics

Several Nations Have Global Ensembles Too!

• China, Canada, Japan and others!

• And there are combinations of global ensembles like:

– TIGGE: THORPEX Interactive Grand Global Ensemble, from ten national NWP centers

– NAEFS: North American Ensemble Forecasting System, combining the U.S. and Canadian global ensembles

Popular Ensemble-Based Products

Spaghetti Diagram

Ensemble Mean

Ensemble Spread Chart

Global Forecast System (GFS) Ensemble: http://www.cdc.noaa.gov/map/images/ens/ens.html

“best guess” = high-resolution control forecast or ensemble mean

ensemble spread = standard deviation of the members at each grid point

Shows where “best guess” can be trusted (i.e., areas of low or high predictability)

Details unpredictable aspects of waves: amplitude vs. phase


Meteograms Versus “Plume Plots”

[Figure: a current deterministic meteogram, and an ensemble plume plot of 1000/500 hPa geopotential thickness [m] at Yokosuka, initial DTG 00Z 28 JAN 1999, forecast days 0–10]

FNMOC Ensemble Forecast System (EFS): https://www.fnmoc.navy.mil/efs/efs.html

• Data range = meteogram-type trace of each ensemble member’s raw output.

• Excellent tool for point forecasting, if calibrated.

• Can easily (and should) calibrate for model bias; calibrating for ensemble spread problems is difficult.

• Must use box & whisker or confidence interval plots for large ensembles.

Box and Whisker Plots

http://www.weatheroffice.gc.ca/ensemble/index_naefs_e.html


AFWA Forecast Multimeteogram

[Figure: JME cycle 11 Nov 06, 18Z; RWY 100/280; Misawa AB, Japan; 15 km resolution. Wind speed (kt) and wind direction traces at valid times 11/18 through 14/06 UTC, showing the ensemble mean, extreme min/max, and a gray-shaded 90% confidence interval (CI)]

Hurricane Track Forecast & Potential

Ensemble-Based Probabilities

Postage Stamp Plots

[Figure: SLP and winds from each ensemble member (1: cent, 2: eta, 3: ukmo, 4: tcwb, 5: ngps, 6: cmcg, 7: avn, 8: eta*, 9: ukmo*, 10: tcwb*, 11: ngps*, 12: cmcg*, 13: avn*) plus the verification]

• Reveals high uncertainty in storm track and intensity.

• Indicates low probability of a Puget Sound wind event.

A Number of Nations Are Experimenting with Higher-Resolution Ensembles

MOGREPS (UK Met Office)

– 24 km resolution

– Uses ETKF for diversity (a breeding-like approach)

– Stochastic physics

NCEP Short-Range Ensembles (SREF)

• Resolution of 32 km

• Out to 87 h, twice a day (09 and 21 UTC initializations)

• Uses both initial condition uncertainty (breeding) and physics uncertainty.

• Uses the Eta and Regional Spectral Models, and recently the WRF model (21 total members).

SREF Current System

Model     Res (km)  Levels  Members  Cloud Physics  Convection
RSM-SAS   45        28      Ctl,n,p  GFS physics    Simplified Arakawa-Schubert
RSM-RAS   45        28      n,p      GFS physics    Relaxed Arakawa-Schubert
Eta-BMJ   32        60      Ctl,n,p  Op Ferrier     Betts-Miller-Janjic
Eta-SAT   32        60      n,p      Op Ferrier     BMJ with moist profile
Eta-KF    32        60      Ctl,n,p  Op Ferrier     Kain-Fritsch
Eta-KFD   32        60      n,p      Op Ferrier     Kain-Fritsch with enhanced detrainment

PLUS:

• NMM-WRF control and 1 perturbation pair

• ARW-WRF control and 1 perturbation pair

The UW Ensemble System

• Perhaps the highest-resolution operational ensemble systems are run at the University of Washington.

• UWME: 8 members at 36 and 12 km.

• UW EnKF system: 60 members at 36 and 4 km.

Calibration (Post-Processing) of Ensembles Is Essential

Calibration of Mesoscale Ensemble Systems: The Problem

• The component models of virtually all ensemble systems have systematic biases that substantially degrade the resulting probabilistic forecasts (see the bias-correction sketch below).

• Since different models or runs have different systematic biases, this produces forecast variance that DOES NOT represent true forecast uncertainty.

• Systematic bias reduces sharpness and degrades reliability.

• Also, most ensemble systems produce forecasts that are underdispersive. Not enough variability!
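A minimal sketch of per-member systematic bias correction: estimate each member's mean forecast error over a training period and subtract it from new forecasts. The training length and numbers are made up for illustration, not the UW configuration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_days, n_members = 30, 8
true_biases = rng.normal(0.0, 1.5, size=n_members)          # each member's hidden bias (deg C)
obs = rng.normal(12.0, 4.0, size=n_days)                     # training-period observations
fcst = obs[:, None] + true_biases + rng.normal(0, 1, (n_days, n_members))

bias = (fcst - obs[:, None]).mean(axis=0)                    # estimated bias per member

new_fcst = 10.0 + true_biases + rng.normal(0, 1, n_members)  # a new raw ensemble forecast
corrected = new_fcst - bias                                  # bias-corrected members
print("estimated member biases:", np.round(bias, 2))
print("corrected ensemble mean:", corrected.mean().round(2))
```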

Example of Bias Correction for UW Ensemble System

[Figure: average RMSE (°C) and (shaded) average bias of uncorrected and bias-corrected 2-m temperature (T2) at 12 h, 24 h, 36 h, and 48 h]

[Figure: skill (Brier Skill Score, BSS) for probability of T2 < 0°C — UW Basic Ensemble and UW Enhanced Ensemble, each with and without bias correction]

The Next Step: Bayesian Model Averaging

• Although bias correction is useful, it is possible to do more:

– Optimize the variance of the forecast distributions.

– Weight the various ensemble members using their previous performance.

– An effective way to do this is through Bayesian Model Averaging (BMA).

Bayesian Model Averaging

• Assumes a Gaussian (or other) PDF for each ensemble member.

• Assumes the variance of each member is the same (in the current version).

• Includes a simple bias correction for each member.

• Weights each member by its performance during a training period (we are using 25 days).

• Adds the PDFs from each member to get a total PDF (see the toy sketch below).
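A toy sketch of the BMA combination described above: each (bias-corrected) member contributes a Gaussian kernel with a common variance, members are weighted, and the weighted kernels sum to the forecast PDF. The weights, kernel width, and member values are illustrative, not trained values.

```python
import numpy as np
from scipy.stats import norm

members = np.array([1.8, 2.4, 0.9, 3.1, 2.0])      # bias-corrected member forecasts (deg C)
weights = np.array([0.30, 0.25, 0.10, 0.15, 0.20])  # BMA weights from a training period (sum to 1)
sigma = 1.2                                          # common kernel std. dev. for all members

x = np.linspace(-3, 8, 221)
bma_pdf = sum(w * norm.pdf(x, loc=m, scale=sigma) for w, m in zip(weights, members))

prob_below_freezing = sum(w * norm.cdf(0.0, loc=m, scale=sigma)
                          for w, m in zip(weights, members))
print(f"P(T2 < 0 C) from the BMA mixture = {prob_below_freezing:.3f}")
```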

Application of BMA: Max 2-m Temperature (all stations in the 12-km domain)

Being Able to Create Reliable and Sharp Probabilistic Information is Only Half the Problem!

Even more difficult will be communication and getting people and industries to use it.

Deterministic Nature?

• People seem to prefer deterministic products: “tell me what is going to happen”

• People complain they find probabilistic information confusing. Many don’t understand POP (probability of precipitation).

• Media and internet not moving forward very quickly on this.

National Weather Service Icons are not effective in communicating probabilities

And a “slight” chance of freezing drizzle reminds one of a trip to Antarctica.

The commercial sector is no better (Weather.com).

A great deal of research and development is required to develop effective approaches for communicating probabilistic forecasts that will not overwhelm people and will allow them to get value out of them.