Forecast verification - ECMWF Confluence Wiki


Page 1: Forecast verification - ECMWF Confluence Wiki

© ECMWF February 12, 2016

Forecast verification ….. a few comments

Anna Ghelli

[email protected]

Page 2

October 29, 2014

Forecast quality versus forecast value

A forecast has high QUALITY if it predicts the observed conditions well according to some objective or subjective criteria.

A forecast has VALUE if it helps the user to make a better decision.

Quality but no value (e.g. a highly accurate forecast that no user's decision depends on)

Value but no quality (e.g. a forecast with a known bias that users can correct for when deciding)

Page 3

Verification goals and process

What are our goals with forecast evaluation?
- Evaluate usefulness of forecasts: in general? for specific users?
- Improve the ensemble and modeling system
- Track changes in forecast performance over time

PROCESS: start by determining what questions we want to answer.

Page 4

Impact forecasts

Emergency management, roads, energy, tourism, air travel, agriculture, sports, floods

Courtesy Beth Ebert

Page 5

User-relevant verification - Aviation

Flight time error (FTE) = flight_time_obs - flight_time_fcst

- Accurate measure of wind forecast accuracy, directly relevant to airlines
- Calculated using the track that the aircraft actually took
- Uses AMDAR observations from real flights rather than model analyses or radiosondes

[Figure: AMDAR data coverage, 1-7 Feb 2010] Courtesy Phil Gill
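The FTE idea above can be sketched as a small calculation. This is a hedged illustration, not the Met Office's actual procedure: the `flight_time` helper, the leg data and the wind values are all invented here, and the flight time is recomputed along the observed track with forecast winds swapped in.

```python
# Hypothetical sketch of the flight time error (FTE) calculation:
# FTE = observed flight time minus the flight time recomputed with
# forecast winds along the actual track. Names and data are illustrative.

def flight_time(track_legs, winds):
    """Sum leg time = leg distance / ground speed, where ground speed
    is airspeed plus the along-track wind component."""
    total = 0.0
    for (dist_km, airspeed_ms), tailwind_ms in zip(track_legs, winds):
        ground_speed = airspeed_ms + tailwind_ms  # m/s
        total += (dist_km * 1000.0) / ground_speed  # seconds
    return total

# Two legs: (distance in km, true airspeed in m/s)
legs = [(800.0, 240.0), (650.0, 240.0)]
observed_winds = [30.0, -10.0]   # along-track wind from AMDAR (m/s)
forecast_winds = [20.0, -5.0]    # along-track wind from the model (m/s)

fte = flight_time(legs, observed_winds) - flight_time(legs, forecast_winds)
print(f"FTE = {fte:+.0f} s")  # positive FTE: flight took longer than forecast
```

A wind forecast that is too weak along the route shows up directly as a nonzero FTE, which is why the score is user-relevant for airlines.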

Page 6

Road surface friction = slipperiness; forecast (green) vs. observed (magenta)

Courtesy FMI

Page 7

Uncertainty in observations

As models improve, we can no longer ignore observation error!
- Remove observation bias errors where possible

Effects of random observation error on verification:
- "Noise" leads to poorer scores for deterministic forecasts
- Ensemble forecasts have poorer reliability and ROC

What can we do?
- Error bars in scatter plots
- Quantitative reference to a "gold standard"
- Correct for systematic error in observations
- Correct scores for observation error: RMSE - Ciach & Krajewski (Adv. Water Res., 1999); categorical scores - Briggs et al. (MWR, 2005), Bowler (MWR, 2006)
- Multiple observation sources

Courtesy Beth Ebert
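The RMSE correction cited above can be sketched under a simplifying assumption: if random observation errors are independent of forecast errors with variance sigma_o^2, the measured MSE against observations is approximately the MSE against truth plus sigma_o^2. The function name and numbers below are illustrative, not from Ciach & Krajewski's paper.

```python
# Hedged sketch of the error-separation idea: with independent random
# observation error of variance sigma_obs**2,
#   MSE(forecast, observation) ~= MSE(forecast, truth) + sigma_obs**2,
# so an estimate of the "true" RMSE subtracts the observation-error variance.
import math

def true_rmse_estimate(rmse_vs_obs, sigma_obs):
    mse_true = rmse_vs_obs**2 - sigma_obs**2
    # Clamp at zero: sampling noise can make the estimate negative.
    return math.sqrt(max(mse_true, 0.0))

print(true_rmse_estimate(2.0, 1.0))  # about 1.732, i.e. sqrt(4 - 1)
```

The clamp matters in practice: for very good forecasts the estimated true MSE can dip below zero and should be reported as zero (or flagged).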

Page 8

Model performance: HRES relative to ERA-Interim

[Figure: forecast range at which the anomaly correlation drops below 80%, for HRES and ERA-Interim, N. Hemisphere extra-tropics, and their difference]

Page 9

EXTRA

Page 10

Measure            | Attribute evaluated | Comments

Probability forecasts:
Brier score        | Accuracy            | Based on squared error
Resolution         | Resolution (resolving different categories) | Compares forecast category climatologies to overall climatology
Reliability        | Calibration         |
Skill score        | Skill               | Skill involves comparison of forecasts
Sharpness measure  | Sharpness           | Only considers distribution of forecasts
ROC                | Discrimination      | Ignores calibration
C/L Value          | Value               | Ignores calibration

Ensemble distribution:
Rank histogram     | Calibration         | Can be misleading
Spread-skill       | Calibration         | Difficult to achieve
CRPS               | Accuracy            | Squared difference between forecast and observed distributions; analogous to MAE in the limit
IGN score          | Accuracy            | Local score, rewards correct category; infinite if observed category has 0 density
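Of the measures in the table, the Brier score is the simplest to compute: the mean squared difference between forecast probabilities and binary outcomes. The data below are made up for illustration.

```python
# Brier score for probability forecasts of a binary event:
#   BS = mean((p_i - o_i)**2), with p_i the forecast probability and
#   o_i = 1 if the event occurred, 0 otherwise. Perfect score = 0.

def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

probs = [0.9, 0.1, 0.7, 0.3]     # forecast probabilities
outcomes = [1, 0, 0, 1]          # observed event (1) / non-event (0)
print(brier_score(probs, outcomes))  # -> 0.25
```

Being based on squared error, it matches the "Accuracy" attribute in the table: confident wrong forecasts (0.7 for a non-event) are penalised much more than cautious ones.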

Page 11

Traditional spatial verification

Contingency table (rows: predicted, columns: observed):

                    Observed: yes     Observed: no
Predicted: yes      hits              false alarms
Predicted: no       misses            correct negatives

[Figure: overlapping forecast and observed areas, labelled hits, misses, false alarms, correct negatives]

POD = hits / (hits + misses)

FAR = false alarms / (hits + false alarms)

RMSE = sqrt( (1/N) * sum_{i=1..N} (F_i - O_i)^2 )

TS = hits / (hits + misses + false alarms)
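The contingency-table scores above reduce to a few lines of code. The counts used here are invented for illustration; a, b, c follow the usual convention of hits, false alarms, misses.

```python
# Categorical scores from contingency-table counts:
# a = hits, b = false alarms, c = misses.

def pod(a, b, c):
    """Probability of detection: fraction of observed events forecast."""
    return a / (a + c)

def far(a, b, c):
    """False alarm ratio: fraction of forecast events that did not occur."""
    return b / (a + b)

def ts(a, b, c):
    """Threat score (critical success index)."""
    return a / (a + b + c)

a, b, c = 30, 10, 20  # hits, false alarms, misses
print(pod(a, b, c), far(a, b, c), ts(a, b, c))  # -> 0.6 0.25 0.5
```

Note that TS ignores correct negatives entirely, which is why (as the next slide argues) it collapses toward 0 for rare events.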

Page 12

Verifying rare extreme values

Categorical scores:
- Metrics should reward hits, penalise misses and false alarms
- For rare events, traditional categorical scores like TS → 0

New extremal dependence scores (with base rate p, forecast rate q, hit rate H and false alarm rate F):

EDS = (log p - log H) / (log p + log H)

SEDS = (log q - log H) / (log p + log H)

EDI = (log F - log H) / (log F + log H)

SEDI = (log F - log H - log(1 - F) + log(1 - H)) / (log F + log H + log(1 - F) + log(1 - H))

Ferro & Stephenson, Weather & Forecasting, 2011
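The four scores above translate directly into code. The rates below are invented for illustration; in practice H, F, p and q come from the contingency table (H = hits/(hits+misses), F = false alarms/(false alarms+correct negatives), etc.).

```python
# Extremal dependence scores (after Ferro & Stephenson, 2011), computed
# from hit rate H, false alarm rate F, base rate p and forecast rate q.
from math import log

def eds(p, H):
    return (log(p) - log(H)) / (log(p) + log(H))

def seds(p, q, H):
    return (log(q) - log(H)) / (log(p) + log(H))

def edi(F, H):
    return (log(F) - log(H)) / (log(F) + log(H))

def sedi(F, H):
    num = log(F) - log(H) - log(1 - F) + log(1 - H)
    den = log(F) + log(H) + log(1 - F) + log(1 - H)
    return num / den

# Illustrative rare event: base rate 5%, forecast rate 4%,
# hit rate 0.6, false alarm rate 0.1.
H, F, p, q = 0.6, 0.1, 0.05, 0.04
print(edi(F, H), sedi(F, H))
```

Unlike TS, these scores stay informative as the event becomes rarer, which is their whole point.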

Page 13

Scores

Root mean square error:
RMSE = sqrt( mean( (fc - an)^2 ) )
Measures accuracy. Range: 0 to infinity; perfect score = 0.

Bias:
BIAS = mean(FC) - mean(OBS)
Measures bias. Range: -infinity to +infinity; perfect score = 0.

Mean absolute error:
MAE = mean( |FC - OBS| )
Measures accuracy. Range: 0 to infinity; perfect score = 0.

Anomaly correlation (anomalies from climate c: A_fc = fc - c, A_an = an - c):
ACC = mean(A_fc * A_an) / sqrt( mean(A_fc^2) * mean(A_an^2) )
Measures accuracy. Range: -100% to 100%; perfect score = 100%.
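The four scores can be computed together on paired forecast/analysis series. The series and the constant climatology below are invented for illustration; a real climatology would vary with location and time of year.

```python
# RMSE, bias, MAE and anomaly correlation for paired series.
import math

fc   = [2.0, 3.0, 5.0, 4.0]   # forecasts
an   = [1.0, 3.5, 4.0, 5.0]   # verifying analyses / observations
clim = [2.0, 2.0, 2.0, 2.0]   # climatology (constant here for simplicity)

n = len(fc)
rmse = math.sqrt(sum((f - a) ** 2 for f, a in zip(fc, an)) / n)
bias = sum(fc) / n - sum(an) / n
mae  = sum(abs(f - a) for f, a in zip(fc, an)) / n

# Anomalies from climate, then their correlation.
a_fc = [f - c for f, c in zip(fc, clim)]
a_an = [a - c for a, c in zip(an, clim)]
acc = (sum(x * y for x, y in zip(a_fc, a_an)) /
       math.sqrt(sum(x * x for x in a_fc) * sum(y * y for y in a_an)))

print(rmse, bias, mae, acc)
```

Subtracting the climatology before correlating is what distinguishes ACC from a plain correlation: it stops the forecast from getting credit merely for reproducing the seasonal cycle.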