Diagnostic Evaluation of Mesoscale Models
Chris Davis, Barbara Brown, Randy Bullock and Daran Rife
NCAR, Boulder, Colorado, USA


What is Diagnostic Verification?
(Quantification of model error and understanding of its sources)
- Cleverly focused applications of standard statistics
- Subjective impressions from many forecasts
- Event- or feature-based evaluation
- Reruns of models with changes based on hypotheses derived from one of the above
The process involves multiple variables and disparate observations and, in general, is iterative and only sometimes conclusive. The more informative the verification results, the more efficient the subsequent work will be.

Comparison of Rainfall Forecasts
[Figure: hourly rainfall in hundredths of inches; Stage 2 gauge+radar analysis vs. NCEP WRF (Δx = 4.5 km) and NCAR WRF (Δx = 4.0 km)]

Event, Object, Entity, or Feature-Based Evaluation
- Time series
- Spatial structure
- Time and space
- Distributions of event attributes without regard to matching (the model's "climate")
- Scores and distributions of attributes of matched objects
[Schematic: five forecast (F) / observed (O) object configurations; CSI = 0 for the first four, CSI > 0 for the fifth]

Traditional Measures-Based Approach
Consider forecasts and observations of some dichotomous field on a grid:

             Obs Yes   Obs No
  Fcst Yes     YY        YN
  Fcst No      NY        NN

Critical Success Index: CSI = YY / (YY + NY + YN)
Equitable Threat Score: ETS = (YY - C) / (YY + NY + YN - C), where C is the number of hits expected by chance.
These measures are non-diagnostic and ultra-sensitive to small errors in the simulation of localized phenomena!
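The two contingency-table scores can be sketched in a few lines of Python. The function names and the standard expression for the chance-hit term C are my own additions, not from the talk:

```python
def csi(yy, yn, ny):
    """Critical Success Index: hits / (hits + false alarms + misses)."""
    return yy / (yy + yn + ny)

def ets(yy, yn, ny, nn):
    """Equitable Threat Score: CSI adjusted for hits expected by chance."""
    n = yy + yn + ny + nn                  # total grid points
    chance = (yy + yn) * (yy + ny) / n     # hits expected from random placement
    return (yy - chance) / (yy + yn + ny - chance)
```

A one-grid-cell displacement of a small, correctly shaped feature yields YY = 0 and drives both scores to zero, which is the "ultra-sensitive" behavior criticized above.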
Data, Models and Method
Study domain: United States, Rocky Mountains (west) to Appalachian Mountains (east)
Purpose: evaluate two cores of the Weather Research and Forecasting (WRF) model using object-based verification methods:
- Advanced Research WRF (ARW), 4-km grid spacing
- Nonhydrostatic Mesoscale Model (NMM), 4.5-km grid spacing
Time period: 18 April to 4 June 2005; 30-h forecasts initialized at 00 UTC from Eta initial conditions
Data: hourly accumulated precipitation from NCEP Stage IV on a 4-km grid
Method: MODE object identification and attribute definition; examine statistics of unmatched objects; perform merging and matching and compare statistics of matched objects
Reference: Kain, J. S., S. J. Weiss, M. E. Baldwin, G. W. Carbin, D. Bright, J. J. Levit, and J. A. Hart, 2005: Evaluating high-resolution configurations of the WRF model that are used to forecast severe convective weather: The 2005 SPC/NSSL Spring Experiment. 17th Conference on Numerical Weather Prediction, American Meteorological Society, Paper 2A.5.

Objects in Time
Time series of the east-west (zonal) 10-m wind component; MM5, Δx = 3.3 km, WSMR
- Amplitude, duration and timing
- Distribution of temporal changes
- Matching objects in time

Diurnal Timing of Wind Changes
[Figure: zonal wind vs. time for two cases, with sunrise and sunset marked]

Spatial Objects and Their Attributes
Attributes:
- Intensity (percentile value)
- Area (number of grid points > T)
- Centroid
- Axis angle (relative to E-W)
- Aspect ratio (W/L)
- Fractional area (A/(W x L)), where W and L are the width and length of the bounding rectangle
Steps (applied to the raw forecast):
1. Convolution (disk of radius 5 grid points)
2. Thresholding: rainfall > T (1.25 mm/h)
3. Compute geometric attributes
4. Restore precipitation values inside each object and examine their distribution (box-and-whisker plot)

Time-Space Diagrams

Object Distributions
[Figure: 22-km EH model, July-Aug. 2001]
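The convolution-threshold object-identification steps can be sketched with scipy.ndimage. The disk radius (5 grid points) and threshold (1.25 mm/h) follow the slide; the function names are mine, and this is a minimal sketch of the MODE-style procedure, not the actual MODE code:

```python
import numpy as np
from scipy import ndimage

def identify_objects(rain, radius=5, thresh=1.25):
    """Convolve a 2-D rainfall field (mm/h) with a disk, threshold it,
    and label contiguous regions as rain objects."""
    # Flat disk-shaped convolution kernel of the given radius.
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x**2 + y**2 <= radius**2).astype(float)
    disk /= disk.sum()

    smoothed = ndimage.convolve(rain, disk, mode="constant")
    mask = smoothed > thresh                # thresholding step
    labels, nobj = ndimage.label(mask)      # contiguous regions = objects
    return labels, nobj

def attributes(labels, obj_id):
    """Area (grid points) and centroid of one labeled object."""
    mask = labels == obj_id
    area = int(mask.sum())
    cy, cx = ndimage.center_of_mass(mask)
    return area, (cy, cx)
```

The convolution step smooths away grid-scale noise before thresholding, so objects capture coherent rain areas rather than isolated wet pixels; precipitation values inside each object can then be restored from the raw field for the intensity distributions.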
Merging and Matching
Merging of objects within the forecast and observed fields (done separately for each):
- Based entirely on the separation of object centroids: less than min(400 km, W1 + W2)
- Area, length and width of a merged area = sum over the objects merged
- Position = area-weighted average of the objects merged
Matching of forecast and observed objects:
- Similar criteria as for merging, except the threshold is min(200 km, W1 + W2)

Results from SPC Data, April-June 2005
Attributes shown:
- Fractional area (top panel): fraction of the minimum bounding rectangle that an object occupies
- Aspect ratio (bottom panel): W/L
- Abscissa: object size = square root of object area, expressed in grid cells and in kilometers
Objects are too narrow. Insufficient stratiform precipitation? Response to frontal forcing?

A Closer Look

More Examples

Error Distributions
- Both models produce rain areas that are too large
- NMM has more large errors

22-km vs. 4-km
- Smaller bias for light rain at 4 km
- Opposite bias for heavy rain
[Figure: R_f/R_o - 1 for the 4-km and 22-km configurations]

Objects in Three Dimensions
Objects defined in (x, y, t): longitude, latitude, time

2-D Slices of 3-D Objects
[Figure: slices at y = 300 (1200 km) and y = 200 (800 km)]

Probabilistic Forecasts

Conclusions
- Models make rain areas too narrow; lack of stratiform rain?
- Significant positive bias in the size of rain areas in both models, larger for NMM
- Too much heavy rain; rainfall distributions too broad; not enough moderate (stratiform) rainfall
- CSI for matching is lowest in the afternoon, slightly higher for ARW
- Object definitions generalize to 3-D: timing and propagation errors can be assessed, and there are fewer objects to compare
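The centroid-separation matching rule can be sketched as follows. Only the min(200 km, W1 + W2) threshold comes from the slide; the object representation and the greedy pairing strategy are my own simplifications, and the actual MODE matching logic may differ:

```python
import math

def match(fcst_objs, obs_objs, cap_km=200.0):
    """Greedily match forecast and observed objects by centroid separation.

    Each object is a dict with 'centroid' (x, y in km) and 'width' (km).
    A pair matches when the centroid distance is below
    min(cap_km, W1 + W2). Returns a list of (forecast, observed)
    index pairs.
    """
    pairs = []
    used = set()
    for i, f in enumerate(fcst_objs):
        for j, o in enumerate(obs_objs):
            if j in used:
                continue
            dx = f["centroid"][0] - o["centroid"][0]
            dy = f["centroid"][1] - o["centroid"][1]
            dist = math.hypot(dx, dy)
            if dist < min(cap_km, f["width"] + o["width"]):
                pairs.append((i, j))
                used.add(j)   # each observed object matches at most once
                break
    return pairs
```

Because the distance threshold scales with object width (up to the cap), large rain systems tolerate larger displacement errors than small cells, which is the point of object-based matching: a displaced but otherwise well-forecast feature still counts as a match, unlike in the grid-point CSI.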