dc.contributor.author | Siegert, Stefan | |
dc.contributor.author | Ferro, Christopher A.T. | |
dc.contributor.author | Stephenson, David B. | |
dc.contributor.author | Leutbecher, Martin | |
dc.date.accessioned | 2015-07-06T15:17:11Z | |
dc.date.issued | 2019-02-20 | |
dc.description.abstract | This study considers the application of the Ignorance score (also known as the Logarithmic score) to ensemble verification. In particular, we consider the case where an ensemble forecast is transformed to a Normal forecast distribution, and this distribution is evaluated by the Ignorance score. It is shown that the Ignorance score depends systematically on the ensemble size, such that larger ensembles yield better expected scores. An ensemble-adjusted Ignorance score is proposed, which extrapolates the score of an m-member ensemble to the score that the ensemble would achieve if it had fewer or more than m members. Using the ensemble adjustment, a fair version of the Ignorance score is derived, which is optimised if ensembles are statistically consistent with the observations. The benefit of the ensemble adjustment is illustrated by comparing Ignorance scores of ensembles of different sizes in a seasonal climate forecasting context and a medium-range weather forecasting context. An ensemble-adjusted score can be used for a fair comparison between ensembles of different sizes, and to accurately estimate the expected score of a large operational ensemble by running a much smaller hindcast ensemble. The standard Ignorance score assigns better expected scores to simple climatological ensembles or biased ensembles that have many members than to dynamical, unbiased ensembles with fewer members. By contrast, the new bias-corrected Ignorance score ranks the dynamical, unbiased ensembles better than the climatological and biased ones, independently of ensemble size. It is shown that the unbiased estimator has smaller estimator variance and error than the standard estimator, and that it is a fair verification score, which is optimised if the ensemble members are statistically consistent with the observations. The finite-ensemble bias of ensemble verification scores is discussed more broadly. It is argued that a bias correction is appropriate when forecast systems with different ensemble sizes are compared, and when an evaluation of the underlying distribution of the ensemble is of interest; possible applications to unbiased parameter estimation are discussed. | en_GB |
dc.identifier.citation | Published online 20 February 2019. | |
dc.identifier.doi | 10.1002/qj.3447 | |
dc.identifier.uri | http://hdl.handle.net/10871/17806 | |
dc.language.iso | en | en_GB |
dc.publisher | Wiley / Royal Meteorological Society | en_GB |
dc.rights.embargoreason | Under embargo until 20 February 2020 in compliance with publisher policy. | en_GB |
dc.rights | © 2018 Royal Meteorological Society. | |
dc.title | The ensemble‐adjusted Ignorance score for forecasts issued as Normal distributions | en_GB |
dc.type | Article | en_GB |
dc.identifier.issn | 0035-9009 | |
dc.description | This is the author accepted manuscript. The final version is available from Wiley / The Royal Meteorological Society via the DOI in this record. | |
dc.identifier.eissn | 1477-870X | |
dc.identifier.journal | Quarterly Journal of the Royal Meteorological Society | en_GB |