Show simple item record

dc.contributor.author: Siegert, Stefan
dc.contributor.author: Ferro, Christopher A.T.
dc.contributor.author: Stephenson, David B.
dc.contributor.author: Leutbecher, Martin
dc.date.accessioned: 2015-07-06T15:17:11Z
dc.date.issued: 2019-02-20
dc.description.abstract: This study considers the application of the Ignorance score (also known as the Logarithmic score) for ensemble verification. In particular, we consider the case where an ensemble forecast is transformed to a Normal forecast distribution, and this distribution is evaluated by the Ignorance score. It is shown that the Ignorance score depends systematically on the ensemble size, such that larger ensembles yield better expected scores. An ensemble-adjusted Ignorance score is proposed, which extrapolates the score of an m-member ensemble to the score that the ensemble would achieve if it had fewer or more than m members. Using the ensemble adjustment, a fair version of the Ignorance score is derived, which is optimised if ensembles are statistically consistent with the observations. The benefit of the ensemble adjustment is illustrated by comparing Ignorance scores of ensembles of different sizes in a seasonal climate forecasting context and a medium-range weather forecasting context. An ensemble-adjusted score can be used for a fair comparison between ensembles of different sizes, and to accurately estimate the expected score of a large operational ensemble by running a much smaller hindcast ensemble. The standard Ignorance score assigns better expected scores to simple climatological or biased ensembles with many members than to dynamical, unbiased ensembles with fewer members. By contrast, the new bias-corrected Ignorance score ranks the dynamical, unbiased ensembles above the climatological and biased ones, independently of ensemble size. It is shown that the unbiased estimator has smaller estimator variance and error than the standard estimator, and that it is a fair verification score, which is optimised if the ensemble members are statistically consistent with the observations. The finite-ensemble bias of ensemble verification scores is discussed more broadly. It is argued that a bias correction is appropriate when forecast systems with different ensemble sizes are compared, and when an evaluation of the underlying distribution of the ensemble is of interest; possible applications to unbiased parameter estimation are discussed. [en_GB]
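The size dependence described in the abstract can be demonstrated with a short simulation. The sketch below is an illustration written for this record, not code from the article: it fits a Normal distribution to synthetic ensembles by moment matching and shows that the expected naive Ignorance score improves as the ensemble grows, even when the small and large ensembles come from statistically identical forecast systems.

```python
# Monte Carlo illustration of the finite-ensemble bias of the naive
# Ignorance score, assuming a perfectly calibrated setup in which
# ensemble members and observations share the same standard Normal
# distribution. Illustrative sketch only, not taken from the paper.
import numpy as np

def ignorance_scores(ensembles, obs):
    """Ignorance (logarithmic) score of a Normal distribution fitted to
    each ensemble by moment matching: -log N(obs; mean, variance)."""
    mu = ensembles.mean(axis=1)
    var = ensembles.var(axis=1, ddof=1)  # unbiased sample variance
    return 0.5 * np.log(2.0 * np.pi * var) + (obs - mu) ** 2 / (2.0 * var)

rng = np.random.default_rng(42)

def mean_score(m, n_trials=20000):
    # Draw n_trials independent m-member ensembles and observations
    # from the same distribution (statistical consistency).
    ens = rng.standard_normal((n_trials, m))
    obs = rng.standard_normal(n_trials)
    return ignorance_scores(ens, obs).mean()

small, large = mean_score(10), mean_score(50)
# The larger ensemble achieves a better (smaller) expected score even
# though both forecast systems are statistically identical.
print(f"m=10: {small:.3f}, m=50: {large:.3f}")
```

The gap between the two averages is exactly the finite-ensemble bias that the paper's ensemble adjustment is designed to remove, so that ensembles of different sizes can be compared fairly.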
dc.identifier.citation: Published online 20 February 2019.
dc.identifier.doi: 10.1002/qj.3447
dc.identifier.uri: http://hdl.handle.net/10871/17806
dc.language.iso: en [en_GB]
dc.publisher: Wiley / Royal Meteorological Society [en_GB]
dc.rights.embargoreason: Under embargo until 20 February 2020 in compliance with publisher policy. [en_GB]
dc.rights: © 2018 Royal Meteorological Society.
dc.title: The ensemble-adjusted Ignorance score for forecasts issued as Normal distributions [en_GB]
dc.type: Article [en_GB]
dc.identifier.issn: 0035-9009
dc.description: This is the author accepted manuscript. The final version is available from Wiley / The Royal Meteorological Society via the DOI in this record.
dc.identifier.eissn: 1477-870X
dc.identifier.journal: Quarterly Journal of the Royal Meteorological Society [en_GB]


