A verification framework for interannual-to-decadal predictions experiments
© The Author(s) 2012. Open access article. This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill, and there is no agreed protocol for estimating it. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to un-initialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts, comparing different ways of ascribing forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies.
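The deterministic comparison described above amounts to asking whether the initialized hindcasts have smaller errors than the uninitialized projections against the same observations. A minimal sketch of one such metric, a mean squared skill score of the generic form MSSS = 1 − MSE_forecast / MSE_reference, is shown below; the variable names and synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative synthetic data (assumed for this sketch): ensemble-mean
# anomalies at 50 hindcast start dates. The initialized hindcasts are
# constructed to have smaller errors than the uninitialized projections.
rng = np.random.default_rng(0)
obs = rng.normal(size=50)                      # observed anomalies
init = obs + rng.normal(scale=0.5, size=50)    # initialized hindcasts
uninit = obs + rng.normal(scale=0.8, size=50)  # uninitialized projections

def msss(forecast, reference, obs):
    """Mean squared skill score of `forecast` relative to a `reference`
    forecast: 1 - MSE_forecast / MSE_reference. Positive values mean
    the forecast beats the reference on average; 1 is a perfect score."""
    mse_f = np.mean((forecast - obs) ** 2)
    mse_ref = np.mean((reference - obs) ** 2)
    return 1.0 - mse_f / mse_ref

print(round(msss(init, uninit, obs), 3))
```

In practice the paper's framework applies such comparisons after bias adjustment and at both smoothed regional and grid scales; this sketch only shows the shape of the skill-score calculation.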
However, broad conclusions that are beginning to emerge from the CMIP5 results include: (1) most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common in seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
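Point (4) above can be checked with a simple diagnostic: compare the mean ensemble spread to the RMSE of the ensemble mean over the hindcast start dates. A ratio near 1 suggests the spread is a reasonable proxy for forecast uncertainty, while a ratio well below 1 indicates an over-confident (under-dispersive) ensemble. This is a minimal sketch under assumed synthetic data, not the paper's exact probabilistic metric.

```python
import numpy as np

# Assumed synthetic hindcast ensemble: 10 members, 40 start dates,
# deliberately under-dispersive (member spread smaller than the
# error of the ensemble mean).
rng = np.random.default_rng(1)
n_members, n_starts = 10, 40
obs = rng.normal(size=n_starts)
ens = (obs
       + rng.normal(scale=0.8, size=n_starts)                 # shared error
       + rng.normal(scale=0.3, size=(n_members, n_starts)))   # member spread

def spread_error_ratio(ens, obs):
    """Mean ensemble standard deviation divided by the RMSE of the
    ensemble mean, over all start dates."""
    spread = ens.std(axis=0, ddof=1).mean()
    rmse = np.sqrt(np.mean((ens.mean(axis=0) - obs) ** 2))
    return spread / rmse

print(round(spread_error_ratio(ens, obs), 2))
```

For this constructed ensemble the ratio comes out well below 1, mirroring conclusion (4); a well-calibrated ensemble would give a value near 1.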
The authors of this paper are members of the Decadal Predictability Working Group sponsored by US CLIVAR. We thank the two reviewers of this paper, who offered considerable constructive suggestions toward the improvement of this work. The Working Group appreciates the support from the US CLIVAR office. Goddard, Gonzalez and Greene received funding from a NOAA grant (NA08OAR4320912) for work on this project. Amy Solomon received support from NOAA OAR CVP and NSF AGS #1125561. Doug Smith was supported by the Joint DECC/Defra Met Office Hadley Centre Climate Programme (GA01101) and the EU ENSEMBLES project. Deser and Meehl were supported by the Office of Science (BER), US Department of Energy, Cooperative Agreement No. DE-FC02-97ER62402, and the National Science Foundation that sponsors the National Center for Atmospheric Research. Matthew Newman was supported by the NOAA OAR CVP program. Sutton and Hawkins’ contributions were supported by the UK National Centre for Atmospheric Science. Fricker, Ferro, and Stevenson’s contributions were supported by NERC Directed Grant NE/H003509/1. Hegerl’s contribution was funded by EQUIP, NERC NE/H003533/1. Kushnir’s contribution to this work was funded by NOAA Grant NA08OAR4320912.
This is the final version of the article. Available from Springer Verlag via the DOI in this record.
Vol. 40 (1), pp. 245–272