Bob Shackleton writes,
I recently ran into a topic that your readers may find useful. The Intergovernmental Panel on Climate Change is releasing its huge Fourth Assessment Report, of which the Working Group 1 volume covers the science. You can find the document here.
The interesting topic arises mainly in Chapter 10 on Global Climate Projections and concerns Bayesian estimation of the ‘climate sensitivity’ – the equilibrium response of the global average temperature to a doubling of atmospheric CO2 concentrations. Nestled throughout the chapter, but most succinctly on pages 798-799, you’ll find a discussion, with references, of a large number of attempts to narrow down the pdf for the climate sensitivity. One reference, Annan and Hargreaves, calls attention to the importance of the assumed prior and the utility of independent data sets in determining the range. Annan’s website provides even more information.
In brief, Annan and Hargreaves point out that the use of an ‘ignorant prior’ and the failure to make use of independent data sets have resulted in the publication of much wider pdfs for the sensitivity than is actually reasonable. Their briefest discussion is here.
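Their point can be sketched numerically: multiplying a flat prior by likelihoods from two independent constraints narrows the posterior much more than either constraint alone. Here is a minimal grid-based illustration; the Gaussian likelihoods and all numbers in it are invented for the sketch and are not from Annan and Hargreaves:

```python
import numpy as np

# Grid over climate sensitivity S (degrees C per CO2 doubling)
S = np.linspace(0.1, 10.0, 2000)

def normalize(p):
    return p / p.sum()

def interval90(post):
    """Central 90% interval of a normalized pdf on the grid S."""
    cdf = np.cumsum(post)
    return S[np.searchsorted(cdf, 0.05)], S[np.searchsorted(cdf, 0.95)]

# 'Ignorant' uniform prior over the grid
prior = normalize(np.ones_like(S))

# Two hypothetical independent observational constraints on S,
# each written as a Gaussian likelihood (all numbers invented)
like_warming = np.exp(-0.5 * ((S - 3.0) / 1.5) ** 2)
like_paleo = np.exp(-0.5 * ((S - 2.8) / 1.2) ** 2)

post_one = normalize(prior * like_warming)                # one data set
post_both = normalize(prior * like_warming * like_paleo)  # both combined

print("one data set:   90%% interval = (%.1f, %.1f)" % interval90(post_one))
print("both data sets: 90%% interval = (%.1f, %.1f)" % interval90(post_both))
```

Each independent data set multiplies in another likelihood factor, so the combined posterior interval comes out noticeably tighter than the single-constraint one.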
The problem has serious science and policy implications because the very wide pdfs are being used to argue for extremely stringent emissions reductions – the idea being that even a very modest increase in concentrations has a small chance of causing very significant warming. The problem is quite bad enough even if the climate sensitivity is restricted to a 90% confidence interval of 1.5°C to 4.5°C, as is probably reasonable; if the 90% range is 1.x°C to 6°C or 7°C we face extraordinary risks even from emissions that have already occurred. I think Annan and Hargreaves are on the right track and their arguments will win the day, but at the moment it’s still a brewing controversy.
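To see why the width of the pdf matters so much for the policy argument, compare the probability of extreme warming under a narrow versus a wide 90% range. A rough sketch: fit a lognormal pdf to each quoted range and compute the upper tail (the lognormal form, the 6°C threshold, and the specific wide range of 1.2–7°C are my own illustrative assumptions):

```python
from math import erf, log, sqrt

Z90 = 1.6449  # standard-normal quantile for the 95th percentile

def lognormal_from_interval(lo, hi):
    """Fit a lognormal so that (lo, hi) is its central 90% interval."""
    median = sqrt(lo * hi)
    sigma = log(hi / median) / Z90
    return median, sigma

def tail_prob(x, median, sigma):
    """P(S > x) when ln S is Normal(ln median, sigma^2)."""
    z = (log(x) - log(median)) / sigma
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Narrow 'consensus' range of 1.5-4.5 C versus a hypothetical
# wide range of roughly 1.2-7 C (numbers chosen for illustration)
m_narrow, s_narrow = lognormal_from_interval(1.5, 4.5)
m_wide, s_wide = lognormal_from_interval(1.2, 7.0)

print("P(S > 6 C), narrow pdf:", tail_prob(6.0, m_narrow, s_narrow))
print("P(S > 6 C), wide pdf:  ", tail_prob(6.0, m_wide, s_wide))
```

Under these assumptions the wide pdf puts more than ten times as much probability on warming above 6°C as the narrow one, which is exactly the tail that drives the stringent-reduction argument.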
My comments: this is an interesting topic that I know nothing about! Looking briefly at the Annan and Hargreaves paper, they seem to focus on the prior distribution, but I suppose the key issue is the likelihood (which they also discuss briefly).
Beyond this, we all know that intervals are typically too narrow in practice, so it’s interesting to see an example where someone is claiming the intervals are too wide. (Shackleton in his note refers to “confidence intervals,” which is ok–I use the term myself–but, strictly speaking in classical terminology, these would be called “predictive intervals,” since the properties of these intervals would be evaluated unconditional on the (eventual) true values.)
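The unconditional-coverage point can be illustrated with a toy simulation: a Bayesian 90% interval is calibrated when averaged over draws of the “true” value from the prior, not conditional on any single true value. A conjugate-normal sketch, with all numbers arbitrary:

```python
import random
from math import sqrt

random.seed(1)

# Toy model: theta ~ N(3, 1), and we observe y | theta ~ N(theta, 1).
# The posterior 90% interval for theta has (close to) 90% coverage when
# evaluated unconditionally, i.e. averaged over draws of theta.
PRIOR_MEAN, PRIOR_SD, DATA_SD = 3.0, 1.0, 1.0
Z90 = 1.6449  # standard-normal quantile for the 95th percentile

post_var = 1.0 / (1.0 / PRIOR_SD**2 + 1.0 / DATA_SD**2)
half = Z90 * sqrt(post_var)

n_sims = 20000
covered = 0
for _ in range(n_sims):
    theta = random.gauss(PRIOR_MEAN, PRIOR_SD)  # "true" value this round
    y = random.gauss(theta, DATA_SD)            # observed data
    post_mean = post_var * (PRIOR_MEAN / PRIOR_SD**2 + y / DATA_SD**2)
    covered += (post_mean - half <= theta <= post_mean + half)

print("unconditional coverage:", covered / n_sims)  # close to 0.90
```

Conditional on a fixed extreme theta the coverage can differ from 90%; the calibration claim is only about the average over the prior, which is the sense in which “predictive interval” is the precise classical term.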