Are polar bears endangered? And can this be addressed using decision analysis?

From the Judgment and Decision Making list, I saw this interesting article by Scott Armstrong:

I [Armstrong], along with Kesten Green and Willie Soon, audited the forecasting methods used by the authors of the government’s administrative reports to support their strategy to list polar bears as an endangered species. As it turns out, the forecasts were based primarily on judgmental methods. We concluded that the forecasts of polar bear populations were not derived from scientific forecasting procedures. It would be irresponsible to classify polar bears as endangered on the basis of such forecasts.

Bob Clemen replied with some more general questions about how to evaluate forecasting methods:

Scott, after your paper about global warming went around (with the associated offer to bet with Al Gore), I [Clemen] went and read the paper and reflected on what I knew about the IPCC. You are correct in general, as far as I can tell: These things are rarely forecasted in a way that uses the principles of forecasting that the IIF (and you especially) have worked so hard to develop. I do not want to take issue with the principles; I think the development and promulgation of those ideas is a huge contribution toward better forecasting.

The polar bear paper made me think further. I’d like to make three points:

1) Why pick on global warming or polar bears? Isn’t it the case that many forecasts, especially those based on complex physical or natural models, are made in a way that would be “unscientific” according to the principles? I suspect that most of our big public-policy decisions are based on unscientific forecasts.

Your papers may get some attention for scientific forecasting, but I wonder if a more productive approach would be to work on specific issues, trying to improve the forecasting methods used in those particular arenas. I really worry that your challenges are liable to alienate precisely those scientists whom you want to do a better job. Why not help them instead of accusing them?

2) A statement on page 4 of the polar bear paper does raise a question. The statement is, “Some reviewers of our research have suggested that the principles do not apply to the physical sciences.” I do not want to claim that this is true, but instead to flip it on its head: To what extent have the forecasting principles themselves been developed using studies of long-range forecasts (typically judgmental) that

a) Are based on complex natural models such as climate, air quality, ground water transport, or pharmacokinetic models?
b) Have tried to forecast as far into the future as 30, 50, or 100 years (or more) under conditions of recent radical change in a critical element of the system (like increased CO2 concentration in the atmosphere)?

You may be able to answer these questions, and I would be very interested in the answer. To the extent that the forecasting principles were not based on such studies, then it may be a tough sell to use the principles to argue that climate change and similar forecasts are not valid. Kinda like extrapolating beyond the range of the data.

3) Points 1 and 2 aside, I will be the first (well, maybe second after you) to say that there are both better and worse ways to make expert-based judgmental forecasts. . . . here is a paper by a few folks who used expert climatologists back in the early 1990s to come up with long-range probabilistic judgments of climate change (sorry, not global climate change): DeWispelare, A., Herren, L., & Clemen, R. T. (1995). The use of probability elicitation in the high-level nuclear waste regulation program. International Journal of Forecasting, 11, 5-24.

My thoughts:

1. As Armstrong et al. imply in their article, the polar bears here represent the larger issue of government regulation on environmental issues. This is important when considering this as a decision-analytic problem, because some of the strategies recommended to save the polar bears would also be intended to mitigate other environmental consequences. Thus, I expect that an analysis looking at polar bears in isolation will underestimate the benefits of action here.

2. Armstrong et al. question the use of scientific consensus as a method for making environmental policy decisions. I’m not really sure what to do here: even if, as they note, forecasters have sometimes been wrong in the past, can we really do better than the consensus? The scientific consensus, with its peer review, would seem to me to be one of the best examples of the so-called wisdom of crowds. On the other hand, policy is everybody’s business, and if you disagree with the consensus you should feel free to say so. There’s some idea that people could agree on the science and disagree on the policy, thus focusing attention on the value function. But in practice it seems hard to do this: once people disagree on the policy, they go back and fight about the facts. (Consider weapons of mass destruction, IQ and ethnicity, Alger Hiss, the Swift Boat veterans, or even silly things like Noah’s ark.)
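The wisdom-of-crowds intuition can be illustrated with a toy simulation (all numbers here are invented for illustration, not taken from any of the papers discussed): when many forecasters make independent, noisy forecasts of the same quantity, their average tends to land much closer to the truth than a typical individual forecast does.

```python
import random

random.seed(1)

# Hypothetical setup: 50 forecasters, each estimating an unknown
# quantity with independent errors of the same typical size.
truth = 100.0      # the quantity being forecast (assumed, for illustration)
n_experts = 50
noise_sd = 20.0    # typical size of an individual forecaster's error

# Each forecast is the truth plus independent Gaussian noise.
forecasts = [truth + random.gauss(0, noise_sd) for _ in range(n_experts)]

# The "consensus" is simply the average of the individual forecasts.
consensus = sum(forecasts) / n_experts

avg_individual_error = sum(abs(f - truth) for f in forecasts) / n_experts
consensus_error = abs(consensus - truth)

print(f"average individual error: {avg_individual_error:.1f}")
print(f"consensus (mean) error:   {consensus_error:.1f}")
```

The consensus error shrinks roughly like 1/sqrt(n) relative to an individual's error, which is the statistical core of the wisdom-of-crowds argument; of course, the argument breaks down when the forecasters' errors are correlated rather than independent, which is exactly the worry in a field where everyone reads the same literature.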

4 thoughts on “Are polar bears endangered? And can this be addressed using decision analysis?”

  1. Some of the "principles" of the Forecasting Principles Project are very strange, at least if they are intended to be interpreted as requirements for a decision analysis. For example, one principle is "Ensure that information is reliable and that measurement error is low." Do they really mean to suggest that one should not perform a decision analysis if measurement error is high or if information is unreliable? Obviously it will still be necessary to make a decision in many instances where this "principle" is violated. Perhaps the people who came up with these principles would argue that Yes, a decision still has to be made, but decision analysis is not helpful in such cases. If this is what they would argue, then I would agree that this is sometimes true, but surely not so often as to be a "principle." Even if the only data you have are highly uncertain, you still have to make a decision somehow!

    Another oddity in the "principles" is "Do not use “fit” to develop the model." I am not sure what they mean by this; or rather, if this means what it seems to mean then I strongly disagree with it, but perhaps they are simply imprecise in what they are saying. If a model is contradicted by data, to such an extent that substantive conclusions based on the model are suspect, then the model must be rejected or changed. I would say it is wrong NOT to use "fit" to develop the model!

    The "principle" of "Be conservative in situations of high uncertainty or instability" also stands out. What does "conservative" mean in the context of the polar bears: is it "conservative" to list them as endangered (after all, isn't it conservative to "be on the safe side"?) or to leave them off the list (because it's "conservative" to not change anything unless we're sure of a basis for change). I just don't think this principle means anything.

    I could go on. I think many of these decision analysis "principles" are bogus. This does not mean Armstrong et al. are wrong about the paper of Amstrup et al., or the others that they consider — perhaps they are right that these articles are seriously deficient. But I certainly disagree with the criteria they are applying in order to make that judgment.

    Also, a fairly minor point, but: Armstrong et al. say that Amstrup et al. claim that "the designation of polar bears as an endangered species will solve the problem and will not have serious detrimental effects; and (7) there are no other policies that would produce better outcomes than those based on an endangered species classification." I cannot find any such claims in Amstrup et al. (2007).
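    To make the point about "fit" concrete, here is a toy sketch (invented data, not drawn from any of the papers under discussion) of the workflow being defended: fit a model, look at how badly it misses the data, and let that misfit drive a revision of the model.

```python
# Toy illustration of using "fit" to develop a model: data generated
# by a quadratic process, first modeled with a straight line. The
# large residual sum of squares is itself the signal that the model
# should be revised.

xs = list(range(10))
ys = [x * x for x in xs]  # hypothetical data from a quadratic process

def sse_linear(xs, ys):
    # Ordinary least-squares fit of y = a + b*x; returns the sum of
    # squared residuals of the fitted line.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def sse_quadratic(xs, ys):
    # The revised model y = x^2 fits this toy data exactly.
    return sum((y - x * x) ** 2 for x, y in zip(xs, ys))

print(sse_linear(xs, ys))     # large: the linear model misfits
print(sse_quadratic(xs, ys))  # zero: the revised model fits
```

    The sketch is only meant to show the logic: the misfit of the first model is precisely what motivates the second, which is why "do not use fit to develop the model" reads so strangely if taken literally.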

  2. I think Armstrong raises a number of very important issues with his work. To suggest that he should issue a challenge in a less confrontational manner ignores the reality that those forwarding their point of view have strayed far from a scientific approach to managing the issue.

    Armstrong's challenge raises a very material issue in these policy debates – the proper use of statistical methods and reasoning. Many scientists have not consulted sufficiently with statisticians to ensure statistical validity. I fear that many scientists are pushing too hard for the glory and are becoming more like MBAs.

  3. I think you are right that it is INCREDIBLY common that people fight about the facts (in inherently tendentious ways) when they differ on the values. This is a social function, a human nature thing, and occurs with left and right.

Comments are closed.