Updates of bad forecasts: Let’s follow them up and see what happened!

People make bad forecasts, then they move on. Do the forecasts ever get fixed? Do experts learn from their mistakes?

Let’s look at three examples.

1. The economist who kept thinking that the Soviets were catching up

Paul Samuelson:

[Graph: Samuelson’s textbook projection of Soviet GNP catching up to and overtaking U.S. GNP]

Yes, the above graph was from 1961, but “in subsequent editions Samuelson presented the same analysis again and again except the overtaking time was always pushed further into the future so by 1980 the dates were 2002 to 2012. In subsequent editions, Samuelson provided no acknowledgment of his past failure to predict and little commentary beyond remarks about ‘bad weather’ in the Soviet Union.” As late as 1989, the celebrated economist wrote, “The Soviet economy is proof that, contrary to what many skeptics had earlier believed, a socialist command economy can function and even thrive.”

This is not a left/right thing. Those of you who followed politics in the 1970s and the 1980s know that the supposed economic might of the Soviet Union was pushed by those on the left (who wanted us to imitate that system) and by those on the right (who feared we’d be overcome by it).

Anyway, for the purpose of today’s discussion, the point is that for twenty-eight years Samuelson neither corrected his error nor learned from it.

2. The contrarians who like to talk about global cooling

Steven Levitt and Stephen Dubner:

A Headline That Will Make Global-Warming Activists Apoplectic . . . It is curious that the global-warming arena is so rife with shrillness and ridicule. Where does this shrillness come from?

Yes, it was back in 2009 that the celebrated freakonomists promoted a claim that “The PDO cool mode has replaced the warm mode in the Pacific Ocean, virtually assuring us of about 30 years of global cooling”—but I don’t know that they’ve ever said a Whoops on that one, despite repeated news items showing otherwise.

Did Levitt and Dubner learn from this not to reflexively trust contrarian takes on important scientific issues? I don’t know. I did some searching and found this interview from 2015 where Levitt said,

I tell you what we were guilty of . . . We made fun of the environmentalists for getting upset about some other problem that turned out not to be true. But we didn’t do it with enough reverence, or enough shame and guilt. And I think we pointed out that it’s completely totally and actually much more religion than science. I mean what are you going to do about that? I think that’s just a fact.

So, as of 2015, they still seemed to have missed the point that you can learn from your mistakes.

3. Repeatedly biased forecasts from the U.S. Department of Transportation

[Graph: successive Department of Transportation forecasts of vehicle miles traveled (VMT), each plotted against actual VMT]

The above graph was made in 2014.

So here’s my question: Has the Department of Transportation cleaned up its act? How are those projections going? I think the key problem is not the bad forecast; it’s continuing to make the same bad forecast even after it’s been destroyed by data.

We all make mistakes. The only way to avoid making mistakes is to do nothing. But we have the responsibility to learn from our mistakes. Science may be self-correcting, but it doesn’t self-correct on its own.

Comments

  1. I love the fact that in the global warming thing they cite the PDO. Don’t look too closely at the PDO. It is essentially noise (there were several papers on this when it first came out that were ignored). It is what happens when you ignore autocorrelation and dependence in data – the mining-noise phenomenon. When corrected for the dependencies, the PDO basically predicts almost nothing. I don’t expect anyone to believe things I have done, but Michael Mann had a paper a few months ago also showing that it is noise and adds nothing to weather predictions – see https://www.nature.com/articles/s41467-019-13823-w. One way to view the problem with the PDO and related indices is that they claim to extract the dominant dynamics. But say I wanted to get a quick look at the dynamics of a series: I could look at the spectrum (or spectral matrices) of the data. Now suppose I told you I was going to estimate that using only the lag-zero covariance matrix. I don’t think a paper claiming that would get very far. But that is essentially what they are doing. Or go back and calculate the anomalies yourself and then see how well the PDO predicts the anomalies, not just in terms of correlations (the Anscombe problem) but graph them to see how many important features are missed. (A toy illustration of the lag-zero-covariance point follows at the end of this comment.)

    It is the cascade effect. Everyone just repeats the same thing without ever going back and looking at the thing for themselves.
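    A toy illustration of the lag-zero-covariance point, in Python on purely synthetic data (all numbers are made up for illustration; this is not the PDO calculation itself):

    import numpy as np

    rng = np.random.default_rng(0)
    n_time, n_grid = 120, 50      # made-up record length and grid size
    phi = 0.7                     # AR(1) coefficient: temporal persistence only

    # Spatially correlated AR(1) noise with no oscillation built in.
    dist = np.abs(np.subtract.outer(np.arange(n_grid), np.arange(n_grid)))
    L = np.linalg.cholesky(np.exp(-dist / 10.0))
    x = np.zeros((n_time, n_grid))
    for t in range(1, n_time):
        x[t] = phi * x[t - 1] + L @ rng.standard_normal(n_grid)

    # "Index" defined the PDO way: projection onto the leading eigenvector
    # of the lag-zero covariance matrix of the anomalies.
    anom = x - x.mean(axis=0)
    cov0 = anom.T @ anom / (n_time - 1)
    eigvals, eigvecs = np.linalg.eigh(cov0)
    index = anom @ eigvecs[:, -1]

    # The index inherits the persistence of the noise: strong lag-1
    # autocorrelation and runs that can be mistaken for "regimes".
    print(np.corrcoef(index[:-1], index[1:])[0, 1])

    The lag-zero covariance matrix by itself cannot distinguish this from a genuine low-frequency oscillation, which is the complaint above.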

  2. This is a blog focused on statistics. At some point, if it is going to be mentioned at all, wouldn’t it make sense to look at the actual, um, statistics of global warming? Rather than just drive-by claims?

    As an example, Wegman has been a whipping boy here for years now because of plagiarism by his grad student (the plagiarism claim against Wegman himself was dubious at best). Has there been a single mention of the statistical claims he made?

    If you feel you need to protect your career, that’s one thing, but then you should just stay away from it, The Topic that Shall Not be Mentioned.

        • Amazing technology, but I think they’re overestimating GDP growth for the next 50K years, which leads to too-optimistic projections.

        • From the carbonbrief article:

          “Avoiding the ice age was just down to luck, says Ganopolski:

          “What we show in our paper is that we escaped the glacial inception naturally, thanks to a proper combination of Earth’s orbital parameters and natural CO2 concentration.”

          So although humans aren’t behind this icy near-miss, it seems we are having a substantial impact on when the next ice age does finally appear.”

          Dr. Nielsen-Gammon,
          With the methodology used in this paper, model parameter tuning based upon paleoclimate hindcasting, are you comfortable with these statements of fact? Are you comfortable that this methodology is adequate to predict 100,000 years of future climate?

        • “Are you comfortable that this methodology is adequate to predict 100,000 years of future climate?”

          Good question. But another question: Is Dr. Nielsen-Gammon likely to read your question?

        • Another squirrel! One that won’t even be born for another several tens of thousands of years.

          The Freakonomics folks were dead wrong, and no amount of deflection on your part is going to change that fact.

          Now, tell us about sunspots and the upcoming solar minimum. That seems to have replaced the PDO in denierville some time back.

        • Not deflecting, they were wrong. Predictions are hard.

          The point – which everyone recognizes but pretends they don’t – is that the predictions/forecasts/projections coming from mainstream climate science are every bit as wacky as the ones coming from the skeptics.

          Wegman wrote a report to Congress about statistical malfeasance in paleoclimate, but you knew that. The Report that Cannot be Mentioned, I guess, since Andrew keeps deflecting.

          What’s with the squirrel thing? Is that some sort of pop culture reference? I don’t watch TV.

        • “The point – which everyone recognizes but pretends they don’t – is that the predictions/forecasts/projections coming from mainstream climate science are every bit as wacky as the ones coming from the skeptics.”

          Model projections have matched temperature trends quite well over the last decades. I suspect you know that.

          “Wegman wrote a report to Congress about statistical malfeasance in paleoclimate, but you knew that.”

          “Malfeasance” is a rather strange word for the complaints contained in Wegman’s report, though some might argue that it applies to the report itself.

          And I’m sure you know that the application of standard PCA to the data doesn’t change Mann’s reconstruction in any significant way, and that there’s been a slow but steady stream of papers investigating paleoclimate through various proxies that show the same hockey stick shape.

          Regardless, Wegman is paleohistory in the context of climate science denialism.

        • Given the range of possible parameters constrained by past climate and the changes to atmospheric composition already averted, I’m confident that we’ve dodged the bullet of a new ice age during the present orbital cycle. That gives us another 20,000 years at least. As for which of the next orbital opportunities will do the trick, the technological and social unknowns are sufficiently broad that there’s really no way to tell. I’m willing to confidently predict at least 40,000 years, under the principle that I will never be held accountable for forecasts that can only be proven wrong after I am dead.

        • “I’m confident that we’ve dodged the bullet of a new ice age during the present orbital cycle.”

          That rustling sound you hear is Milankovitch turning over in his grave. Wow.

          Even the most sophisticated climate models lack the ability to tightly constrain the dominant, nonlinear processes. A tiny error in water vapor feedback – just as a random example – would blow up the prediction in one hundred years. A thousand? Ten thousand? Absolutely crazy to express confidence over those intervals based upon the models being used. The sort of crazy that would draw well-deserved criticism on this blog if it were psychology.

        • >>A tiny error in water vapor feedback – just as a random example – would blow up the prediction in one hundred years.

          I don’t think it quite works that way.

          In some situations, large-scale patterns are more predictable than small-scale ones. It is kind of like how we can’t predict the weather 2 weeks in advance, but can be very confident that July 2021 will be hotter than January 2021 in New York City.

          The Milankovitch cycles will continue to do what they do unless we actually change the planet’s axial tilt or orbit. But if the “baseline” is warm enough it won’t tip over into a full-scale glaciation.

          There could be an ice age, but if there is, it’s because *we* cause one. Overcorrecting for global warming via geo-engineering… or, heck, maybe intentionally. There are a lot more people in the tropics than at high latitude. Once we have the ability to control the planet’s climate we don’t know what future generations will want to do with it.

        • “Milankovitch Rustling” would be an excellent name for a rock band.

          The most sophisticated climate models are not used for this purpose. They’re designed to simulate different stuff, on much different time scales. In those models, climate sensitivity is an emergent property. You need a model that you can tune. You also want a model that simulates large ice sheet dynamics, which GCMs do not.

          Instead, you want a lower-order model that can do a decent job simulating the basics of land ice sheet growth and crustal deformation, so that at a minimum it captures the gradual accumulation and relatively sudden melting that’s characteristic of continent-sized ice sheets. You’d then take advantage of the remarkable climate history of Earth over the past 3 million years, as the extremely gradual decline of CO2 concentrations started enabling ice sheets to temporarily sustain themselves on Northern Hemisphere continents when orbital parameters were just right.

          You wouldn’t need to tune the model’s climate sensitivity per se, because we don’t precisely know the forcing differences between glacial and interglacial periods, and it’s the product of the two that needs tuning. You’d tune the model to be on this hair’s-breadth between a mostly ice-free and a permanently glaciated North America and Europe.

          You’d confirm that the model can capture the fluctuations during glacial cycles well enough to distinguish the distinctive shapes of the temperature record of each glacial and interglacial cycle, since although we call them cycles the three different cycles are not individually purely periodic and never come into phase the same way twice. Fortunately for us, there are many more degrees of freedom in the paleoclimate record than there are tuning knobs. If your model is really good, it will capture the 40,000 year glaciation cycles that first predominated, before further gradual CO2 decline (with the glacial-interglacial fluctuations of CO2 superimposed) led to longer-lasting glaciations that tended to follow an erratic quasi-100,000 year cycle. But that’s asking a lot; scientists are only just beginning to succeed in that endeavor.

          If the model is good enough to capture many of the subtle details over the past 800,000 years, then it should be more than adequate to simulate to a first approximation what happens when we change interglacial CO2 concentrations by as much as we are. I expect that even a factor of two error in the response of the model to CO2 would not alter the result that we’ve dodged the current glacial threat, and I think that the model needs to be at least four times more accurate than that to perform acceptably when simulating past glaciations.

          I haven’t read the papers recently to see if they’ve done a Bayesian-type analysis to determine whether any plausible range of parameter settings can both give a decent representation of past climate and yield the next glaciation any time soon. I’ve only given you my informed prior.

          To your and @confused’s discussion about water vapor feedback, yeah, feedback is nonlinear, but it’s not nonlinear in a dynamical error-growth sense; it’s nonlinear in a mathematical sense (i.e., it involves multiplication rather than addition). If you specify your feedback error, you can pretty well estimate the sign and magnitude of your global mean temperature error in 100 years’ time. A 10% water vapor feedback error would give you, as it turns out, about a 10% temperature change error. But it’s beside the point since you don’t use that type of model for this problem.
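          A rough numeric check of that last point, using the standard linear-feedback form dT = dT_Planck / (1 - f) with illustrative, assumed values for the feedback factors (nothing here comes from an actual climate model):

          dT_planck = 1.2            # K; assumed no-feedback response, for illustration
          f_wv, f_other = 0.4, 0.1   # assumed water-vapor and other feedback factors

          def warming(f_wv, f_other, dT0=dT_planck):
              # Equilibrium response in the linear-feedback approximation.
              return dT0 / (1.0 - (f_wv + f_other))

          base = warming(f_wv, f_other)
          perturbed = warming(1.10 * f_wv, f_other)   # 10% error in the water-vapor term
          print(base, perturbed, 100 * (perturbed / base - 1))
          # With these numbers the 10% water-vapor error shifts the equilibrium
          # response by roughly 9%: it propagates multiplicatively, not chaotically.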

    • Matt:

      Is there a new rule that when two people coauthor a paper that copies from wikipedia, that only the first author gets blamed?

      Oh, yeah, has there been a single mention of the statistical claims he made? Yes, there was. Follow the link. Wegman claimed that a d-dimensional cube has 2d vertices. That ain’t right. Unless d=2. In statistics we call that a very special case.
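      For the record, the counts (a quick check, nothing deep):

      # A d-dimensional cube has 2**d vertices; 2*d instead counts its
      # (d-1)-dimensional facets. The two agree only at d = 1 and d = 2.
      for d in range(1, 6):
          print(d, 2**d, 2*d)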

      • Wegman was a LONG time ago. I’d totally forgotten about him. And the next ice age is a LONG time in the future. None of us will live to see it.

        What an odd attempt at rebuttal by Matt.

        • “defending rulebreakers like Wegman”

          The accusation by Raymond Bradley was bogus. You should read about it. If you did, he would not be your whipping boy anymore because there were no heroes in this saga. Not by a long shot.

          “these guys don’t deserve this kind of loyalty”

          Did you read his report to Congress? I am by no means a Wegman fan. His report is taboo, and I don’t like taboos in science, that is all.

          How would you feel if every time your name was mentioned, it was as “Andrew Gelman, the guy who wrote x [where x was the most prominent and embarrassing mistake of your career]”?

          But forget about Wegman. What this is really about is the power wielded by Wegman’s opponents.

        • Matt:

          1. Plagiarism is not a mistake. It’s something that people do on purpose.

          2. Wegman copied without attribution multiple times, as well as publishing things that were so wrong that I guess maybe it would’ve been better if he’d plagiarized. Copying without attribution and screwing things up were not one-off mistakes with him; they were a pattern of behavior.

          3. I don’t know what you mean when you say his report is “taboo.” It’s a public report, right? Quote it if you’d like.

          4. If you want, feel free to refer to me as “Andrew Gelman, the person who published the false theorem.” You can link directly to the correction notice.

          It doesn’t make sense to litigate this here on your blog, but I will just point out that if you knew all the details of the Wegman debacle, as a proponent of honesty and integrity in science, you would not be using Wegman as a good example of anything. On the plagiarism, Wegman wrote that his team “never intended to take intellectual credit for any aspect of paleoclimate reconstruction science or for any original research aspect of social network analysis.” That is obviously true from what happened: he chose not to follow the rules of attribution when drawing from textbooks for a study that was not peer-reviewed. All these issues were either in the Background section of the report or the silly social network section.

          You should read this:

          https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2004GL021750

          It’s the perfect cautionary tale about fooling yourself with statistical processes that you do not understand. Each of Mann’s decisions seems reasonable in isolation. But put them together and the result is that he totally botched it in spectacular fashion, and then doubled down and attacked his critics when it was pointed out.

          Incidentally, despite the talking points from folks like dhogaza, there are no other reconstructions showing a hockey stick without both including Mann’s data, and using dubious tactics to make that same hockey stick emerge at the end after a lot of sausagemaking.

        • Matt:

          Wegman’s a good example of the perils of copying from wikipedia, no? You copy and you can introduce errors. A good warning for people not to put their name on cut-and-paste jobs.

        • Matt –

          > What this is really about is the power wielded by Wegman’s opponents.

          Remember when conservatives were supposedly the champions for personal responsibility? Seems that now they’re reduced to all self-victimization all the time.

          It’s amusing to see you rationalize an acceptance of what he did by complaining about the behaviors of others.

          It’s an obscure issue when considered in the full scale.

          It has little to do with the actual state of the science on the whole.

          It’s even totally unrelated to the relatively minor issue of the whole “global cooling” nonsense mentioned in the OP.

          It’s kind of amusing that you think climate activists have some enormous power. Have you looked up who’s president lately, and what his policies are?

          Why are you bringing up this issue as if it has some deep relevance to the current context?

        • “Why are you bringing up this issue as if it has some deep relevance to the current context?”

          Hi Joshua,
          I presume you are the same Joshua that posts on climate blogs.

          There have been numerous pokes at climate skeptics on this blog, which is fine. Plenty of wack stuff out there. Meanwhile, the claims made by AGW proponents have been just as wack in many cases. It looks to me like preying on the weak, that’s all. This is a post about bad forecasts and it should be equal opportunity re climate.

          The simple fact is that Michael Mann’s hockey stick is a far more consequential example of bad statistics than any criticized on this blog. Why do you suppose that is?

          Incidentally, I believe in benchmarking in all things. That is why my political leanings are towards European socialism. I just happen to be a climate skeptic because I have one form of experience that no climate scientist seems to have: I have seen how breathtakingly difficult it is to model systems that are simpler than the climate. There is no substitute for running your model (forward in time) and seeing it fail numerous times and fixing the problems. At this point in time, climate models have made tremendous strides and are now reasonably good at interpolation (predictions within the realm of recent Holocene climate). But predicting the onset of the next ice age is extrapolation, and the models just are not constrained enough.

        • Matt,
          The statistical errors in the original ‘hockey stick’ graph were not consequential in the sense of changing the numbers: you get pretty much the same result if you do the statistics correctly. Incidentally, I think this is fairly strong evidence for the mistakes being genuine mistakes and not intentional distortions.
          But the statistical errors were extremely consequential in the politics of climate change. Here we are, 15 years later, and people are still pointing at those numerically inconsequential errors to try to discredit the science of climate change.
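          For readers who haven’t followed the details: the most-discussed statistical issue was the centering convention used before the principal-components step, where anomalies were computed relative to a calibration sub-period rather than the full record. Here is a toy sketch of the two conventions on synthetic data (an illustration of the mechanics only, not the actual reconstruction code or proxy data):

          import numpy as np

          rng = np.random.default_rng(1)
          n_time, n_proxies = 580, 20       # made-up record length and proxy count
          X = np.cumsum(0.1 * rng.standard_normal((n_time, n_proxies)), axis=0)

          def leading_pc(X, center_rows):
              # PC1 after centering each series on its mean over `center_rows`.
              Xc = X - X[center_rows].mean(axis=0)
              _, _, vt = np.linalg.svd(Xc, full_matrices=False)
              return Xc @ vt[0]

          pc_full = leading_pc(X, slice(None))         # standard: full-record mean
          pc_short = leading_pc(X, slice(-80, None))   # "short": late sub-period mean
          print(np.corrcoef(pc_full, pc_short)[0, 1])

          Comparing the two conventions on the real proxy network is exactly the exercise the centering debate turns on; the point above is that, for the actual data, the reconstruction comes out essentially the same either way.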

        • Yah Matt. Same dude.

          > It looks to me like preying on the weak, that’s all. This is a post about bad forecasts and it should be equal opportunity re climate.

          By what metric are you measuring strength? How do you determine who is weak and who is strong?

          I see no reason why there *shouldn’t* be equal opportunity – but neither is there some kind of magical balance fairy. You aren’t correcting an imbalance, imo. I don’t think you correct for Wegman’s errors, or the global cooling nonsense, by playing the victim or by talking about Mann. Imo, all you’re doing when you engage at that level is contributing to tribal distraction. I think you don’t see that because you’re “triggered” and caught in a loop working through the legacy of an unresolved sense of grievance.

          > The simple fact is that Michael Mann’s hockey stick is a far more consequential example of bad statistics than any criticized on this blog.

          I don’t think that is a simple fact. The hockey stick is huge in the climate-o-sphere, which is a freakish world inhabited by a bunch of outliers and fanatics. In the real world the hockey stick is of little significance, as is Wegman’s plagiarism.

          > Why do you suppose that is?

          I’ll reinterpret that question to be “Why doesn’t Andrew bring up the hockey stick to critique the statistics involved?”. I don’t know. I would presume because he doesn’t think the issue is current, or maybe even that there’s a there there.

          Maybe you should ask him. But whether or not he does or doesn’t post about the hockey stick, actually has zero relevance to the global cooling nonsense. I would suggest that your perception that there’s some direct relevance (via a magical balance fairy) is a tic that served no real function.

          You’re trying to resolve a real issue (how bias works for all of us) by running down a rabbit hole. Andrew posts a lot on the interface between statistical reasoning and bias. Surely you know that yes?

        • “The simple fact is that Michael Mann’s hockey stick is a far more consequential example of bad statistics than any criticized on this blog. Why do you suppose that is?”

          We understand that you and your ilk have been claiming this for about 20 years now, yet the hockey stick stands, even when his particular twist on PCA isn’t used to analyze his proxy data.

          And there have been so many subsequent proxy-based studies that yield very similar hockey sticks that I’ve lost count. Frankly, it got boring after about the first decade of replication using different statistical techniques and different proxy data.

          So, no, “Mann’s hockey stick is a far more consequential example of bad statistics than any criticized on this blog” is simply false. It was not and is not an artifact of bad statistics.

          “But predicting the onset of the next ice age is extrapolation, and the models just are not constrained enough.”

          Now this is evidence that you are simply ignorant. Climate models used to model the near-term changes in climate have no need to incorporate Milankovitch cycles, which last tens of thousands of years. They are not designed to project the climate over that timeframe. Any effort to do so would be fruitless given that we don’t know what kind of energy sources will be available to humans over the next thousands of years, what the eventual population of humans on the planet will be over that timespan (it could be zero), etc.

          That does not mean that climate models are useless for what they are designed to do: inform us about how the climate might react to changes in CO2 concentrations over the next few decades.

          Just stop.

    • Zhou:

      That won’t work as an explanation for items 1 and 2 above. None of Samuelson, Levitt, or Dubner was being paid by people pushing those particular forecasts. I think it’s simpler: I think they got things wrong and didn’t want to admit they’d made a mistake.

      • Andrew:

        I am with Zhou on Samuelson at least. Wasn’t he pushing a way of doing macroeconomics that required the economy to be predictable based on a relatively small number of economic metrics? In a sense, he was paid (in reputation and income) for the idea that the economy could be forecast. It wasn’t like his forecasts of the Soviet Union were wrong because he made a math error. They were wrong because variables that weren’t being measured affected growth. To admit that would have the consequence that maybe in the US unmeasured (maybe unmeasurable) variables could also make the forecasts of his school of economic thought unreliable.

        • Steve:

          I’d guess I’d label this as ideology on Samuelson’s part rather than financial incentive. The tricky thing is that, back in the 1960s and 1970s, people on the left had ideological reasons for saying that the Soviet system was powerful (as it supported their views about economic planning), and people on the right also had ideological reasons for saying the Soviet system was powerful (as it supported their views about Cold War military actions). A consensus, of sorts, even if the two sides were coming at it for different reasons.

  3. I think #2 is a little less straight-forward than #1 and #3. #2 comes across as a criticism of other climate forecasts that turned out wrong. So you are presenting the criticism of the forecast as a forecast itself.

    Also, the climate data itself is not so straightforward from what I understand. Several series are usually combined together to make longer-term histories. Other series have had adjustments made to them. It takes an expert really.

  4. The purpose of the DOT’s high forecasts is to get more money into the DOT budget (same with the state DOTs). So what’s the incentive to correct them?

    (I see Zhou Fang just posted a similar sentiment.)

  5. The official answer to the question about DOT. (C&P = DOT’s Status of the Nation’s Highways, Bridges, and Transit: Conditions and Performance) (VMT = vehicle miles traveled)

    The 1995 C&P Report noted that the average annual VMT growth rate was 3.5 percent from 1966 to 1993. While the 1995 report’s projected growth rate of 2.15 percent from 1993 to 2013 was a step downward, it significantly overestimated actual VMT growth over that period, which was 1.33 percent per annum. This finding is consistent with an analysis of travel forecasts for all editions, presented in the 2015 C&P Report, which suggested that States have tended to underestimate future VMT during times of rapid travel growth and tended to overestimate future VMT at times when travel growth was slowing.

    In the C&P Report editions from 1995 (the first to use HERS) through 2010, the HERS simulations relied exclusively on these HPMS forecasts to project future traffic. The disadvantages to this approach have been: (a) the ambiguity as to how the forecasts are derived, which makes it difficult to evaluate them and to judge how to incorporate them within HERS; and (b) the apparent slowness of the States to factor into their forecasts recent changes in the trend rate of national VMT growth (as discussed in the 2015 C&P Report, Chapter 9).

    In light of these concerns, C&P Report editions from 1999 onward have included simulations that used alternatives to the HPMS forecasts….

    The 2015 edition of the C&P Report further de-emphasized the national VMT growth implied by the HPMS-based forecasts by using it for sensitivity testing only, and basing the primary modeling results on an alternative forecast. In contrast with the more subjective selection of alternative forecasts in earlier editions, the 2015 edition relied on the model-based forecasts in the FHWA National Vehicle Miles Traveled Projection, which was first released in May 2014. The Volpe National Transportation Systems Center developed the supporting model, which forecasts future changes in passenger and freight VMT based on predicted changes in demographic and economic conditions…

    The HERS and NBIAS analyses presented elsewhere in this C&P Report source their national-level forecasts of VMT exclusively from the May 2017 release of the FHWA National Vehicle Miles Traveled Projection. For all vehicle types, this release forecasts that VMT growth will average 1.07 percent annually over 20 years starting in 2015.

    https://www.fhwa.dot.gov/policy/23cpr/pdfs/23cpr.pdf (released November 2019)

    (signed) Your Trusted Source for Too Much Information
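    A quick compounding check on the growth rates quoted above (back-of-the-envelope arithmetic, not taken from the C&P Report):

    projected_rate, actual_rate, years = 0.0215, 0.0133, 20   # the 1993-2013 window

    projected_growth = (1 + projected_rate) ** years   # about 1.53x
    actual_growth = (1 + actual_rate) ** years         # about 1.30x
    print(projected_growth, actual_growth, projected_growth / actual_growth - 1)
    # The 1995 projection implies roughly 17-18% more total VMT in 2013 than
    # actually occurred: a modest-looking annual bias compounds quickly.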

    • Hmm. This seems odd, as we have tree farms, so the total world forest cover is not going to steadily decrease toward zero. I think most wood in the US comes from tree farms, doesn’t it?

      For that matter I’m pretty sure there is more forest cover in the US than there was in the early 20th century as farming has moved west and forests have regrown.

      So world deforestation isn’t really one problem; forests in developed temperate-zone nations are kind of a different thing from tropical rainforests that are being reduced (and boreal forests are yet another separate thing…)

      • US imports wood mostly from Canada but also from many places. Yes, US forest cover is increasing, but never will return to “natural”.

        Forest cover is rising partly because wood is used for fewer things and used more efficiently where it is used. Lots of recycled paper; composite beams; chipboard and other manufactured wood products.

        Apocalyptic environmentalism has always been a farce. The Green Revolution was already decades old when Paul Ehrlich was born.

        • >>Yes, US forest cover is increasing, but never will return to “natural”.

          Oh, presumably not; I just meant that the planet is not in a “race to the bottom” to zero forest cover.

          And what is “natural” anyway?

          It is now thought more and more that Native American use of fire significantly expanded grasslands, etc. in North America, IIRC. And there were major collapses of civilizations – one around 1300 AD (Anasazi/Ancestral Puebloan and Mississippian/Cahokia both about this time) and then another one when the European diseases hit. So what the first European settlers found in North America (our first written records) may not really be “typical” or “natural”.

        • Yes, we’re on the same page.

          “Natural” is hard to finger for other reasons too: did humans destroy the North American megafauna? If so then to find “natural” you have to go back at least 13K yrs, before large numbers of humans in North America.

          Which wouldn’t be surprising since in Australia people have been using fire to modify the environment for many thousands of years and are likely responsible for the destruction of the megafauna there too.

        • Yes, exactly. There is no “purely natural” North American ecosystem with the current ‘palette’ of species.

          There are other things like this… the Eastern subspecies of Purple Martins is entirely dependent on human-provided nest sites.

          IMO it is more useful to look at biodiversity and ecosystem processes/functions/services than to idealize a state of ‘unaffected’ nature.

          Biodiversity and ecosystem processes are measurable (though imperfectly… we still don’t know anything like all species on Earth) and thus damage to them is determinable.

        • Ecosystem processes and services are measurable, but they aren’t limited by any fundamental “ecological capacity” as Ehrlich and the “Apocalyptics” believe. Ecosystem processes and services can be and have already been substituted for on a massive scale by technology. Farming is one blatant example. That’s where apocalyptic thinking fails – the belief in a system closed to information.

        • I more or less agree.

          Modern agriculture has for example greatly expanded the nitrogen-fixing capacity of the planet. In college 10 years ago I learned that 50% of total nitrogen fixation was artificial (Haber-Bosch process), and that’s probably still about right.

          There are of course “second-order” harms from this (dead zones in coastal areas, e.g. the area of the Gulf of Mexico right around the mouth of the Mississippi River, from oxygen exhaustion due to over-fertilization).

          But it’s not nearly as simple as Earth having a fixed carrying capacity independent of technology.

          Which doesn’t mean there aren’t real and major problems! Climate change for example.

          But I don’t think climate change will lead to the collapse of civilization – and if it did, it would be by indirect political effects (sea level rise -> more refugees -> more conflicts -> one of them goes nuclear, for example).

          I’m pretty sure technological civilization could exist in, say, a Cretaceous climate, and battery technology is good enough now that I think electric vehicles etc. will take over quickly enough that we won’t get to 1000 ppm CO2 or anything.

        • “50% of total nitrogen fixation was artificial (Haber-Bosch process), and that’s probably still about right.”

          I’ll buy that. And the amount of artificial nitrogen fixation is a function of economics, not of some physical limitation of the process, oui? So at higher price, it would increase. And the amount of food available per unit of N fixation is rising because improved crop varieties generate more food and less stalk; travel better; and last longer. They are also preserved better by improved transportation and storage.

          That would be an interesting calculation: the annual number of calories consumed per unit of agricultural nitrogen fixation since, say, 1500.

        • Yeah.

          If we really needed more food production (we don’t right now; world hunger is a problem of distribution, not production) we could probably also do a lot more with genetic engineering of crops than we are doing right now. And the crops we primarily grow in the US aren’t the most efficient in terms of calories/acre either.

          IMO a lot of the major problems (on the human level, as opposed to the biodiversity level) we *will* see from climate change will be due to many parts of the world not having the technological/economic/etc. resources to *do* the necessary adaptation.

          If hurricane risks etc. become unacceptable close to the East/Gulf Coast, the US population might shift westward, toward the Great Lakes, and to inland cities like Dallas, Atlanta, Denver, Chicago, etc.

          But places like Haiti or the Bahamas are basically all vulnerable, Polynesian nations are very vulnerable to sea level rise, Bangladesh probably doesn’t have the economic resources to deal with movement on that scale, etc.

        • I used to pose this as a question: what is the difference between a beaver dam and Hoover dam? Is it just scale? That doesn’t make sense, since on a local level the effects of the beaver dam are just as pronounced. A famous ecologist once provided me with the answer: man does not operate by the laws of natural selection. I never found that convincing, but I still don’t have a good answer to the question.

        • Scalability is a huge difference. Beavers are incapable of blocking large rivers. The near absence of salmon in the Salmon River due to human dams is just one example of many ecosystem changes that beavers would be unable to cause through damming.

        • Of course the scale matters. But I don’t think this answers my question. There are small human made dams and entire rivers where a number of beaver dams have had larger cumulative impacts. In those cases, would you call the human dam “natural” and the beaver dams “unnatural?”

        • Beaver won’t dam the Columbia, but as Dale says they are capable of major impacts. In areas like in the Canadian taiga they have a major impact on the distribution of forests and lakes. They build their dams to flood the largest possible area because the pond water provides both protection and mobility. The flooding kills the trees near stream level. As the backup grows into a large pond and then a series of ponds or lakes, more beaver move in and start to decimate the forest around the water. I’ve seen stump farms 30-40 yd wide around a beaver pond, and I’ve seen beaver stumps and trails as high as 50ft above the water on rock outcrops!

        • Note that the words “impact” or “effect” need unpacking. If the sense of “impact” is “but-for consequence”, then both types of dams have impact. If the sense of “impact” is “major change”, then only the human dam has a big impact. I would argue that’s because for the most part the rest of the ecosystem has had time to adapt to beavers but has not had time to adapt to industrialized humanity.

        • “In those cases, would you call the human dam “natural” and the beaver dams “unnatural?””

          Nice, since all I mentioned was the scalability of beaver dam technology vs. human dam technology, without diving into the issue of “natural” vs. “unnatural”.

          Dictionaries and common usage of the English language answer the “natural” vs. “unnatural” question. The distinction is clear and has nothing to do with the comparative impact of one vs. the other. Nor does it have anything to do with ecology as a field of science. I’m really tired of those who dismiss conservation concerns over a disagreement about semantics as a way of dodging the important issues, which might actually require that one learn about the underlying science.

          “A famous ecologist once provided me with the answer: man does not operate by the laws of natural selection. I never found that convincing, but I still don’t have a good answer to the question.”

          Also interesting is that you say you never found the ecologist’s answer convincing. But you never say WHY you don’t find it convincing. Note that presenting a convincing rebuttal just might require that you overturn a few decades of ecological science. But I’m sure you’re up to it. Without even learning anything about it.

        • I think John N-G above is right about the real difference: pace of change. Other animals alter the landscape; elephants destroy trees and limit forest expansion/expand the savanna in Africa, IIRC.

          But the way in which humans alter the landscape (and other ecological processes, e.g. carbon and nitrogen cycles) has changed dramatically in only a few centuries, which is basically nothing in evolutionary time.

          IMO, climate change is largely the same issue. It’s not the *nature* of the change that’s a problem nearly so much as it is the *speed* of the change. In the late Mesozoic and early Cenozoic CO2 levels were far higher and temperatures far warmer. But the *rate of increase* in CO2 and temperature is now much higher, and things can adapt only so fast.

          (I’m not really sure what it means to say that “man does not operate by the laws of natural selection”. Natural selection isn’t really avoidable, even at our level of medical technology; it’s inevitable so long as genetic differences affect survival and reproduction.)

        • I don’t propose to have any answers. The question of what is “natural” and what is not has perplexed me for many years and I am genuinely interested in insights that you and others can provide. But to simply declare that human actions are, by definition, not “natural,” does not help me. We are a species and not immune to natural selection. However, many of our technologies appear to contradict that – though I’ll admit to being confused about that point. For example, many human technologies of aggression (e.g., guns) don’t select the “weakest” in the way that predators select the weakest members of prey species. But I don’t think it makes any sense to say that nothing humans do follows natural selection. So, how do we distinguish between things that are “natural” and things that are not?

          I happen to believe that this is a difficult question. I also believe that it matters, in that humans need to distinguish between acts that are consistent with our natural environment and those that are not. Yet it seems like this, as with many other things, are separated into polar extremes. Some believe that everything people do is “natural” and then dismiss the idea that any human actions are “unnatural,” while others take comfort in the fact that human actions have become so extreme (in scale and other dimensions) that it is safe to say humans are not subject to natural selection so that we move in the “right” direction.

          So, please drop the aggressive tone that I am not offering a convincing rebuttal. I am genuinely perplexed by the question.

        • Dale said,
          “I happen to believe that this is a difficult question. I also believe that it matters, in that humans need to distinguish between acts that are consistent with our natural environment and those that are not. Yet it seems like this, as with many other things, are separated into polar extremes. Some believe that everything people do is “natural” and then dismiss the idea that any human actions are “unnatural,” while others take comfort in the fact that human actions have become so extreme (in scale and other dimensions) that it is safe to say humans are not subject to natural selection so that we move in the “right” direction.

          So, please drop the aggressive tone that I am not offering a convincing rebuttal. I am genuinely perplexed by the question.”

          Well put.

        • Oops, that “Anonymous” above (starting “I think John N-G above is right”) was me.

          It occurs to me that what was probably meant by “man does not operate by the laws of natural selection” was something like “human behavior and its effects on the environment changes far faster than an evolutionary timescale” (since changes in human behavior are driven by culture/learning, not genetics). Not that humans are exempt from natural selection.

          >>However, many of our technologies appear to contradict that – though I’ll admit to being confused about that point. For example, many human technologies of aggression (e.g., guns) don’t select the “weakest” in the way that predators select the weakest members of prey species.

          Natural selection doesn’t necessarily work the way one would expect. Selecting for the “strongest” or against the “weakest” isn’t actually part of the definition.

          They don’t really. Natural selection exists as long as there are inheritable variations that affect survival/reproduction. Our technologies change *what* is selected, but don’t eliminate it.

          In a population where, for example, everyone was genetically engineered, natural selection wouldn’t be relevant (since genetics would no longer actually be inherited). But short of extremes like that it’s pretty much unavoidable.

        • Confused said,
          “Natural selection doesn’t necessarily work the way one would expect. Selecting for the “strongest” or against the “weakest” isn’t actually part of the definition.”

          Hmm — that “one” is ambiguous.

          FWIW: My description of natural selection is “Survival of the fit enough to have survived under the circumstances in which they have so far existed”.
          Not “fittest”, just “fit enough”.

    • I had a chance to look at that paper. Embarrassing is right. :( For the authors and for science as a whole.

      You don’t have to read far to find the common fallacies of Environmental Apocalyptic literature:

      1) Cherry-picks the time frame of the relationship
      2) Freezes human behavior in the selected time frame and projects decades/centuries into the future (ignores market/pricing responses)
      3) Ignores both existing and future technology (e.g., technology already provides many ecosystem services)
      4) Proposed disaster is safely in the future

      That’s before you even get to the model.

  6. One of my favorite examples of repeatedly wrong forecasts is life expectancy. While there is much vested interest in underestimating increases in longevity for annuities, even purely academic assessments of the ultimate ceilings of life expectancy have been wrong (always biased low) – for decades. See the fun graphic in Oeppen and Vaupel’s paper “Broken limits to life expectancy,” which you can find here (http://user.demogr.mpg.de/jwv/pdf/scienceMay2002.pdf).

    Some of the forecasts(!) were already wrong at the time of issuing.

    • I don’t think this is really true anymore (and indeed, your link is almost 20 years old). There almost certainly are biological limits to longevity barring major breakthroughs in medicine that don’t seem imminent. Demographers are more engaged with biology and medicine than they have ever been, and that remains the consensus. Life expectancy growth has slowed down globally. Maybe it will speed up again, but I highly doubt there’s MUCH more that can be done for life expectancy in most of the developed world barring things that seem like longshots right now.

    • Funnily enough, it looks like the final projection overcorrected. After years of predicting wrongly that the life expectancy of women in Japan was going to start leveling off, by god they weren’t going to make that mistake again, so in 2001 they finally projected that the observed trend would continue for another fifteen years or so before starting to flatten…and instead the curve pretty much went flat in 2001. The life expectancy of women in Japan today is around 84 years, slightly below the 1999 projection (which the authors of that article claimed was too pessimistic) and well below the 88 years predicted by the UN in 2001, a projection which was praised by the authors.

      gg is right, though (unless the authors were cherry-picking, which is possible): expert forecasts were too low for decades….right up to the point where they were too high.

      Making predictions is hard, especially about the future.

  7. Andrew, you’re being too hard on Samuelson and his chart. I was curious about the chart’s context and so found the chart in the 2nd edition of his book, published in 1968 (page 3). And in that edition of his book, Samuelson caveats the chart with…

    “The last diagram of this book [inserted reference to this chart in a previous edition] shows in one graphic picture the economic growth prospects for the Soviet Union and the United States. As we shall see, the economist does not have the clairvoyance of the astronomer, who can tell you exactly where all the planets will be in the year 2,000. The economist must give the calculated prudent odds.”

    That is, Samuelson doesn’t appear to be promoting the Soviet GDP forecast necessarily. Just the opposite! He’s pointing out the uncertainty around such extrapolations. Unless Samuelson said something otherwise in earlier or later editions, the criticism here misses the context.

    • Logit:

      Interesting. I’ve not actually looked at all these editions of the Samuelson book. I was basing my criticism on the article by Levy and Peart as summarized by Alex Tabarrok’s blog post. I guess I should look into the details and report back!
