The climate economics echo chamber: Gremlins and the people (including a Nobel prize winner) who support them

Jay Coggins, a professor of applied economics at the University of Minnesota, writes in with some thoughts about serious problems within the field of environmental economics:

Your latest on Tol [a discussion of a really bad paper he published in The Review of Environmental Economics and Policy, “the official journal of the Association of Environmental and Resource Economists and the European Association of Environmental and Resource Economists.”] got me [Coggins] thinking. People might suppose the gremlins hubbub dinged his reputation, but no. Tol is still considered an elite climate economist. In 2016, when a lineup of heavy climate-econ hitters wrote a Policy Forum piece for Science, Tol was included as a co-author. That paper is safely post-gremlins; his reputation remains intact.

Tol’s damage-related papers get published partly because the academic climate-econ crowd is a bit of an echo chamber. My conjecture is that they review each other’s papers and tell editors to print them. What else is an editor to do? The problem is not just that Richard Tol can write authentic academese.

Why do I think it’s an echo chamber? Because Tol said so himself, in a 2013 JED&C paper from his continuing series. Here he’s explaining why the uncertainty around his results might be larger than it appears: “[T]he researchers who published impact estimates are from a small and close-knit community who may be subject to group-thinking, peer pressure and self-censoring.”

Also, the method Tol introduced in the 2009 JEP paper remains a key part of DICE, the integrated assessment model (IAM) for which William Nordhaus won the 2018 economics Nobel. I’m not sure the connection between DICE and Tol’s paper is much appreciated.

In the 2013 version of DICE, Nordhaus based his monetary climate-damage function on Tol’s 2009 JEP results. That function connects any amount of warming to the resulting loss in GDP. Here are Nordhaus and Sztorc, p. 11 of the 2013 DICE user’s manual: “DICE-2013R uses estimates of monetized damages from the Tol (2009) survey as the starting point. . . . I [sic] have added an adjustment of 25 percent of the monetized damages to reflect these non-monetized impacts. While this is consistent with the estimates from other studies (see Hope 2011, Anthoff and Tol 2010, and FUND 2013), it is recognized that this is largely a judgmental adjustment.” Notice that he uses FUND, Tol’s IAM, as a benchmark. My function looks like Tol’s, he seems to be saying, so I’m good. This is the echo chamber.

When describing the revised 2016 version, in his 2017 PNAS paper, on p. 1519 Nordhaus writes: “The damage function was revised in the 2016 version to reflect new findings. The 2013 version relied on estimates of monetized damages from Tol (2009). It turns out that that survey contained several numerical errors (JEP Editorial Note, 2015). The current version continues to rely on existing damage studies, but these were collected by Andrew Moffat and the author and independently verified.”

So, as of 2013, Nordhaus thought highly enough of Tol’s 2009 paper to make those “estimates” the starting point for his DICE damage function. Then he used Tol’s FUND as a comparison to check whether his final damage function looked right. In 2016, when the gremlins paper had been discredited, he pivoted and conducted his own exercise, rooted in Tol’s idea but more sophisticated, and with very similar results. He also continues (p. 3) to apply that mysterious extra increment, just because: “We make a judgmental adjustment of 25% to cover unquantified sectors.”

The Nordhaus-Moffat study incorporates more impact numbers, but is still largely based on IAM results, including those of previous DICE versions. Nordhaus appears to be happy with Tol’s fundamental approach, just not with the execution. But note: for any level of warming, monetary climate damages are smaller in the 2016 DICE, based upon Nordhaus’s results (−0.236% GDP lost per degree warming squared), than in the 2013 DICE, based upon Tol’s flawed 2009 results (−0.267%). Nordhaus’s statistical method is described in the SI to the PNAS paper; I expect a statistician like yourself will find it interesting reading. Also, “independently verified” by whom?
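To see what those two coefficients imply in practice, here is a quick sketch (mine, in Python, not Nordhaus’s code) of a quadratic DICE-style damage function with each value plugged in; everything beyond the two quoted coefficients is a simplification:

```python
# A DICE-style quadratic damage function: fraction of GDP lost at a given
# level of warming. The two coefficients are the ones quoted above
# (percent of GDP lost per degree C squared).

def damage_fraction(warming_c, coeff_pct=0.236):
    """Fraction of GDP lost, quadratic in warming (degrees C)."""
    return (coeff_pct / 100.0) * warming_c ** 2

for t in (1.5, 2.0, 3.0, 4.0):
    d2016 = 100 * damage_fraction(t, 0.236)  # Nordhaus-Moffat, 2016 DICE
    d2013 = 100 * damage_fraction(t, 0.267)  # Tol (2009)-based, 2013 DICE
    print(f"{t:.1f}C warming: {d2016:.2f}% vs {d2013:.2f}% of GDP")
```

At 4 degrees of warming, for instance, the two coefficients give roughly 3.8% versus 4.3% of GDP, which is the sense in which the 2016 damages are uniformly smaller.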

I know you understand that the “data” Tol continues to use for his damage papers, including the REEP paper you reference, are just the numbers that come out of a subset of IAMs, including his own. Nordhaus 2017 used numbers from a different subset of IAMs and some additional studies. Only a few of these numbers can be said to come from a data-generating process based in actual evidence. There is little empirical basis for any of it, at least not of the kind a data person like you would recognize.

And consider for a moment the self-referential nature of this enterprise. A bunch of people, including Tol and Nordhaus, build IAMs and produce numbers purporting to show the economic damage from a given level of warming. Those numbers become a “dataset” that Tol uses to obtain a statistical relationship that forms the basis of the 2013 DICE damage function. Nordhaus, in turn, uses similar numbers from IAMs, again including both his and Tol’s, to obtain a statistical relationship that forms the basis of the 2016 DICE damage function.

I’m not a fan of climate-econ IAMs, and I’m not alone. Robert Pindyck has been hating on them, loudly, for several years. Pindyck levels a series of specific criticisms at DICE and the other IAMs, one of which is precisely that the damage functions are not empirically based. “[W]hen it comes to the damage function,” Pindyck writes, “we know virtually nothing—there is no theory and are no data that we can draw from.” That’s changing, as more people try to quantify empirically the effect of warming on economic performance, including into the distant future.

My main complaint is this: in the base DICE configuration, off the shelf, the “optimal” level of warming is 4.08 degrees C. This, says Nordhaus, is the sweet spot, as good as it gets. His damage function is just one element driving that result. But compare his number to the aspirational Paris goal of 1.5 degrees warming, or the hard Paris goal of 2 degrees. The recommendations of DICE on one hand, and almost all the world’s countries and elite climate scientists on the other, cannot both be right.

Tol and Sokal teach different lessons. Unlike Sokal, Tol is writing for his own crowd as an esteemed insider. If the method he introduced in 2009 were ever to be truly discredited, the damage function in DICE, as currently configured, would crumble. Also unlike Sokal, the Tol and Nordhaus models really matter. Theirs and one other IAM, Chris Hope’s PAGE, together form the backbone of official estimates of the social cost of carbon, a major focus of discourse around U.S. climate policy. Unlike Sokal’s gambit, this is no academic game.

tl;dr: The real problem is not the gremlins, so much as the people who tolerate them. Gremlins will always be with us: there will always be lazy scholars who are better writers than researchers, who can use the tricks of the trade to three-card-monte their mistakes. The big problem comes when leaders in the field, Nobel prize winners even, people who should know better, decide that they’d rather play nice than go for the truth.

It’s really too bad. Environmental economics is important, more important than some people’s careers or their desire to have an h-index of 3000 or meet the king of Sweden or whatever.

What’s going on here? Tol’s part of the club. People in the club think that other people in the club are absolutely brilliant. Tol’s work must be wonderful, right? He gets so many citations. This is an interesting example because it goes beyond the political left and right: it’s more a question of the ins and the outs. These guys are on the inside, trapped in their own bubble.

Again, it’s not about Tol or Nordhaus in particular: they’re just examples of the larger problem of the circularity of scientific citation and prestige, at least in this subfield.

Let me scream for a moment: THIS IS A SCANDAL!

45 Comments

  1. JFA says:

    “The recommendations of DICE on one hand, and almost all the world’s countries and elite climate scientists on the other, cannot both be right.” It really depends on what is being maximized. My sense is that elite climate scientists don’t have a good understanding of the economic consequences (or even think about their recommendations in terms of economic costs) of their proposals. I also get the sense that most climate economists might not be thinking about tail risk and catastrophic scenarios in the proper way. I would guess that the “insiders’ club” critique leveled against the DICE model in this post could be as aptly applied to the models “elite climate scientists” produce as well.

    • somebody says:

      The sociological forces of academia are definitely at work in both circles. They are both insiders’ clubs, and both are susceptible to the same kind of careerist exploitation.

      The difference is that we know the environmental economics circle is not just susceptible to being gamed, but has actually been gamed. The bad paper and the propagation of its influence, even after it has been criticized to death, have been in front of our eyes. I don’t know of a materialized scandal among elite climate scientists. To my layperson eyes, though, the methodology of climate science looks solid (physical models, empirical weather-station data) compared to that of environmental economics (regressions on other people’s regressions, all on the same few data points).

      • Dale Lehman says:

        I think the problems go deeper – and have a lot to do with the sociology of academia, over-specialization, and the influence of outside interests. Environmental economists have, for the most part, adopted the standard neoclassical economics paradigm – they are content with measuring costs and benefits by traditional economic rules. Many effects of global warming do not carry explicit market prices, so economists estimate these missing prices. Their methodology is often ingenious (I’m not being sarcastic). However, they follow the rule that economic measures are based on either maximum willingness to pay or minimum willingness to accept (the latter essentially a cost measure). Environmental scientists, not having the training of economists, rarely find these measures relevant. They would evaluate extinction of coral reefs vs. extinction of frogs on the basis of how they fit into their ecosystems (both extinctions being bad, very bad). Economists would evaluate these in terms of estimating the relative willingness to pay for their continued existence (or the relative willingness to accept compensation for their demise – don’t fret about the difference, it is not really the issue).

        They are indeed speaking different languages. And too often not speaking to each other. That has a lot to do with academic politics and specialization. It also has a lot to do with who stands to gain and lose from the debates and ensuing policies. The Tol and Nordhaus practices take place against this larger backdrop. I think the above description of the echo chamber is illuminating. But fixing the echo chamber will require fundamental changes in how academia operates – both internally and in its relationship with industry, policy makers, and the public. I’m not optimistic about the prospects. One of my favorite Aldo Leopold quotes: he is using the metaphor of the meadowlark, who can be mistaken for a pheasant, and says “There is a danger in the assuagement of honest frustration; it helps us forget we have not yet found a pheasant. I’m afraid the meadowlark is not going to remind us. He is flattered by his sudden importance.”

  2. yyw says:

    Why would anyone take predictions of the economic impact of climate warming decades into the future seriously, considering all the uncertainties involved? Even if the research question is to estimate the impact today, assuming everything else being equal, does anyone take a number like 0.236% per degree seriously? The whole exercise is an academic game (with potentially serious implications if the public takes it seriously), in the sense that not one person involved risks being embarrassed by their predictions turning out to be completely off.

    • Jonathan (another one) says:

      You have to take the predictions seriously because otherwise you can’t answer what really is the only question: will more human suffering be caused by stopping climate change or by accommodating to it? Granted, the point estimate is overprecise…. But any distribution of values will have a point mean. That doesn’t mean you have to *use* the point mean as opposed to using the whole distribution; indeed, that is a robust part of the debate.

      • This is a good point, I had this discussion here on the blog a while back and was summarily accused of trolling. No one wants to do what is needed for decision theory because it requires considering rather extreme events that don’t look anything like what anyone thinks is actually likely to happen.

        In Bayesian decision theory you take the complete distribution of what could happen under each action you might be able to take, you figure out a cost associated with each event across that distribution, and you choose the action which results in the lowest expected *cost* (or if you frame it in terms of benefits, the highest expected benefit, one is just the other multiplied by -1).

        The costs associated with what happens if we get 1C or 1.5C or 2C or 4C of warming are essentially *not even relevant to the calculation*.

        Suppose x is mean temperature in Kelvin (so that if the mean temperature today is around 74F or 23C, then it’s 296K or so)…

        You’re looking at integrate(cost(x)*p(x), x, 0, inf) (the lower limit is absolute zero, since x is in Kelvin)

        Obviously, the mean temperature on the earth isn’t going to drop to absolute zero, nor is it going to increase past the temperature of the corona of the sun… But it doesn’t matter because in those regions the p(x) drops so close to zero that they don’t enter into the calculation, and the proper thing to do is to calculate this integral… and if it diverges then it’s because p(x) is wrong, not because the limits of the integration are wrong…

        Let’s for the moment say that temperature dropping more than say 2 C causes p(x) to get very small… and the cost associated is not very large, because the main thing that would happen is we’d get more snow around the poles or something and there’s not THAT much economic stuff around the poles compared to more temperate regions… Whereas increasing temperatures tend to cause dramatic problems in the temperate zones where most economic activity is…

        Let’s change variables to “temperature change in C compared to the 20th century average”, which just involves shifting things… so now the limits of integration are something like -290 to inf (I haven’t looked up exactly what the 20th century avg is… it’s not that relevant; there is no probability mass down near -290).

        Now, we’ve decided that the left tail, below 0, drops off relatively quickly, while the cost climbs slowly until you get down into the “ice-age” territory which has so little probability density that it’s irrelevant… From this perspective we get very little error in the integral, if we do the calculation from x = 0 to x=inf

        On the other hand, the cost of rising temperatures, as temperatures rise into the range like say Venus… must asymptote to “the willingness to pay to avoid extinction” (and I’ll let Dale Lehman discuss the problems with this concept above… this is just to point out how even if you accept the methodology, the calculations are far wrong).

        Avoiding extinction should be at least worth the Gross World Product times say 100000 years…. Looking it up on Wikipedia, and remembering that we’re taking the methodology as given, the GWP in 2014 was about 7.8e13, so 7.8e18 is at least some kind of estimate of the asymptotic value of the cost function…

        So, let’s take the cost function as something like 7.8e18 * 1/(1+exp(-(x-8)/2)) reflecting the idea that for small changes, we don’t see too much cost, but things rise rapidly out in the range of 4 or 5 C and extinction occurs around say 15C…

        In R, you can plot this: curve(7.8e18*1/(1+exp(-(x-8)/2)),0,40)

        Now, let’s multiply this number by a probability density that’s just a simple normal(1.5, 2) truncated to the positive side, reflecting the typical estimates of a 1.5C or so rise in temperature, and a tail that we haven’t thought about much… and plot the resulting product, which is the integrand of interest:

        curve(dnorm(x,1.5,2)*7.8e18*1/(1+exp(-(x-8)/2)),0,10)

        So we should be considering temperature changes out into the range of about 10C to get an accurate integral…

        Now, let’s look at a wider, fatter-tailed distribution, since climate change is that kind of thing… let’s use a t distribution with 5 degrees of freedom, centered at the same 1.5C but with a scale of 4:

        curve(dt((x-1.5)/4, 5)/4, 0, 40)  # divide by the scale so it’s a proper density

        And look at the integrand:

        curve(dt((x-1.5)/4, 5)/4 * 7.8e18/(1+exp(-(x-8)/2)), 0, 40)

        To calculate this integrand correctly you’ll have to look at temperature changes out to about 20-30C, and at least half of the integrand looks to be above 9C

        At this point most people everywhere in academic climate debates will begin to sharpen the pitchforks, because 30C makes the ocean look like a hot tub and even 9C is truly catastrophic. So they will simply truncate this distribution to say 6C and hand wave about how “everyone knows that temperatures won’t get that high”

        Of course, 50M years ago, shortly after the extinction of the dinosaurs, the temperature was about 12-16C higher than today:

        https://en.wikipedia.org/wiki/Geologic_temperature_record#/media/File:65_Myr_Climate_Change.png

        I’m not saying the ocean is going to be a hot tub… I’m just saying: because the cost function is near 0 for epsilon temperature change, near 10^18 for temperature changes around 10-15C, and asymptotic for higher temperature changes, we need a *specific, believable* probability distribution function p(x) for the range 0 to 30C, one which we have good reason to believe. Otherwise what we are doing is pure and unadulterated WANKING.
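        Here is a self-contained numeric version of that comparison (Python rather than R, standard library only; the logistic cost curve, the normal(1.5, 2) and scaled-t(5) tails, and the truncation to positive temperature change are exactly the assumptions above, and the 60C upper cutoff is just chosen wide enough not to matter):

```python
import math

GWP_YEARS = 7.8e18  # 2014 gross world product times 100,000 years, as above

def cost(x):
    # Logistic cost curve from above: near zero for small warming,
    # asymptoting to the extinction-scale cost around 15C.
    return GWP_YEARS / (1 + math.exp(-(x - 8) / 2))

def norm_pdf(x, mu=1.5, sd=2.0):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def t_pdf(x, mu=1.5, scale=4.0, df=5):
    # Density of a shifted, scaled Student-t (note the 1/scale Jacobian).
    z = (x - mu) / scale
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + z * z / df) ** (-(df + 1) / 2) / scale

def expected_cost(pdf, lo=0.0, hi=60.0, n=60_000):
    # Midpoint rule; renormalizing over [lo, hi] mimics truncating
    # the distribution to positive temperature changes.
    h = (hi - lo) / n
    xs = [lo + (i + 0.5) * h for i in range(n)]
    mass = sum(pdf(x) for x in xs) * h
    return sum(pdf(x) * cost(x) for x in xs) * h / mass

print(f"normal(1.5, 2) tail: expected cost {expected_cost(norm_pdf):.2e}")
print(f"t(5), scale 4 tail:  expected cost {expected_cost(t_pdf):.2e}")
```

        Under these assumptions the fatter t tail substantially inflates the expected cost relative to the normal tail, which is the whole point: the answer is driven by the tail you assumed, not by the central estimate.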

        • Anonymous says:

          typo: … to calculate this *integral* correctly given the integrand….

          Dan

        • Terry says:

          “Avoiding extinction should be at least worth the Gross World Product times say 100000 years.”

          ?

          Almost no one seems to care much about declining fertility rates, so this is not a slam-dunk assumption.

          • We will not go extinct due to declining fertility rates, I assure you. Declining fertility is exactly what you expect as you reach carrying capacity. I expect some oscillations, and then for the human population to stabilize rather than continue growing exponentially.

            • Terry says:

              The probability of extinction is exactly zero in all possible scenarios? You know this … how?

              If the population drops by 50%, isn’t that the extinction of half the human race? Then, according to your calcs, avoiding such a fate “should be at least worth half the Gross World Product times say 100000 years”

              • If you want to argue for an alternative cost function, fine. in fact uncertainty in the cost function is fine too. The main point is we need to provide a logical argument, with reasons why the assumptions should be considered acceptable.

            • Wonks Anonymous says:

              In a Malthusian model, population expands until it reaches carrying capacity, but that doesn’t seem to fit well in the current world. We see wide variance in fertility, with the most fertile not being those with the most resources. So carrying capacity doesn’t seem to be the cause of declining fertility. I don’t think the existing decline will cause extinction though, as long as a subset continue to have above-replacement fertility.

              • A literal physical carrying capacity isn’t quite the right idea, it’s more like a political limit on housing and other resources. For example people in my general area (Pasadena area of CA) are moving out if they want kids (so that the school district has steadily declining enrollment) because the cost of a house big enough to house kids is too high. But the reason it’s too high is because of the high population relative to the available housing… We could physically build a lot more housing, but not politically, so in some sense the “carrying capacity” in terms of housing for example is more related to political negotiation than physical limitations. This is made worse by things like CA prop 13 which makes it financially unsound for “empty nesters” to move out of big houses.

                There are places in the developed world where we’re reaching a limit which is imposed by resource limits, but mediated through pricing and politics. Short of forcibly steamrolling people’s houses, and building condos on top, we aren’t going to get more housing in these areas soon. Moving to other areas involves giving up lots of other goods (like access to inexpensive shipping, telecom, and other infrastructure). So, we’re seeing an oscillation effect IMHO because of an imposed barrier to fertility through price feedback. For example, I think we will see a boom in fertility in the US after the baby boomers die and free up housing.

                I too am unworried about extinction from lack of fertility.

        • Bob says:

          What has taking the expectation got to do with anything?

          • In a Bayesian decision analysis you want your decision to be sensitive to all the possible outcomes that might occur… The only reasonable way to do this is to integrate. Further, Wald’s complete class theorem shows that the class of Bayesian decision rules dominates any other rules (gives uniformly as good or better decisions).

            So, if you have for example 3 options: do nothing, ban all coal production, and alter all power production to solar…

            you calculate the expected value of the cost under each scenario, and choose the scenario whose expected cost is least (or expected benefit is most).

            In such an analysis the cost function is obviously rather complicated and multi-dimensional, as is the outcome space over which you take expectations… and the probability model should reflect these things.

            This is something that takes a lot of effort to construct. But the alternative to good statistics is bad statistics… you don’t get to just ignore the issue, because that in and of itself is bad statistics…
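            A toy Monte Carlo version of that three-option calculation (all the numbers below, the warming distributions, the up-front costs, and the damage curve, are invented purely to show the mechanics, not to estimate anything):

```python
import random

random.seed(0)

# Hypothetical numbers, made up for illustration only: each action has an
# up-front mitigation cost and shifts the distribution of eventual warming.
actions = {
    "do nothing": {"upfront": 0.0, "warming_mean": 3.5},
    "ban coal": {"upfront": 5.0, "warming_mean": 2.5},
    "all solar": {"upfront": 12.0, "warming_mean": 1.8},
}

def damage(warming_c):
    # Toy convex damage curve, not a calibrated estimate.
    return 2.0 * warming_c ** 2

def expected_total_cost(spec, n=100_000):
    # Monte Carlo estimate of E[upfront + damage(warming)].
    total = 0.0
    for _ in range(n):
        w = max(0.0, random.gauss(spec["warming_mean"], 1.0))
        total += spec["upfront"] + damage(w)
    return total / n

for name, spec in actions.items():
    print(f"{name}: expected total cost {expected_total_cost(spec):.1f}")

best = min(actions, key=lambda name: expected_total_cost(actions[name]))
print("chosen action:", best)
```

            Whichever action minimizes expected total cost wins; with these invented inputs the intermediate option does, but the ranking is entirely an artifact of the numbers you feed in, which is exactly why the probability model and cost function deserve the scrutiny described above.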

            • Bob says:

              But it’s a once off event, and you don’t know the probability distribution, and you don’t know the cost function.

              • Welcome to every decision ever… Which house should I buy, what treatment should I get for my back pain, which new product should the company develop, where should we drill for oil, which person should I date and marry?

                First off it’s a mistake to think that “the probability” is a physical thing. It’s a state of information, and it changes as you acquire information, as well as when you use logical argument to deduce things that were not obvious.

                Second of all, the cost function too is not a single thing extant in the world, it’s a value judgement, and it too can change by argumentation and by logical deduction. For example, if some bad things are implied by an outcome but you don’t realize it, once you realize it your assessment of that outcome’s cost changes.

                The goal of a decision analysis is to arrive at a cost assessment that represents a reasonable consensus about the value judgement. This is actually easier than it usually seems, but it’s not easy, and it requires discussing values, which people often shy away from… how much do future generations matter? How much do third world citizens today matter? do whales or elephants matter, and how much? What about people today who have no intention of having children, on a thousand year scale do their desires matter? etc

                As for the probability function, we need to construct one. GCMs are one kind of input to that process, but other simpler models can offer a lot of value as well. What we need more than anything is a rate of decay of the tail events, the typical events in the core basically don’t matter much because the costs associated are so relatively low… unless you want to argue that extinction level events like a global nuclear war don’t matter much more than say the great depression, but I think few people will agree with that.

                So, it’s a hard problem, but the way forward is clear, to me at least. However that clarity seems rare across physical scientists, economists, and laypeople… I think the group most likely to understand my perspective here are civil engineers. This is our bread and butter. We design bridges and stadiums and hospitals specifically with costs in dollars and lives all the time… at least the PhD-level engineers who develop design methodology would agree, I think.

              • Justin says:

                “First off it’s a mistake to think that “the probability” is a physical thing. It’s a state of information, and it changes as you acquire information, as well as when you use logical argument to deduce things that were not obvious.”

                Actually, mistakes can be thinking that states of belief trump physical probability, since beliefs can be mistaken, even strongly held beliefs (for example, Jeffreys’ prior=0 on plate tectonics not being real, or Fisher’s on the effects of smoking for that matter), and that logical probabilities are always sensible, since logic only gives us internal consistency and it is easy to make up many logical things that don’t even exist in reality. If I shave a corner off a cube die, for example, it alters the probability from a physical change, not my belief in anything and not from logic.

                Justin

              • Shaving a corner off a die alters the physical properties of its motion: forces, mass, friction, etc. It doesn’t alter the “physical probability” because there is no such thing. However, one way to discover information is to run repeated experiments. Throwing the die 100 times can allow you to acquire information about the way in which the die will roll. It’s also conceivable you can acquire this information through computer simulation, or even using clever Newtonian physical arguments.

                However, it is also possible for a knowledgeable trickster to throw a die in such a way as to affect the outcome, just like Persi Diaconis’s coin flipping machine will reliably flip a certain outcome… This shows that the frequency of a given outcome is not a property contained in the atoms of the die.

              • When it comes to the real world, essentially all probabilities are states of information. Even quantum mechanics has this nature if you accept David Bohm’s formulation: particles are real and have precise positions and momenta that are manipulated by a nonlocal wave. The “apparent randomness” of QM under this theory is just a consequence of not knowing the precise positions and momenta etc.

                This becomes clear when you study cryptography. Even a computer generated stream of random numbers is only “random” if you don’t know the cryptographic secret. A good crypto random number generator is always collecting additional information that you are unlikely to know, like the timing of hardware interrupts, and mixing it into the seed information, so that the assumption “the attacker doesn’t know the secret” becomes true after a certain time.

                A coin flip or a die roll is just the same kind of thing as a crypto random number generator. It’s a physical device where the laws of physics ensure that trajectories that are nearly equal rapidly become different through time, so that to predict the outcome you must know “the secret” which is a large set of precise facts, such as the 6 dimensional initial position and orientation, and the 6 dimensional initial linear and angular momenta that you have to know to say 18 decimal places. Even if you assume you know everything about the environment the die will bounce off of, these initial conditions are essentially 12 × 18 × log2(10) ≈ 718 bits of secret information you need to know to predict the outcome reliably.
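                For what it’s worth, the standard digits-to-bits conversion (k decimal digits carry about k·log2(10) ≈ 3.32k bits) gives, for 12 quantities at 18 decimal digits each:

```python
import math

# 12 initial-condition quantities (3D position, 3D orientation, 3D linear
# and 3D angular momentum), each specified to about 18 decimal digits.
bits_per_quantity = 18 * math.log2(10)  # decimal digits -> bits
total_bits = 12 * bits_per_quantity
print(f"{bits_per_quantity:.1f} bits per quantity, {total_bits:.0f} bits total")
```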

                The concept of repeated experiments is useful when they are possible… rolling a die a lot can tell you it doesn’t have any kind of dramatic asymmetry of mass distribution for example. But for something like decision making about policy related to energy consumption and its effect on global warming… we simply don’t have the luxury of running the next 1000 years over and over again under different policies to see how often different outcomes occur. Instead we need to shape our state of information from things like simulation, more limited experiments, data collection on short term changes, etc.

            • Justin says:

              “Further, Wald’s theorem shows you that the class of Bayesian Decision Rules dominates any other rules (gives uniformly as good or better decisions).”

              That assumes the prior(s) you chose is (are) ‘correct’ in some sense. Of course, it (they) might not be; it (they) could be ‘brittle’.

              Justin

              • Wald’s theorem just says essentially that for any non-Bayesian decision algorithm there is a Bayesian algorithm that does as well or better. What this tells you is you can stop looking for non-Bayesian methods and focus your effort on choice of Bayesian model and cost function.

                Non-Bayesian decision rules are like driving with a blindfold on. It’s proven to be worse in every way than the general process of driving with your eyes open and alert. Of course it is still possible with your eyes wide open to steer directly towards a cliff… this doesn’t prove that blindfolds are a good thing, only that we should drive with our eyes open and also only let people drive who don’t have a morbid fascination with plummeting to their death.

      • jim says:

        “You have to take the predictions seriously because otherwise you can’t answer what really is the only question”

        But because the predictions aren’t likely to be anywhere near reality you can’t take them seriously.

        YYW is right: The real issue is that people are cranking out these inane projections with their gazillions of unrecognized and unsupportable assumptions and calling it good.

        It’s wrong to think of these projections as “economic” or “climate” projections. That’s absurdly reductionist. They’re projections of the evolution of entire societies, including both social trends and technological innovation. The lesson from history is that it isn’t possible to predict technological or social change.

        You don’t have to look back in history very far to see that the future isn’t predictable. Only 7 or 8 years ago the thought of the US becoming an oil power again was idiotic. Yet here we are, among the world’s top producers, and even now still held back from producing more by environmental regulations.

        This of course is aside from the fact that economists can’t even predict a simple recession until after it happens, so WTF would anyone believe a prediction 50-100 years into the future.

        People should know better by now than to put any stock in this crap

        • jim says:

          In fact, there is one thing that you can say about AGW/climate/environmental predictions: they’re all going to be wrong.

          • Sure, they’re all going to be wrong, just as if you predict 100 values for the mean quantity of change in the pockets of Los Angeles County residents at 9:40 a.m. local time on Friday, Nov. 1, none of those values will be exactly correct. But it’s still possible to do a good job of quantifying the probability of a certain amount of wrongness and the consequent costs, and to do a Bayesian decision analysis. It requires *completely different* methodology than what we have now, and it requires taking seriously possibilities that we don’t get anywhere close to today.
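            A minimal sketch of that kind of decision analysis, with entirely invented numbers: put a probability on each damage scenario, attach a cost to each available action under each scenario, and pick the action with the lowest expected cost. Everything here (scenario names, probabilities, costs) is hypothetical.

```python
# Hypothetical damage scenarios with prior probabilities (invented numbers).
scenarios = {"low": 0.6, "mid": 0.3, "high": 0.1}

# Cost (arbitrary units) of each action under each scenario:
# "wait" pays the full damages; "mitigate" pays an upfront 3.0 plus reduced damages.
costs = {
    "wait":     {"low": 1.0, "mid": 5.0, "high": 50.0},
    "mitigate": {"low": 3.5, "mid": 4.0, "high": 8.0},
}

def expected_cost(action):
    # expectation of the action's cost over the scenario prior
    return sum(p * costs[action][s] for s, p in scenarios.items())

best = min(costs, key=expected_cost)
# Note that the rare "high" branch drives the answer: waiting looks cheap
# only if the tail scenario is ignored.
```

The point of the sketch is that the output is a choice of action under uncertainty, not a single point forecast, so "the forecast will be wrong" is not by itself an objection.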

            • jim says:

              You’re right to the degree that you know the system. But the system is going to morph into forms we can’t conceive.

              Let’s go back to, say, 1900. We have a few cars bobbing around, but they’re still a novelty. Oil production has barely been around 30 years and Standard Oil still makes most of its money selling kerosene for gas lamps (I recently read a biography of Rockefeller – absolutely fascinating!). The Wright Brothers haven’t even heard of Kitty Hawk yet, so it’s not yet known if motorized flight will ever happen (I read McCullough’s book on the Wright Brothers too).

              Now, who, in 1900, would have imagined submarines cruising in force in the Atlantic, only about 12 years away? Or radar? Or gigantic bombers, only a few decades away? Or that there would be concrete freeways in arcs through the sky in every major city in the US 50 years hence? Jetliners screaming through the sky carrying people by the millions?

              • All excellent examples of why this kind of model needs to consider potentially outlandish tail events, in BOTH directions… we could invent a photoelectric organic material with 25% efficiency for $.02 / watt manufacturing cost next year… and we could have a massive methane meltdown… tail models are hard but not impossible

              • jim says:

                “we could invent a photoelectric organic material with 25% efficiency for $.02 / watt manufacturing cost next year… and we could have a massive methane meltdown”

                Yet these are things we know. What about, in the parlance, the “unknown unknowns”? How do you imagine those?

              • You don’t need to imagine what they are, just what their effect is on probability and cost.
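                One way to see that point, with hypothetical numbers: two lognormal damage distributions with the same median but different tail spreads imply very different expected damages. "Unknown unknowns" can be represented as a wider tail without ever naming the mechanism.

```python
from math import exp

# Lognormal(mu, sigma): median = exp(mu), mean = exp(mu + sigma**2 / 2).
# Same central guess (median damage = 1 unit), different tail uncertainty.
mu = 0.0
sigma_thin, sigma_heavy = 0.5, 1.5   # hypothetical tail spreads

mean_thin = exp(mu + sigma_thin**2 / 2)    # expected damage, thin-tailed view
mean_heavy = exp(mu + sigma_heavy**2 / 2)  # expected damage, heavy-tailed view

# The medians are identical, yet the heavy-tailed view nearly triples the
# expected damage: the tail assumption, not the central estimate, dominates
# the cost side of the decision analysis.
```

So the unimaginable surprises enter the model only through their aggregate effect on the tail, which is exactly what the comment above is claiming.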

              • jim says:

                Their effect on probability of what? That the answer is “42”? :)

                Why are we modelling anything? The historical record is a better predictor than any model could possibly be. It says this:

                “Vanishingly small chance humanity will be destroyed or catastrophically reduced. Negative events on the scale of the Black Death or Amazonian Depopulation are possible and may have greater impact on specific geographical areas, ethnic groups or cultures. In the end human technology will improve and overcome any negatives, making the human condition better than before. These events will occur over a few decades or so to a few centuries or so.”

                You can’t do any better than that with a model. All you’ll do with a model is fool yourself into thinking you know more than you do.

              • That *is* a model… and your argument for it is nonexistent

        • Jonathan (another one) says:

          “Taking them seriously” doesn’t mean “believing them.” In fact, the best argument for accommodating global warming rather than thwarting it is that if there is no global warming, there’s no action to take. Accommodation has the advantage of allowing some uncertainty to resolve before choosing action. The more uncertainty, the more likely accommodation is the better strategy.

  3. Francis says:

    Wouldn’t California’s experience over the last decade start to provide some empirical data? On the one hand, GDP is up. But on the other, losses due to fire and drought are up too.

    • jim says:

      Interesting that, as California’s board-foot production of timber has declined, fires have increased. Yeah, yeah, I know the old saw about correlation. But these aren’t unrelated variables.

      California lumber production (millions of board feet):
      2005:  2,590
      2015:  1,957

      Some sources suggest the decline since 1990 is quite dramatic, but I can’t dig up data for that so I’ll go with what I can find.

      So California’s GDP could be larger, with smaller fire risk, if it were producing more of its own lumber.

  4. Martha (Smith) says:

    Andrew said,
    “What’s going on here? Tol’s part of the club. People in the club think that other people in the club are absolutely brilliant. Tol’s work must be wonderful, right? He gets so many citations. This is an interesting example because it goes beyond the political left and right: it’s more of a question of the in’s and the out’s. These guys are on the inside, trapped in their own bubble.

    Again, it’s not about Tol or Nordhaus in particular: they’re just examples of the larger problem of the circularity of scientific citation and prestige, at least in this subfield.”

    Maybe TTWWADI (That’s The Way We’ve Always Done It) needs to be supplemented by TTWTLBGDI (That’s The Way The Last Big Guy Did it).

  5. Dan F. says:

    It starts with the fundamental premise of all current methods for nonexpert evaluation of research productivity: that more citations means better work. This is obviously false, and everyone knows it; the evidence is that everyone normalizes for area of research, and no one expects high-energy physics papers to get as many citations as social-science papers, however good. Why? The accessibility barriers are higher. Put in more condescending terms, one has to know a lot and be very smart even to read most papers in high-energy physics, and the same is not true for the social sciences. More seriously, it might simply be that citations measure a host of social phenomena, something like likes on Instagram, and to a first approximation mostly detect accessibility. Maybe there is a threshold – there should be some citations before one starts judging – but certainly the basic operating hypothesis, that more citations means better work, is more questionable than most of us seem to think.

    Moreover, the premise seems to rest on a basic logical fallacy, that of inverting implication. While it’s true that the majority of really good researchers are highly cited (contextualized for area!), it doesn’t follow that the majority of the highly cited are good researchers. Of course, this pattern of inference is the essence of Bayesian reasoning (if A makes B likely and we observe B, that raises the posterior P(A|B) above the prior P(A)), and there’s nothing wrong with making it the basis of inference, so the problem is really in how we set our prior. We are far too optimistic that lots of citations means someone is good.

  6. Ian Foster says:

    When I worked briefly in this space, my favorite example of the “echo chamber” was a paper by Dr. X which referenced an MS thesis for an important parameter. I eventually managed to get hold of the MS thesis, and was disillusioned to discover that it gave as the source of the parameter “expert elicitation from Dr. X.” In other words, this parameter, like many in this space, was made up from whole cloth, and Dr. X had gone to some lengths to hide that fact.

  7. Terry says:

    Careful.

    Tol is a bigwig at the IPCC. (He was a coordinating lead author for the IPCC Fifth Assessment Report Working Group II: Impacts, Adaptation and Vulnerability.) The IPCC is widely regarded as the “gold standard” when it comes to climate science, and attacking the IPCC is a hallmark of climate deniers.

    Tol is also a pioneer in applying insights from the replication crisis to climate science: he has argued that the IPCC is alarmist because it favors the initial scientific papers published on an issue rather than the follow-up papers, which, he says, tend to “pooh-pooh the initial drama.”

    https://en.wikipedia.org/wiki/Richard_Tol

    • Peter Dorman says:

      First, the primary criticism of IPCC synthesis reports is not that they are alarmist, but that they sidestep concerns for which the evidentiary base is uncertain or contested, which is exactly the opposite. This can be seen, for instance, in their projections of sea-level rise based on thermal expansion while setting aside potential instability of ice sheets. The quantitative passages consider the first and leave out the second. Tol is entirely wrong about this.

      Second, the participation of researchers in the IPCC structure does not necessarily translate into the incorporation of their work in the core conclusions. If you look at WG II you will see IAM/monetization output sequestered in a box, from which it has no input into the rest of the report. It’s true that IAMs undergird the mitigation pathways literature, but these are just scenarios of mitigation measures and atmospheric carbon concentration trajectories; there is little to no monetization of costs. Tol, and even Nordhaus, are not part of the analytical mainstream of this WG.

  8. John Williams says:

    If you are interested in this thread, you may be interested in Ruth DeFries et al. (2019), “The missing economic risks in assessments of climate change impacts,” available via Google Scholar.
