Gremlin time: “distant future, faraway lands, and remote probabilities”

Chris Wilson writes:

It appears that Richard Tol is still publishing these data, only now fitting a piecewise linear function to the same data-points.
https://academic.oup.com/reep/article/12/1/4/4804315#110883819

Also still looks like counting 0 as positive, “Moreover, the 11 estimates for warming of 2.5°C indicate that researchers disagree on the sign of the net impact: 3 estimates are positive and 8 are negative. Thus it is unclear whether climate change will lead to a net welfare gain or loss.”

This is a statistical mistake on Tol’s part: using a distribution of point estimates to make a statement about what might happen. To put it another way: suppose all 11 estimates were negative. That alone would not mean that it would be clear that climate change would lead to a net welfare loss. Even setting aside that “welfare loss” is not, and can’t be, clearly defined, the 11 estimates can—indeed, should—be correlated.
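To see why the correlation matters, here’s a quick simulation (illustrative numbers only, not Tol’s actual data): if the 11 published estimates share a common error component, even unanimous agreement on sign is weak evidence about the sign of the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration (not Tol's data): published estimates tend to
# share methods, data sources, and assumptions, so they share a common
# error component rather than being independent draws.
true_effect = 0.5      # suppose the true welfare impact is actually positive
n_estimates = 11
n_sims = 100_000

shared_bias = rng.normal(0, 2.0, size=(n_sims, 1))       # common to all 11
noise = rng.normal(0, 0.5, size=(n_sims, n_estimates))   # study-specific
estimates = true_effect + shared_bias + noise

# How often do ALL 11 estimates come out negative despite a positive truth?
all_negative = (estimates < 0).all(axis=1)
print(f"P(all 11 negative | true effect = +0.5): {all_negative.mean():.2f}")
```

With these made-up settings, all 11 estimates come out negative a substantial fraction of the time even though the true effect is positive, which is exactly why a tally of signs is not a statement about what might happen.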

Tol’s statement is also odd if you look at his graph:

As Wilson notes, even if you take that graph at face value (which I don’t think you should, for reasons we’ve discussed before on this blog), what you really have is 1 positive point, several points that are near zero (but one of those points corresponds to a projection of global cooling so it’s not relevant to this discussion), and several more points that are negative. And, as we’ve discussed earlier, all the positivity is being driven by one single point, which is Tol’s own earlier study.

Tol’s paper also says:

This review of estimates in the literature indicates that the impact of climate change on the economy and human welfare is likely to be limited, at least in the twenty-first century. . . . negative impacts will be substantially greater in poorer, hotter, and lower-lying countries . . . climate change would appear to be an important issue primarily for those who are concerned about the distant future, faraway lands, and remote probabilities.

I’m surprised to see this sort of statement in a scientific journal. “Faraway lands”?? Who talks like that? I looked up the journal description and found this:

The Review of Environmental Economics and Policy is the official journal of the Association of Environmental and Resource Economists and the European Association of Environmental and Resource Economists.

So I guess they are offering a specifically European perspective. Europe is mostly kinda cold, so global warming is mostly about faraway lands. Still seems kinda odd to me.

P.S. Check out the x-axis on the above graph. “Centigrade” . . . Wow—I didn’t know that anyone still used that term!

143 Comments

  1. Oliver C. Schultheiss says:

    Well, if at least he had written “centigrade” — the graph even manages to misspell that as “centrigrades”.

    • Andrew says:

      Oliver:

      As the economists like to say, incentives matter. As long as the author of these papers keeps getting them published, I guess he has little motivation to change. What the journal editors are thinking, I have no idea.

  2. For what it’s worth, this paper has many, many problems beyond the perennial issues with its data set. I began examining the paper and discussing a number of these errors, but the paper was such a mess I wound up giving up. You can see the beginning of my examination here:

    http://www.hi-izuru.org/wp_blog/2018/03/how-do-you-mess-that-much-up/

    Part of the reason I gave up is I realized all of the errors I was finding were also present in a 2015 paper Richard Tol had published, meaning for three (now four) years Tol had been making the same stupid mistakes. That’s discouraging enough to make me lose motivation. Tol’s papers don’t stand up to the slightest scrutiny, but he keeps getting paid to publish them with the same material over and over, no matter how much is wrong with it.

    Oh well. I do find it somewhat fascinating to see just how much Tol can manage to screw up. If nothing else, it makes one wonder what kind of review is going on at these journals.

    • Tom M says:

      From the link above: “If you poke around in the data file a bit, you can find a column for the T^2 model used to come up with this result. It multiplies each data point Tol uses by the coefficient of his model […] This seems unremarkable unless you think to check what grid cell M75 in his data sheet is.”

      You can screw up calculations with any software, but Excel really excels at it.

      Also, I think this is the best example I’ve seen of (Andrew’s version of) Clarke’s law: Any sufficiently crappy research is indistinguishable from fraud.

  3. Terry says:

    Why does Tol need to fit any trend lines at all? It is just a handful of points. Just talk about the handful of points. “Little or no effect for small temperature changes (less than 2 degrees), negative effects above 2 degrees, negative effects get larger the larger the temperature change.” Is that so hard?

    Then plot the confidence intervals from the few studies that have them and say something about how it might be much worse if there is a large temperature increase.

  4. Steve says:

    There once was a Princess in a faraway land called Alabama. Everything was poor and covered with red clay, but then climate change happened. Florida fell into the sea, and the Princess suddenly had acres of beach front property. Everyone lived happily ever after. (Even in Faraway times, climate change ends up positive.)

  5. Anoneuoid says:

    Reminds me of those “dot plots” the federal reserve makes every couple months.

  6. Chris Wilson says:

    The other thing I find horrible about Tol’s work here is the message of ignoring ‘tail risk’ with climate change: pretending that there’s some smooth, linear damage function, all the way up to 6C of warming, that we should all take seriously. If there’s any trend in updates to climate science over the past 20 years, it is the articulation of a variety of positive feedbacks in the climate system that are in aggregate quite alarming. That, and an accelerating timeline of impacts… For example, somewhere between current warming and 6C of warming we may be talking about eventual total loss of the Greenland and West Antarctic ice sheets, which could quite plausibly involve multi-meter sea level rise within the century. We’re gonna struggle with 1m of SLR, let alone 5!
    https://www.atmos-chem-phys.net/16/3761/2016/acp-16-3761-2016.pdf
    Thinking of how many assets are coastal and/or near-coastal, and how much non-linearity and complexity dominates our societies and economies, there is no way we are plausibly only talking about -7.5% on some scale of economic well-being by 2100. Most other approaches to estimating climate change ‘damage function’ I’ve seen at least specify something exponential, even if they also seem to ignore the full distribution of risks.

    • Anoneuoid says:

      I think you go too far in the other direction.

      “Loss” of those ice sheets would uncover an entire new unpopulated continent bigger than the US. There would probably be a worldwide drop in energy/resource/land prices and an associated economic and scientific boom. Not to mention the scientific and (possible) cultural value of seeing what is under all that ice.

      The shame would be letting all that uncontaminated freshwater melt into the sea instead of harvesting it and distributing it around the world wherever it could be best used.

      • sure that’s a plausible story about Greenland itself… but then the homes and businesses and infrastructure of maybe 60% of the world will be underwater or otherwise unsuitable with a 5m rise, so we need a global perspective.

        • Anoneuoid says:

          I was thinking of Antarctica. And I think these types of speculations are largely a waste of time because there are so many unknowns.

          I don’t know where the estimates in that chart are coming from but, eg, there was that recent post here about global warming and cultivating grapes. I looked at the most recent paper from that group and they had tested what would happen if you increased temperatures (to the point expected after doubling CO2) without the increased CO2 (plant food). No doubt it is dozens or hundreds of little pieces of questionable info like that incorporated into these estimates of the economic effects.

          I also notice that this analysis assumes that without climate change there would be no “welfare-equivalent income change” for other reasons. It is possible all the effects on that graph are well within the uncertainty on that anyway. I mean, the US treasury expects to be using 100% of new debt to pay interest on the old debt in only 4-5 years. I can imagine that a deterioration of this situation could have a much larger effect than anything I see in that graph. What was the “welfare-equivalent income change” during the Great Depression, or 2000-2009, etc.?

          I haven’t seen anything that convinces me that anyone knows how it is going to work out. However, in general I would consider the worst that could happen and would advise to prep for all types of natural disasters. Store food, store fuel, store clean water, build bunkers, decentralize, etc.

          • Chris Wilson says:

            Yes, but we do not need accurate forecasting to justify a full klaxon emergency mobilization. We just need a precautionary principle suitable for the nature of the risk involved. The next layer is to specify a distribution over outcomes that are even remotely plausible, which in this case ranges from total human extinction by 2100 at one edge to modest damage at the other.

            • It’s an interesting case for Bayesian decision theory. Since we have so much uncertainty, we need to consider cases that might cause approximately infinite badness such as total human extinction. Bayesian decision theory basically says that if there are actions you can take that eliminate the possibility of the near infinitely bad outcome, you should put all your effort into that. Unfortunately this runs into meta-problems: no one believes that the extreme models are well calibrated. Like for example I might say that the voltage on a given circuit is normal(10v,2v) but this implies some small probability for negative voltage. If I know for sure that zero or negative voltage on this circuit isn’t a possibility, I know my model is wrong… but I might use it anyway, because I’m never going to see those pathological cases anyway…

              but suppose when the voltage goes negative it triggers a nuclear explosion…. now I need to have a very believable model for the far tail of the real-world distribution or I have to treat my situation as if it might trigger a nuclear explosion, and shut down the whole machine until I can be absolutely sure that the circuit is shielded and protected with multiple diodes and etc.
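              To make the voltage example concrete, here’s a minimal sketch (the catastrophe cost is a made-up number; the scipy call just evaluates the normal CDF):

```python
from scipy.stats import norm

# Daniel's circuit example: a normal(10 V, 2 V) model assigns a small but
# nonzero probability to a nonpositive voltage.
p_nonpositive = norm.cdf(0, loc=10, scale=2)   # P(V <= 0) = Phi(-5)
print(f"P(V <= 0) = {p_nonpositive:.2e}")

# Harmless if nothing bad happens at V <= 0. But if that event triggers a
# catastrophe with cost C (made-up number), the expected loss p * C can
# still dominate the decision for large enough C.
C = 1e12
print(f"expected catastrophe loss = {p_nonpositive * C:.2e}")
```

              The point is not the particular numbers: it is that the model assigns a definite nonzero probability to the pathological region, and whether that matters depends entirely on how bad the outcome there is.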

              • Chris Wilson says:

                I completely agree Daniel. I wish we had more emphasis on Bayesian decision theory with a range of ‘utilities’, and less on model selection via log likelihood or whatever.

              • I find this fork bewildering. There is nothing in any aspect of climate science which suggests humans might go extinct due to climate change, much less that it could happen in less than a century’s time. Creating a Pascal’s Wager scenario is always suspect, but it is bizarre when done for something there’s absolutely no reason to believe might be true (and many reasons to believe isn’t true).

              • Doesn’t need to be extinction to dominate the decision. Let’s say there are 7B people, and 5m of sea level rise, desertification of rainforest, and a few other things occur, with the result that, say, 6B of them die and the rest live like current African dirt farmers… the cost is so enormous that it still dominates the decision if it has more than an epsilon of probability. So now we need a good model for the deep tail of a plausibility distribution… a model we don’t have. GIGO…

              • Put another way, I think this is a good scenario for the skepticism that OJM often has for the idea of probability distributions over models.

                If you can credibly list a series of models which cover the range of what you really are likely to want to consider, you can run Bayesian analyses and come up with meaningful decision theory.

                But if you know going into the whole thing that you haven’t even got close to a meaningful model, then any analysis is GIGO. Decision theory isn’t really any better than “do what I really feel deep down inside after praying to the sun god” or whatever. You know ahead of time that you don’t really trust your models, so why should you trust the decisions that come out of some formal process? you shouldn’t.

              • Chris Wilson says:

                Brandon, I suggest you are not very well in touch with where the climate science community is at, at this point. Daniel, I agree (again :)). I think this is logically an argument for a strong form of precautionary principle approach. The probability distribution over model/forecast space is going to be dubious at best for quite some time.

              • Take Richard Tol’s graph above. Ignore his own one data point up at the top. Now consider the other estimates of the “welfare change in percent” if we get a 6K increase in global mean temperature: someone thinks it will reduce human welfare by only about 6%?

                Trump could cut human welfare in the US by 6% just by doing something stupid with Chinese trade on his own. I gotta believe that inundating the coastline of the US over a few decades would cut US human welfare in half. It won’t be totally devastating, but it could easily put us back to, say, pre-WWII levels of development as we lose silicon fabs and power plants and tens of millions of homes and universities and etc. So again, GIGO.

              • Daniel, if there were any scenario in which 75+% of the human population would be eliminated by climate change, you might have some point. There isn’t though. I doubt scientists could even concoct a scenario in which sea levels rose by five meters in 50 years like you suggest. It certainly isn’t a scenario considered remotely plausible by the climate science community.

                Chris, I suggest people who rely on responses in the form of, “I suggest you don’t know what you’re talking about” probably don’t know what they’re talking about. If anyone actually believes climate science suggests five meters of sea level rise in the next 50 years is possible, much less plausible, I’d advise them to get intimately familiar with things like the IPCC reports so they can cite exactly what references support that idea.

                I’d also advise them to write a bunch of angry letters to climate scientists around the world for failing to tell everyone the world as we know it may effectively end in 50 years. After all, if you look at what the IPCC reports say, it’s clear mainstream climate science doesn’t support such outlandish claims.

              • As a side note, I find it weird to have people suggest I don’t know what climate science says about the dangers of global warming on this thread, of all threads. This post is about a topic I’ve examined and discussed in depth, possibly more than anyone else in the world. Now, this post is about a segment of Richard Tol’s work which is only about the *economic* impacts of climate change, but I’m pretty sure, “The continents as we know them will no longer exist” is something which would have come up in the economic discussions.

                I know a handful of alarmists spread a meme a decade or so ago the IPCC is overly conservative and completely underestimating the risks of sea level rise, but that was a meme spread by something like a dozen people. The IPCC, ostensibly representing climate science, responded by publishing new reports which continued not to support such alarmist claims. The IPCC published its latest report less than a year ago with projections nowhere close to what’s being claimed here. I have the report chapters saved in a folder on my desktop (part of another strange story).

                If one wants to claim mainstream climate science is wrong, so be it. But it’s really weird to have people say you don’t know what climate science says simply for repeating what the IPCC position on an issue is.

              • Anonymous says:

                Brandon,

                Chris and Daniel cite to Trenberth, Oreskes, Hansen, and Mann as being more authoritative than the IPCC’s reports (they are the scientists in the links they provide to support the assertion that the IPCC is “hugely conservative”). Do you agree? Wasn’t there something about Mann winning a Nobel prize? Should that be factored in somehow?

              • Anonymous says:

                Brandon,

                I see you answered my question in your post above. So nevermind.

              • we are explicitly talking about what the tails of the distribution look like. of course no one thinks 75% of everyone are very likely to die, but exactly how very unlikely it is and exactly how to evaluate such costs is the question, because the Bayesian decision theory idea doesn’t just truncate the distribution, it integrates all the way out into the tail, out to 250 meters of sea level rise or whatever the max physically possible is if you melt every bit of ice into the ocean.

              • Chris Wilson says:

                Brandon, “This post is about a topic I’ve examined and discussed in depth, possibly more than anyone else in the world.”
                Really? OK, moving on… I’m not interested in an argument by authority here, and will not engage further with your intended pissing match.
                Real quick: did you read the Hansen et al. paper I posted? Evidently not, because that is one of the sources for a physically plausible multi-meter SLR by 2100 scenario. Michael Mann discussed that paper favorably, although to my knowledge he hasn’t “endorsed” it in the sense of saying “yes, this seems like the most likely thing to happen”. In short, you do not have to look hard to find leading climate scientists who think the IPCC is overly conservative in their risk assessments.

                Hansen et al:
                https://www.atmos-chem-phys.net/16/3761/2016/acp-16-3761-2016.html

                For those interested, mainstream consensus risk assessments are plenty concerning already, even ignoring scenarios like Hansen’s:
                https://www.noaa.gov/news/new-federal-climate-assessment-for-us-released

                IPCC Special report on 1.5C of warming:
                https://www.ipcc.ch/sr15/chapter/chapter-3/

              • Anonymous, I know my comment already addressed your question, but to be clear, Michael Mann did not win a Nobel prize. He claimed he did, multiple times, but that was false. What actually happened is the IPCC won a Nobel prize, and Mann took credit because he was an author for an IPCC report. So no, I don’t think Mann stealing credit for the IPCC’s Nobel prize means he should be trusted over the IPCC :P

                Daniel, it’s fine to talk about the long tails, but that discussion should be based in some sort of science or research. Your talk of extinction or the elimination of 6 billion people due to five meters of sea level rise in 50 years (or anything similarly extreme) are not. When the probability of an event drops to 0, the curve is truncated.

                Chris, I didn’t make an argument by authority. I cited my personal knowledge/experience as a reason I find it weird to have people suggest I don’t know things, but that’s not an argument by authority. I didn’t ask anyone to trust my opinions because of my authority. Strangely enough, you’ve appealed to authority yourself.

                As for the paper you cite, I’ve openly discussed how a handful of people have spread a meme the IPCC is overly conservative on this issue. I’ve also discussed how the IPCC, ostensibly representing the mainstream view, rejects this claim, as recently as last year, half a decade or more after the claim was originally made. Claiming I must not have read this one paper is no better than your previous claim that I don’t know what I’m talking about. It’s based on nothing more than you rejecting mainstream views to cherry-pick alarmist drivel which is in no way mainstream.

                Side note, Michael Mann committed fraud and became famous for a study whose results depended entirely upon giving a tiny subset of his data from a single region so much weight it was the sole source of his supposed hemispheric temperature reconstruction. And he’s even admitted his study was dependent upon a tiny subset of his data, so it’s not even an issue in dispute. Anybody who cites him as a leading climate scientist who should be treated with credibility does a disservice to the global warming movement.

              • the point where the curve drops to zero is the point where it’s logically impossible to exceed. that would be where we melt every bit of ice and then heat the ocean uniformly to 99C perhaps.

                everything short of that needs some nonzero probability. it’s fine to say it’s very small, but we need to actually have quantitative models and then believe there is a real reason why that quantitative model is right in some sense. the problem is, we don’t have that, we have dismissive comments such as those you made here

              • Daniel, what you’re doing right now is demanding I prove a negative. You’re saying, “You can’t prove X won’t happen so we have to consider it!” That’s logically fallacious for a host of reasons, but most notably, we can rule out lots of things beyond what you claim we can rule out. There is not a non-zero probability the heat of the ocean will uniformly reach 80C, much less 99C. Even if one didn’t realize such a thing is physically impossible,* the idea we have to account for such extreme scenarios is ludicrous. All human life would have ended long before then.

                Demanding people account for the outlandish, alarmist scenarios (which lack any scientific foundation) you’ve described on this post can only sabotage efforts to combat global warming. These sort of exaggerations are a major reason so many people dismiss global warming advocacy. This is not how science is or should be done.

                *I can’t stress enough how impossible this idea is. There are at least a hundred different reasons this is physically impossible

              • I hate double posting, but I really cannot stress enough how crazy the idea we might need to consider the possibility of oceans reaching 99C is. Leaving aside everything else, what possible reason could there be to stop considering scenarios at 99C? What is so special about 99C that it matters while 100C does not? Nothing. 99C was chosen completely arbitrarily. Any other extremely high value could have been chosen just the same.

              • Chris Wilson says:

                Brandon, I have no idea what “meme” you are referring to. The Hansen paper was published in 2016, so clearly that’s not what you mean to say the IPCC refuted a decade later. Your argument rests on you having authoritative access to the climate science community, in particular greater than Michael Mann, James Hansen, and numerous others. Let me be blunt: where is your scholarly corpus in climate science? Do you have peer reviewed pubs? Conference proceedings? I see a journalistic book you published claiming to hew a middle ground between skeptics and ‘warmists’. Are you a journalist? If so, there is no dispute here – you are free to cover the issue as you see fit, and I am free to disagree and continue to represent what I see as a better assessment of scientific colleagues work.

              • 99C was chosen because 100C is the boiling point of water.

                You don’t seem to understand how Bayesian probability works.

                I’m not disputing that reaching 98C is ridiculously unlikely. I’m just saying we can’t say it’s 0; we have to have a *specific numerical quantity* assigned to all the sea level rises between -inf and inf. Some of those values can be actually equal to 0, but only if we know *with logical certainty* that it’s impossible. For example, we know with logical certainty that the temperature can’t reach less than 0 kelvin, basically by definition of temperature.

                The problem is then simply to assign p(x) a non-zero function for all sea level rises between say a -20m drop and a 100m rise, or whatever the *mathematical logical* limits are.

                Sure, you can assign 1e-105 to sea level rise of 250m in the next 50 years… but you can’t assign 0, and you have to assign specific numbers between 0 and say 10m and they *can not be zero* because logically it’s possible for sea level to rise 10m in the next 50 years.

                So now you simply need to describe *in a way that is convincing to others* what are good reasonable believable probabilities for all possibilities, and what are the costs associated with those outcomes.

                It’s not even *that* hard, it’s just that you’ve denied even any kind of understanding of what probability is for or how it works.

                When two functions are multiplied together Badness(x) * probability(x), if Badness(x) grows extremely quickly, then probability(x) has to *decline even faster* and you can’t just wave your hand and say probability(x) = 0

              • To put it another way, we can both agree it’s fine to truncate the probability distribution at p(x) associated with bringing the entire ocean to the boiling point, because we both know that the probability we should assign there is easily so small as to be basically 0. Like I’d assign total probability to reach a mean ocean temperature greater than 45C at 1e-300… fine.

                between 4 and 5 meters of sea rise in the next 50 years? No such luck. You can make a very specific quantitative argument about why we should assign some particular p(x) function to sea level rise in the range 0 to 25 meters, but you will not convince me that p(x)=0 above 5m of rise is a good model. You need to convince me that you have a model, that it’s meaningful, and that badness(x)p(x) declines fast enough for x > 4m of sea level rise that it doesn’t matter, even though badness(x) is probably something we should assign like 1M dollars per person alive today, resulting in 7e15 dollars; so if your probability is anything greater than 1e-18, we are going to have to keep it in the equation.
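                A rough numerical sketch of this badness(x)*p(x) point, with entirely made-up cost and tail models: the expected-loss integral can be dominated by tail values of x even when both models put most of their probability mass well below 1m.

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative sketch only: all numbers are made up. Expected loss is the
# integral of badness(x) * p(x) dx over sea-level rise x; how much the
# tail contributes depends entirely on which tail model you adopt.
x = np.linspace(0.01, 25.0, 10_000)   # sea-level rise in meters
dx = x[1] - x[0]

def badness(x):
    # hypothetical cost curve in dollars, growing steeply with rise
    return 1e12 * x**3

# Two priors with the same median rise (~0.5 m) but different tail weight.
thin = lognorm(s=0.5, scale=0.5).pdf(x)   # light-tailed model
fat = lognorm(s=1.5, scale=0.5).pdf(x)    # heavy-tailed model

for name, p in [("thin tail", thin), ("fat tail", fat)]:
    total = np.sum(badness(x) * p) * dx
    tail = np.sum(badness(x[x > 4]) * p[x > 4]) * dx
    print(f"{name}: E[loss] = {total:.2e}, share from x > 4 m = {tail / total:.1%}")
```

                Under the heavy-tailed model most of the expected loss comes from rises above 4m, so truncating p(x) to zero there changes the answer by orders of magnitude; under the light-tailed model it barely matters. That sensitivity is the whole argument.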

              • Put another way, in an order of magnitude way we’re talking about the calculation being sensitive to probabilities above 10^-20 or so, and that’s something like around 1 over the number of stars in the observable universe.

                https://www.space.com/26078-how-many-stars-are-there.html

                So we really need to consider ridiculously small probabilities, such as “if we needed to push a button and destroy one star in the universe, and we chose it at random, how sure would we be it wasn’t our own star?”

                The calculation for sea level rise in the range of “greater than 4 m of rise” is sensitive to the specific numbers we assign to specific rises even though they’re ASTRONOMICALLY small probabilities, because any reasonable calculation of how bad those outcomes could be RISES astronomically in that range.

                you can’t just truncate it to zero.

              • Chris, you say you have no idea what meme I am referring to, as “The Hansen paper was published in 2016, so clearly that’s not what you mean to say the IPCC refuted a decade later.” This shows that, for all your claims that I don’t know what scientists say or haven’t read papers, you are in fact the one who is ignorant of basic facts of the situation.

                That 2016 paper was not the first paper in this discussion. It is a continuation of an argument which had been going on for years. A quick Google search shows James Hansen was saying this at least as far back as 2006, 13 years ago, where he said:

                “We have,” Hansen says, “at most ten years — not ten years to decide upon action, but ten years to alter fundamentally the trajectory of global greenhouse emissions” — if we are to prevent such disastrous outcomes from becoming inevitable

                There are more topical quotes in that link, but I thought it worth sharing this one since back in 2006 Hansen said we had until only 2016 to take drastic action or disastrous results would become inevitable. We didn’t take such action. It’s now 2019. According to Hansen of 2006, we’re already screwed. It’s interesting to me his 2016 paper doesn’t revisit these claims.

                Regardless, you’ll find in that link the same claims about the IPCC being too conservative in its projections of sea level rise as you’ll find in the 2016 paper. That you’re only aware of a 2016 paper, not any of the many publications Hansen released making similar claims from at least 2006 on doesn’t make the claims new. All it means is you’re unaware of basic information regarding the topic you’re discussing.

                For the record, I don’t claim to have special knowledge or access like you portray. I’ve simply paid attention to the global warming debate. That means I learned lots of things, including the fact publicity hounds like Hansen and Mann often make claims that go far beyond what climate scientists in general accept. Incidentally, I’ll note you haven’t addressed the question you were asked about whether or not it’s true Mann falsely claimed to have won the Nobel prize. I’d say doing things like that is an indicator a scientist may not be trustworthy.

              • Daniel Lakeland, 100C is the boiling point of pure water with a certain amount of pressure being applied by the atmosphere. Oceans are not made of pure water. I imagine we all know adding salt to water increases its boiling point. However, there are many other factors. Atmospheric pressure changes based on things like water content and air temperature, both of which would change dramatically if ocean temperatures increased by 50+ degrees.

                You picked 99C as your limit because 99C is the boiling point of water under a very different set of circumstances than the ones you were considering. That’s rather arbitrary, but even if we incorrectly assumed the oceans would boil at 100C, it’s still a completely arbitrary limit. There’s no reason 99C oceans would be a scenario we’d have to consider but boiling oceans isn’t. You can keep saying things like:

                It’s not even *that* hard, it’s just that you’ve denied even any kind of understanding of what probability is for or how it works.

                But the reality is you’re just wrong on the science. Constantly. With these errors you keep claiming things are possible even though they’re not. For instance, you now claim it is possible to have 250m of sea level rise in the next 50 years. That is not possible. It is physically impossible. That I know this is physically impossible does not mean I deny “any kind of understanding of what probability is for or how it works.”

                I understand I may not be able to convince you certain things are physically impossible. It’s not my responsibility though. You’re the one advancing a form of analysis. You have to justify the assumptions which go into it. If your lack of basic scientific knowledge leads you to assume things are possible when they clearly are not, nobody will take your analysis seriously because it will be so obviously wrong.

                If you want to respond to people pointing out your lack of basic scientific understanding by saying they just don’t understand how probability works, you can. It’s a terrible idea though. It’s rude, intellectually dishonest and quite frankly, a waste of everyone’s time.

                P.S. It’s basic physics that you can’t have the oceans uniformly reach 99C, as uniform temperature is impossible in oceans given the effect pressure differentials have on temperature. 250m of sea level rise in the next 50 years is impossible because there’s only ~80m worth of ice that could melt, and thermal expansion can’t produce another 170m. Science proves these things are impossible.

              • Anoneuoid says:

                Brandon Shollenberger wrote:

                250m of sea level rise in the next 50 years is impossible because there’s only ~80m worth of ice that could melt

                Can you explain why the chart on this page seems to show ~250 m higher sea level ~100 million years ago?
                https://en.wikipedia.org/wiki/Past_sea_level

                I agree with the pressure stuff though. It seems to me that very little consideration is given to the role of air pressure.

                Eg, all tropospheres studied begin at ~100 mbar pressure, and then pressure increases as you go down toward the surface. Then the temperature increases at about the same rate as a function of pressure on all planets:
                https://mappingignorance.org/fx/media/2014/01/fig2_ngeo2020-f1-640×448.jpg
                https://arxiv.org/abs/1312.6859

                But maybe both temperature and pressure are the wrong quantities to consider since they are both secondary phenomena… Instead we probably want something like density and energy per molecule.

              • Anoneuoid, this is not my wheelhouse so don’t expect anything too in-depth, but the high sea levels of the Jurassic period are believed to have been caused (primarily?) by plate tectonics. There’s a phenomenon known as sea floor spreading. Basically, underwater volcanoes release magma which hardens as it cools and spreads out. This has many effects, one of which is to cause sea levels to rise (at least temporarily?), potentially to large extents. To quote Wikipedia:

                Sea levels began to rise during the Jurassic, probably caused by an increase in seafloor spreading. The formation of new crust beneath the surface displaced ocean waters by as much as 200 m (656 ft) above today’s sea level, flooding coastal areas.

                Why that happened in that period but doesn’t happen now isn’t something I can explain. It’s not something that could be caused by or confused with human activity so I’ve never paid much attention to it. I’m sure there are plenty of resources online that’d explain it better than I ever could. Short version, when you look at periods of millions of years, geological changes can be shocking.

              • Anoneuoid says:

                That page also has this interesting (but unsourced) claim:

                The climate of the Cretaceous is less certain and more widely disputed. Probably, higher levels of carbon dioxide in the atmosphere are thought to have almost eliminated the north-south temperature gradient: temperatures were about the same across the planet, and about 10°C higher than today.

                Venus is also like this: the latitude gradient is much smaller than on Earth. Even though a day is nearly a year long, it is about the same temperature on the night side as the day side.

              • It’s like we can’t seem to talk about the same topic…

                Of course you’re right that there are all kinds of physics that make it completely ridiculous to get to 99C or 100C or 115C or whatever uniformly in the ocean. I am literally *not proposing that*. I’m just saying that, until you’re out in the “truly obviously impossible” range, you can’t actually assign p(x) = 0, because p(x) = 0 is like a black hole from which no information can return… and in the ranges we actually need to consider, such as sea level rise of 5m or 6m or 11m, which occur *VASTLY BEFORE* this “truly impossible” range, you are already assigning p(x) = 0, and *that’s wrong*.

                Probability is about providing a measure of plausibility over the entire set of *logically* possible things. Logically it’s possible for the sun to go nova and the earth to be wiped out… so even truncating at 99C is a big logical truncation. But it’s one I’m happily giving you. You can easily argue for earlier truncation than that, but you don’t really have to, because probability is a continuous measure of plausibility, and so you can assign really, really small numbers, and they’ll have the same practical effect: p(x) = 1e-1030 for all scenarios involving sea level rise of more than 40m, for example. It’s not zero, but it still doesn’t matter in the calculation.

                But what you *absolutely can not do* is truncate the distribution at “the biggest thing anyone on the IPCC committee said” or anything even close to that.

                So we can easily both agree that everything out past the utterly ridiculous notion that in the next 50 years sea levels will rise more than it possibly could even if you somehow melted all the ice on the planet could be p(x) = 0.

                so all you need to do is provide p(x) for sea level rise between say -60m and +60m, as a continuous strictly positive function, and an argument for why that function is the right one.

                And while you and I are both going to agree that p(x=55m) is still going to be extraordinarily small 10^-250 or something… where we aren’t going to agree *until you provide an argument* is on the value of p(x) for x in the range say 3 to 15m. I don’t see any argument anywhere about why 5m or 5.5m or 6m or 11m is “so impossible that its probability has to be below 1e-40” or something like that, and you need an argument for why we are going to assign probability *that small* before you get away with truncating it out, because the costs rise so high that even values of 1e-20 or so could easily still be important perturbations to the calculation.

                “No one of the few hundred people on the IPCC committee mentioned anything like that” just doesn’t cut it as an argument.

                The problem you seem to have is one that is *well known* in probability theory, which is people can’t deal with astronomically small numbers, and so they tend to just consider them as equal to 0… but when you have A * B and A is astronomically large, and B is astronomically small the result can be anything from astronomically small to normal sized to astronomically large depending on *the relative size of each*.
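                To make that arithmetic point concrete, here’s a tiny Python sketch. The magnitudes are made up purely for illustration; the point is just that working in logs lets you multiply numbers far outside the range of floating point, and the product of an astronomically large thing and an astronomically small thing can come out perfectly ordinary:

```python
# Hypothetical magnitudes for illustration only: a cost A ~ 1e350 and a
# probability B ~ 1e-340. Neither fits in a double (which overflows
# near 1e308), but their product is a thoroughly normal-sized number.
log10_A = 350.0   # log10 of an astronomically large cost
log10_B = -340.0  # log10 of an astronomically small probability

# log10(A * B) = log10(A) + log10(B), so A * B = 10^10 here
product = 10.0 ** (log10_A + log10_B)
print(product)  # ~1e10: large times small is neither large nor small
```

Treating B as "just 0" here would make the product 0 instead of ten billion, which is exactly the error being described.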

                This is also a well known problem in Civil Engineering (where I have my PhD). Civil Engineers don’t bother calculating what will happen when the Golden Gate bridge has 3 cars and 4 pedestrians on it like at midnight on a quiet travel night, they calculate what happens when a protest march packs every square foot of the bridge with an overweight person… Sure it’s not likely to happen very often, but the point is the *cost of collapse under that scenario* is so high that it’s the only calculation that matters.

                The point here that seems to be completely missed, and it might be my fault for not explaining it well, is that you need a probability distribution, and an argument for why it’s a good distribution, over a range that goes *well beyond* anything we think is actually conceivable, one that assigns a non-zero tail, because the shortcut of “it’s too small to matter” is *wrong*.

              • A thing you don’t seem to understand is that I’m throwing numbers like 99C and 250m of rise out there because they’re so far out there that *we all know they’re not going to happen*, and yet you’re spending your time arguing with me about why they can’t happen. I KNOW they can’t happen, you KNOW they can’t happen, and that’s *exactly* what makes them a good endpoint for the interval… no argument is needed that it’s ok to say p(x) = 0 for x > 60.

                Where you need to start arguing is about the shape of p(x) on the range x = [0, 60] meters of rise (even better, we should include some sea level fall, due to massive snowfall or whatever). But for the moment, if I could just get you to understand that the science needs to provide a strictly positive p(x) on this range [0, 60]m, we’d be doing well.

              • Brandon, to illustrate how this works out numerically and help you understand what I’m talking about:

                Suppose that you get some IPCC people to assign p(x) for sea level rise at 50 yrs as normal(1,1), so that what the experts think is likely to happen is something like 0 or 1, or maybe as much as 3, meters of rise.

                Now suppose we assign a cost function like

                C(x) = 1e16/(1+exp(-(x-5)/.5))

                Now this is a function that is relatively small for values below about 3, but rises rapidly to 1e16 by about 6 meters of sea level rise. Remember these numbers are just for illustration purposes.

                Now, if you multiply C(x)*p(x), you’ll find the function takes on a peak value at 3 or so, and looks something like a bell curve where about half the area is in the region above 3.

                Now, you could claim that “everyone on the IPCC thinks that it’s going to be less than 3m of sea level rise so we’ll assign 0 probability to anything more than 3.3” but that would make your calculation ignore about 50% of the actual integral from the *proper* calculation.

                And that’s just what happens with a normal tail… Using a normal tail is rarely justified in risk assessments for extreme events, because it treats values more than a few standard deviations out as effectively impossible. Something like a t distribution with 5 degrees of freedom and the same median, dt(x-1, 5), when multiplied by our cost function, results in a calculation where the peak is at about 5 and nontrivial values that matter for the calculation are found out at 9 or 10 meters.

                So, this is why talking about what people think is *likely* to happen, is irrelevant to the calculation of what we should do, because the decision theory calculation is dominated by the extreme costs and low probabilities of what happens in the tail of the distribution, just like the civil engineers calculate how big the load could be on a once-in-a-lifetime protest or evacuation event when everyone in San Francisco tries to walk across the Golden Gate bridge.
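                For anyone who wants to check this numerically, here’s a short Python sketch of exactly the illustrative calculation above: the normal(1,1) and shifted t(5) densities, the logistic cost, and a crude midpoint-rule integral of C(x)·p(x). All numbers are purely for illustration, as stated above:

```python
import math

def normal_pdf(x, mu=1.0, sigma=1.0):
    # density of normal(1, 1): the "experts expect ~0-3 m" distribution
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def t_pdf(x, df=5, loc=1.0):
    # density of a t distribution with 5 df, shifted to the same median
    z = x - loc
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + z * z / df) ** (-(df + 1) / 2)

def cost(x):
    # the logistic cost from above: small below ~3 m, ~1e16 by ~6 m
    return 1e16 / (1 + math.exp(-(x - 5) / 0.5))

def expected_cost(pdf, lo=-10.0, hi=60.0, n=70000):
    # midpoint-rule numerical integral of C(x) * p(x) over [lo, hi]
    dx = (hi - lo) / n
    return sum(cost(lo + (i + 0.5) * dx) * pdf(lo + (i + 0.5) * dx)
               for i in range(n)) * dx

print(expected_cost(normal_pdf))  # thin normal tail
print(expected_cost(t_pdf))       # fatter t(5) tail: several times larger,
                                  # driven by values well beyond 3 m of rise
```

Running this shows the fat-tailed density yields a several-fold larger expected cost than the normal, even though the two densities barely differ in the region the experts consider likely — the whole difference comes from the tail.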

              • Anonymous says:

                “I’m throwing these numbers like 99C and 250m of rise out there because they’re so far out there that *we all know they’re not going to happen*”

                Thanks for this clarification! I was beginning to wonder what was wrong with you people.

              • Daniel, you say you’re “throwing these numbers like 99C and 250m of rise out there because they’re so far out there that *we all know they’re not going to happen*,” yet you explicitly stated we need to consider the possibility they will happen. I don’t know how you reconcile that.

                For instance, you said we “can assign 1e-105 to sea level rise of 250m in the next 50 years… but you can’t assign 0.” If you’re claiming you knew 250m of sea level rise in the next 50 years is impossible, then that was a lie. Faulting me over the fact I discuss why scenarios are impossible when you explicitly state we cannot label those scenarios as impossible is incredibly obnoxious.

                I don’t know how you reconcile your statements, and quite frankly, I don’t care. You said we cannot label certain things impossible even though they clearly are. I pointed out they are clearly impossible. If you’re now acknowledging the scenarios are impossible as I said, great. I don’t know how you conclude I don’t understand things while doing so though. If you’re saying those scenarios are not impossible, you’re nuts. That’s all there is to it.

              • Anonymous, except Daniel explicitly stated 250m of sea level rise in the next 50 years is possible. I pointed out that scenario is physically impossible, and he disagreed, saying you can’t assign a probability of 0% to it. So he’s claiming these scenarios are possible, just very unlikely.

              • Brandon, this is why I say you don’t seem to understand how probability works. It’s obviously a fact that assigning 1e-100 or similar means we don’t need to consider 99c and 250m of sea level rise; this is the way that’s done, not by truncating the distribution at the largest value anyone on the IPCC thought was likely, maybe 3.5m of rise. Assigning such small density *is* ignoring those values. Truncating the distribution at 3.5m is instead making a major calculation error.

                The point of moving the truncation point out to the ultra-insane region is that it keeps you from accidentally truncating out the merely insane, since it’s the merely insane region that actually matters. Doing the bridge calculation for 1 person per square foot is clearly insane, but that means truncating the calculation at the ultra insane, such as 2 people per square foot, each of them a full-grown marine with a heavy backpack walking over the top of a series of cars that were abandoned on the bridge. If we truncate the load distribution at merely, say, a traffic jam with one person per car, we are guaranteed to be underestimating the tail. Then one day a tsunami wipes out the Fukushima power plant, and the only thing people can say is, “We considered tsunamis up to 5m because that was the highest any expert mentioned; no one could have imagined 15m tsunamis.” Well NO, you made a major error.

                It doesn’t seem productive at this point to discuss much further, because your response indicates a lack of background in the mathematics necessary for probabilistic risk assessment methodology; only someone without the necessary quantitative understanding would argue that assigning 1e-100 or less, in a problem where the maximum costs are on the order of 1e18 or 1e20, is different in an important way from assigning zero. I’ll also point out that it is perfectly logically possible to achieve much worse results than 60m of sea level rise: we’ve had asteroid impacts that mass extincted the dinosaurs, a supervolcano sits under Yellowstone, etc etc. The point is that 1+1=3 has probability 0; everything else that can’t be proven false by pure mathematics has some nonzero probability, so we need to get comfortable arguing about orders of magnitude instead of only working with 0 or things bigger than around 0.00001.

              • In any case, I think I’m done with my end of this conversation, I’ll just leave this here with a summary:

                probabilistic assessment of existential risks is an extremely difficult problem because it involves such amazingly high costs that the calculation is sensitive to amazingly low probabilities, and so we need to have a good reason to believe in the probabilities we’re assigning well out into the range of ridiculously small probabilities such as 1e-10 or 1e-20 or the like. These *aren’t* frequencies under repeated sampling, they’re logical arguments, and we don’t really have that, there’s no real convincing basis for assigning some particular curve that decays in some particular way. Uncertainty *in the model itself* is large enough that we don’t have a good basis for relying on these kinds of calculations.

                The end result is that some kind of conservativeness in assigning something like a maximum entropy distribution which keeps the probability density as high as possible under some constraints is probably our best bet, and then we need to discuss how to constrain it… But it’s possible to get some ridiculously long tailed distributions that would leave us considering things that are very very extreme, like 20 or 40m of sea level rise as still part of the expected loss calculation. So again, we wind up needing to have a meta-argument about what the principles are we’re going to use to assign the tail of a distribution. And we just don’t even need to consider what’s going on in the core of the distribution, like whether we’ll get 2 or 2.2 or 3.1 meters of sea level rise because *they don’t matter for the calculation* because the costs grow so fast out beyond that.

                This is a PhD-level research area, and Civil Engineers have probably spent more effort on this kind of thing than most other areas of study, as they’ve been modeling extreme events since, I think, the mid 1800s. Unfortunately they tend to be focused on frequency models, whereas here we need a Bayesian model for a particular *single* event, not repeated sampling from “random” events.

              • Chris Wilson says:

                Daniel,
                Martin Weitzman appears to have done a decent job covering this territory here:
                https://scholar.harvard.edu/weitzman/publications/modeling-and-interpreting-economics-catastrophic-climate-change

                His Conclusion section is spot-on IMO. Basically, he thoroughly dunks on the kind of approach that Tol and others like him take to this subject, from a risk point of view.

                Brandon, Terry, etc. I don’t think grade-B muckraking stories about Michael Mann contribute anything of substance to this discussion. He may or may not have his vanities, but all the co-authors were *jointly* awarded the Nobel Peace Prize in 2007. In general, calling him outside the mainstream of the climate science community is fatuous:
                http://www.met.psu.edu/people/mem45

                The real investigative journalists are at this point focused on the story of how corporations, think tanks, and political PACs have managed to derail the discussion for so long, and manufacture controversy where none exists.

              • Daniel, what you’re saying now is nonsense and directly contradicts what you said before. You say, “it’s obviously a fact that assigning 1e-100 or similar means we don’t need to consider 99c and 250m of sea level rise,” but before you said “So we really need to consider ridiculously small probabilities.” Assigning a ridiculously small probability to a scenario is not ignoring it, and as you said before, we need to consider scenarios even if they have a ridiculously small probability. The only time we would not need to consider a scenario is if the probability for it were 0.

                You can make all the derisive remarks you want about me, but the simple reality is you’re demanding people assign non-zero probabilities to scenarios which we know with certainty to have 0 probability. That’s nonsense. And despite your repeated derisive remarks, it is not in any way how mathematics or statistics works. On top of this, you’re resorting to bizarre red herrings like:

                I’ll also point out that it is perfectly logically possible to achieve much worse results than 60m of sea level rise: we’ve had asteroid impacts that mass extincted the dinosaurs, a supervolcano sits under Yellowstone, etc etc.

                We’re talking about an analysis of the risks of climate change as caused/influenced by human activity. The possibility an asteroid could hit Earth has nothing to do with it. You don’t do a risk analysis of greenhouse gas emissions by saying, “An asteroid could hit Earth any time!”

                You’re welcome to stop participating in this exchange. I suspect it’d be better for everyone if the exchange ended. But to be clear, this exchange happened because I pointed to scenarios you described and explained they were physically impossible. You disagreed. If you felt the scenarios were not impossible but for some reason should still be assigned a non-zero probability, that is something you never mentioned until just now. Prior to your latest comments, you repeatedly indicated you felt these scenarios were actually possible. They’re not.

              • Chris, thanks so much for that reference, Weitzman gets it absolutely clearly correct, hits the nail right on the head.

              • Chris, Michael Mann and his co-authors were not awarded the Nobel Peace Prize. The IPCC, as an organization, was awarded the prize. People who volunteer to work as authors or reviewers were not. I’ll quote the IPCC on this:

                The prize was awarded to the IPCC as an organisation, and not to any individual involved with the IPCC. Thus it is incorrect to refer to any IPCC official, or scientist who worked on IPCC reports, as a Nobel laureate or Nobel Prize winner. It would be correct to describe a scientist who was involved with AR4 or earlier IPCC reports in this way: ‘X contributed to the reports of the IPCC, which was awarded the Nobel Peace Prize in 2007.’

                Your claim Mann and his co-authors were *jointly* awarded the prize is without basis and directly contradicted by the IPCC. This is fitting as thus far I’ve consistently cited the IPCC while you contradicted it, saying people like Mann should be trusted as correct in place of the IPCC.

                Given that position, it is certainly relevant that Mann committed scientific fraud in his work by hiding test results which invalidated his conclusions, became famous for a hemispheric temperature reconstruction that cherry-picked its results from a single area not representative of the hemisphere, and lied about having won a Nobel Prize. These things directly speak toward Mann’s credibility, or lack thereof.

                To be clear, I’ve said the IPCC represents the views of mainstream climate science in regard to sea level rise. You’ve disagreed because a group of people have been saying the IPCC is too conservative on sea level rise for ~12 years, and you somehow think they should be trusted over the IPCC as reflecting what climate science shows. You’ve yet to explain why the IPCC has refused to adopt their views after more than a decade if they truly represent what climate scientists believe.

              • Later in his paper he seems to allow the “badness” of increasing temperatures to increase exponentially without bound, basically choosing “conventional” models for utility/consumption. This seems implausible. If you raise temperatures without bound, eventually you kill all the people and you can’t get more cost by raising temperatures more. But it doesn’t really take away from his main point: the way things are being done today is wrong, and the expected utility calculations are very sensitive to the behavior of the tails, because the badness function increases to a very, very large number. We need to do calculations that include *deep* models for tails, out to things like 10, 20, 30, 40 m of sea level rise, and we need a model for those things, and *we don’t have it*. The consequence of not having it is increased uncertainty in the *form of the model*, and that just spreads things out to a bigger extent, leading to fatter tails in the model…

              • > the simple reality is you’re demanding people assign non-zero probabilities to scenarios which we know with certainty to have 0 probability

                You can *only* assign exactly 0 probability to events that you can deduce are impossible from the axioms of set theory. This is the mathematical definition of 0 probability in Cox’s Bayesian treatment for example.

                If you’d like to inform the rest of the world the precise level of sea level rise that can be ruled out by pure mathematics I’m sure we’d all like to know.

                remember when the particle physics world saw what they thought could maybe be faster than light neutrinos (OPERA experiment in 2011)? I’m sure they spent exactly *zero* time and effort on that, because they deduced from the axioms of set theory that faster than light neutrinos were impossible and so there was 0 probability they could occur. Right?

                No, wait, that didn’t happen. In fact, although most physicists assigned tremendously low but nonzero probability to FTL neutrinos, the knowledge that would be gained if they did exist was *so* valuable, that instead they spent a bunch of time trying to come up with an explanation for how those data came about… eventually discovering clock and cable interconnect issues that explained it.

                if they could deduce from set theory that FTL neutrinos were impossible, they’d spend *zero* time on it, just like they’d spend zero time on someone claiming to prove using physics that 1+1=3

                So, if you can’t prove it from the axioms of set theory, you can’t assign 0 probability to it, and we have to assign some nonzero probability. It will turn out that there is no specific number for sea level rise beyond which we can assign 0 probability, but we can truncate our calculation once the probability drops low enough: there is a maximum size to the cost of extinction, and any probability density has to eventually decrease enough to keep its integral over the whole real line equal to 1, so eventually the product of a bounded cost times the decreasing density gets close enough to zero as makes no difference.

                You can take offense at my assertion you don’t seem to understand probability theory, but the fact that you don’t automatically understand the above argument indicates that you don’t. The fact that you declare semantic victories over other people who use informal language such as “impossible” when they mean only “so incredibly improbable as to be negligible in the calculation” indicates a desire to dunk on other people on the internet rather than actually debate, which is why I’m done here. This post is entirely for the benefit of posterity so that people reading this thread can understand why saying a thing is informally “impossible” is perfectly compatible with also saying that it doesn’t actually have p(x)=0 but merely p(x) extraordinarily small, and that’s what the logic of probability calculus actually demands: nonzero p(x) for all values that can’t be ruled out by purely formal calculations deducible from the axioms of set theory (aka pure math). As shown by the FTL neutrino experiment, even the basic laws of physics such as we know them have some nonzero probability of being wrong.

                A thing is informally “impossible” when its probability is so low that it doesn’t enter into the decision calculation by more than a roundoff error. Since the decision calculation of interest involves very big costs, no amount of hand-waving and saying “more than x=3.5 meters of sea level rise is truly logically impossible and can be assigned p(x)=0 exactly” or whatever will do the job.

              • One other clarification, because it’s important to the understanding of decision theory and someone else might read it some day:

                When doing a decision calculation, you should include in your probability of the outcome all known ways in which the outcome occurs. So if your calculation is sensitive to extraordinarily small probabilities, down in the 1e-20 or lower range, then that will include things like world-ending asteroids and massive solar flares and runaway climate feedback loops and violations of the speed of light and whatever.

                Next will come a model for *how those probabilities change with each possible action you can take*. You may find that no action you can take will change the asteroid risk… but you may find that there is a feasible action, such as funding NASA to the tune of just a few billion dollars to chase asteroids, that can reduce that asteroid risk… In fact, you may find that the most cost-effective way to reduce world-ending risk is not to try to cut CO2 emissions in half or whatever, because that has tremendous cost… but rather to give a few billion to NASA to hunt asteroids, because it’s low cost with huge expected benefit, or maybe just to impose a large tax on living closer than 5m to the current coastline…

                So, no, when it comes to making decisions about climate change and other potentially world-ending risks, you *do* have to include all the ways in which the world could end, and then make a decision about which actions give you the best bang for the buck in reducing those mechanisms of risk. Artificially limiting your decision analysis to just “climate change related risks” is basically a way to boondoggle money by eliminating from consideration other important and more cost effective actions that could be taken.

              • Daniel, if you want to say you’re going to stop participating in a conversation then wait until several days have passed without further response from the person you’re talking to then suddenly comment to say they’re wrong, you can. It seems strange to me though. Did you honestly expect me to check to see if you responded again, after saying you wouldn’t, days after I last responded?

                That seems unlikely. The most likely outcome was I’d see you say you’re not going to respond, see you not respond for a couple days and conclude the conversation was over. Then I’d leave. Then you’d get to slip in a couple last comments to get the last word and get to pretend I ran away without addressing what you said. Given you made such a fuss about argument styles in your latest comments, I think it’s safe to label your approach here as… shall we say, shady?

                In any event, you said you were done, and I was content to let the conversation end. So I will. I am not going to engage in a conversation filled with behavior so clearly designed not to lead to productive discourse. You can have the last word if you want.

              • John R. says:

                Daniel,

                Although you made good points and I generally appreciate your posts, at times the tack you have taken in this thread borders on trolling. I wanted to cheer on your dunking from the silent sidelines but I think it went a bit awry. You seem to have let your frustration get in the way of explaining your points as clearly as you could have (though I think I understood them after some thought).

                All:

                Overall I am disappointed in how this discussion turned out.

                If it is so difficult for a small peanut gallery to have an Enlightenment-style (e.g., mutually converging on something correct or reasonable) discussion on topics related to existential risk, where does this leave me, a 24 year old statistics student? I want to cling to the hope that I can do something toward the public interest in spite of the tendency that we (myself included) have toward irritability, out-of-hand dismissal, and being susceptible to perverse incentives. I’m sure the discussants here have done plenty toward the public interest (whether intending to or not in various capacities), but seeing such blatant talking past each other nonetheless is disheartening.

              • Well, I guess that’s disappointing. I certainly am not trolling here at all. It’s a very simple concept: when considering risk analysis you need to let a continuous, well-thought-out model for p(x) control the region of interest, not a choice of a hard truncation limit.

                So, if for computing purposes you have to truncate something (which sometimes you do), truncate it *far beyond* anything that could ever be of interest in your calculation. For example, floating point numbers truncate at something like 10^300, and that is rarely a problem, because we rarely need numbers that big in applied problems. Imagine if the people who thought up IEEE floating point had made the largest representable float 100000.

                How can you find out what “far beyond” means? You need to consider ultra-ridiculous scenarios. This is actually easy to do but people’s brains recoil at it. Want to truncate your model for global warming? calculate what would happen if the entire power output of the sun that hits the earth were captured and retained completely without re-radiation for the next 100 years for example. It’s clear that this isn’t remotely going to happen and so if you truncate here you won’t affect the calculation.

                You might well argue that you could truncate your calculation well before that… maybe, but it just isn’t necessary, because probability will handle that for you. The *hard part* is figuring out what the shape of the tail looks like. And that’s my point: we *don’t have that model*. But one thing we *know* is that we can’t just truncate it at “the biggest thing anyone on the IPCC could get published in the report,” or anything *anywhere close* to that. It’s like being asked to calculate the net present value of your future income and truncating the calculation at the change you carry in your pocket plus today’s balance on your checking account.

                I don’t know how to put this more clearly: p(x) is a model we need, and p(x) truncated to x < 3 meters is not just *not a good model*, it’s a *terrible* model, whereas p(x) truncated to the maximum level you could raise the ocean to if you melted every bit of ice on the planet and then heated the ocean until it hit the boiling point is a perfectly fine model, because we know *for sure* that this truncation point is not going to affect the calculation. No one is going to argue that you should push the truncation point out to where the ocean would be if we imported asteroids made of ice or something… the truncation point is no longer an issue, and now we can argue about the right thing, which is the shape of p(x).

                But for political reasons, people *want* to short-circuit that argument by pulling the truncation point in to something like x = 3 m or 4 m or 6 m or whatever. By doing that, you can drop the expected damages by many orders of magnitude compared to models with a continuous shape and no truncation. The truncation point shouldn’t affect, or even be part of, the model; it’s at most a device for computational efficiency.
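
                A minimal sketch of that effect (hypothetical numbers throughout: the Pareto tail for the hazard and the quadratic damage function are invented for illustration, not a calibrated climate model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heavy-tailed model for a hazard x (say, sea level rise
# in meters); shape, scale, and damage function are illustrative only.
alpha = 1.2                                    # heavy tail
x = (rng.pareto(alpha, 1_000_000) + 1) * 0.1   # typical values ~0.1 m
damage = x ** 2                                # damages grow with the hazard

def truncated_expected_damage(x, d, cutoff):
    """Expected damage when p(x) is truncated to x < cutoff."""
    keep = x < cutoff
    return d[keep].mean()

# Pulling the truncation point in changes the answer by orders of magnitude.
for cutoff in (3.0, 30.0, 3000.0):
    print(cutoff, truncated_expected_damage(x, damage, cutoff))
```

                With a tail this heavy, the expected damage keeps growing as the truncation point moves out, so the cutoff is doing the modeling work. That is exactly the objection: the answer should be driven by the shape of p(x), with the truncation point pushed far enough out that it stops mattering.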

            • Anoneuoid says:

              The next layer is to specify a distribution over outcomes that are even remotely plausible, which in this case includes total human extinction by 2100 at one edge, and modest damage at the other.

              This is not the entire range of plausible events, warming could also be a net benefit.

              And many more things can wipe out civilization before flooding from warming (which has supposedly been happening for 25k years, by the way):

              1) Asteroid strike
              2) EMP from solar flare, etc
              3) Sun goes micronova for a few seconds and starts huge fires all over the surface (there is actually evidence this happened in the last ~30k years from the Apollo missions)
              4) Warming -> Increased precipitation over the poles -> growing ice caps -> raised albedo -> “ice age” triggered
              5) Nuclear/etc war
              6) Massive epidemic (infecting either humans or some crucial aspect of the food supply). I actually expect one of these is coming up in the case of measles (there’s tens of millions of vaccinated adults who are no longer immune, and this unprecedented situation is growing every day)
              7) A deep grand solar minimum -> Cooling
              8) The earth’s crust detaches from the underlying liquid mantle and tilts on its axis.
              9) Economic/political collapse
              10) Massive supervolcano eruption

              These are just some things I have seen people be concerned about for one reason or another. And it is interesting to look into the “prepper” community, who are not waiting for governments to do something. Few of them are worried about global warming in particular, but they will be much better prepared nonetheless. Also, it is strange to me that most of the people who share your level of concern about that one threat continue to live in (or are even moving to!) coastal cities instead of acting on their beliefs…

              • Anoneuoid says:

                typo:
                > “there’s tens of millions of vaccinated adults in the US alone who are no longer immune, and this unprecedented situation is growing every day”

                And looking at that list of stuff I have seen people worried about… I am almost laughing at how a 5 meter sea level rise over the course of decades is considered “extreme” and is supposed to lead to human extinction. That is a relatively tame threat in comparison. You can say it is more probable perhaps, but how are you going to assign a probability to half of those low frequency threats?

              • I think you also have to consider the decision theory issue of how much your actions can alter the outcome. If you can’t alter the outcome, then there’s basically nothing to do about the threat. Massive supervolcanoes and micronovas and the like are things whose outcomes we aren’t going to have a lot of control over.

                I’m much more concerned about the issue that something like 90% of the fish life that was present in 1900 is gone today, and that large numbers of land animals are going extinct as well, than I am about sea level rise per se.

                If this rise is gradual, on the level of say even 1 meter per decade, I suspect people will succeed in simply migrating inland. Migration to more polar regions is inevitable if temps rise. It is disruptive, but I certainly am not concerned about human extinction on the basis of sea level rise and equatorial land use changes… Desertifying the Amazon rainforest and killing off taiga/boreal forests would be disastrous for the O2 supply, for example; that’d be a much more serious problem than sea level rise itself.

              • I don’t understand the idea sea level rise could be “gradual on the level of say even 1 meter per decade.” That would be what, two centimeters a week? That’s insane. The ecological impacts of that would be beyond measure. The idea this is somehow a lower bound baffles me. A rate like that is probably greater than the upper bound of the most doomsday scenarios any climate scientists have offered. Mainstream projections for sea level rise are minuscule in comparison.

                As for the desertification of the Amazon and killing off of forests, that would… not be disastrous for the planet’s oxygen supply, which comes predominantly from ocean-based plant life. Climate change reducing forest area would have a negligible direct impact on oxygen levels. If one wanted to worry about the oxygen supply, it’d make much more sense to look at how changing ocean temperatures would affect life in the ocean, since that’s where oxygen is primarily produced.

              • Terry says:

                Daniel,

                Agree, the loss of fish life sounds very scary. And it has actually been happening for a hundred years now.

              • Re 1 m per decade: sorry, that’s not a lower bound, that’s an upper bound on what we’d expect (i.e., worst-case predictions are around, say, 5 m in the next 50 yrs), and it’s not 2 cm per week, it’s 0.2 cm/week (gnu units is your friend, one of my favorite tools!)
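
                The conversion is easy to check directly (a quick sanity check, assuming 365.25-day years):

```python
# 1 meter per decade, converted to centimeters per week
weeks_per_decade = 10 * 365.25 / 7
cm_per_week = 100 / weeks_per_decade
print(round(cm_per_week, 2))  # 0.19
```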

                Sure that’s ecological disaster for many wetland areas etc… but in terms of say destroying livable coastal cities, it’s easy to imagine people moving inland fast enough that a city the size of Los Angeles won’t all be wiped out simply due to sea level rise before we’ve had a chance to move much of the economic value to another location. that’s all I’m saying.

                As for the oxygen production, I’ve read that something like 20+% of the world’s oxygen is produced in the Amazon. I’m not sure if that’s bad info or not, but if you slash oxygen production by 20% I can’t imagine that’s a good thing, even if, say, 50% of the oxygen is produced in the oceans and remains unchanged.

              • Terry says:

                Not sure why low birthrates are not on the list. A permanently below-replacement birthrate makes it a mathematical certainty that a group will eventually become extinct.

              • Anoneuoid says:

                Massive supervolcanos and micro-novas and things are stuff we aren’t going to have a lot of control over the outcome.

                It isn’t like you can stop it, but you can be better prepared to increase the chance of survival: by having supplies and skills, for example, or, for those particular threats, by being on the other side of the planet. But most things you could do personally (and governments could be doing collectively) would be helpful in the face of many different threats. I’d generally pick that as a priority rather than cherry-picking one of them.

                I’m much more concerned about the issue where something like 90% of the fish life that was present in 1900 is gone today, and large numbers of land animals are going extinct as well.. than I am about sea level rise per-se.

                I remember reading a paper years ago where they looked at the historical extinction rates and found them really sketchy. Can’t find it at the moment… do you have a source that convinced you this is not something that just happens cyclically, or, worse, an artifact of how they count species or something like that?

              • Anoneuoid says:

                Not sure why low birthrates are not on the list. A permanently below-replacement birthrate makes it a mathematical certainty that a group will eventually become extinct.

                What reason do you have to think birthrates are becoming permanently low? I haven’t heard of anyone prepping for this scenario…

              • Terry says:

                Anoneuoid:

                “What reason do you have to think birthrates are becoming permanently low? I haven’t heard of anyone prepping for this scenario”

                Note I said for a “group”, not for all humans. Birth rates have fallen below replacement (or are going to) for most developed countries. This has been the trend for a long time, and I haven’t seen any evidence of a reversal anywhere. It might reverse sometime, I really don’t know. I could be wrong.

                My point is just that this is a real possibility for some groups. It has already happened to the Shakers.

                On the other hand, there will probably be some subset that is able to maintain high birth rates. A number of insular religious groups seem to be doing so. So, the human race as a whole may avoid extinction, but extinction is a real possibility for many subgroups.

                I would guess that extinction of the whole human race is more likely than an asteroid strike (triggered by societal collapse perhaps), and that extinction of large subgroups is many orders of magnitude more likely.

                Disclaimer: this is just my take on things. I am not an extinction or population expert.

              • Anoneuoid says:

                Anoneuoid:

                “What reason do you have to think birthrates are becoming permanently low? I haven’t heard of anyone prepping for this scenario”

                Note I said for a “group”, not for all humans. Birth rates have fallen below replacement (or are going to) for most developed countries. This has been the trend for a long time, and I haven’t seen any evidence of a reversal anywhere. It might reverse sometime, I really don’t know. I could be wrong.

                My point is just that this is a real possibility for some groups. It has already happened to the Shakers.

                On the other hand, there will probably be some subset that is able to maintain high birth rates. A number of insular religious groups seem to be doing so. So, the human race as a whole may avoid extinction, but extinction is a real possibility for many subgroups.

                I would guess that extinction of the whole human race is more likely than an asteroid strike (triggered by societal collapse perhaps), and that extinction of large subgroups is many orders of magnitude more likely.

                Disclaimer: this is just my take on things. I am not an extinction or population expert.

                I just don’t see this as any type of threat I (we) should be defending against. E.g., when I hear that the last member of a tribe who spoke its language has passed, it is disappointing, because information about independent cultures is really interesting. But I don’t get any sense of something threatening about the situation.

              • Terry says:

                Anoneuoid,

                You might be right. A lot of people seem to be very concerned that warming might drive humans to extinction or greatly reduce their numbers. So extinction or near-extinction seems to be a biggie if it is due to some things, but no biggie if it is due to others.

                Perhaps it is actually true that “Thanos did nothing wrong.” https://www.reddit.com/r/thanosdidnothingwrong/

              • Daniel, worst-case predictions are not “around say 5m in the next 50 yrs.” The most extreme scenario given in IPCC reports is less than two meters by 2100. That’s maybe 35% of your value, 30 years later, under the most extreme of scenarios. Even if one thinks this extreme scenario is too conservative, I struggle to believe anyone could claim it is conservative by ~500%.

                As for oxygen, there may be an issue of terminology, namely net vs. gross. Rainforests produce more oxygen than most types of forests, but they also consume much more. The result is that the gross oxygen production of the Amazon is about what you remember hearing, but the net production is almost nil. If one somehow eliminated the plant life of the Amazon rainforest but kept all its other life, that would be a significant problem for oxygen levels.

            • Terry says:

              “Yes, but we do not need accurate forecasting to justify a full klaxon emergency mobilization. We just need precautionary principle suitable for the nature of risk involved. The next layer is to specify a distribution over outcomes that are even remotely plausible”

              Makes sense. To really drive this point home, you should apply this thinking to driving an automobile (where there is a remote possibility you might die in an accident) and show how it successfully predicts people’s decisions about whether to drive automobiles.

              • Chris Wilson says:

                Terry, no no no. Individual risk != civilization-level systemic risk. You have got to separate these out. The society-wide death rate from automobiles, given contemporary design and reasonable regulation, is a sharply bounded quantity. The plausible impact from climate change is not.
                This is so important to get right. Don’t conflate these classes of risk!

              • Terry says:

                Chris,

                So, I take it your model does not accurately predict an individual’s decision to drive a car? It would greatly help your cause if you could show that it accurately predicts something that we can observe.

              • “Chris’s” model (ie. Bayesian Decision Theory) isn’t a model of how people *do* make decisions, it’s a model of how people *should* make decisions.

                The assumption that “what people do is the right thing to do” and that we should just come up with a model that predicts correctly what people actually do, and apply it to other problems is extremely flawed.

              • Terry says:

                Daniel,

                If the model doesn’t reflect how people actually make decisions, how do you know it accurately tells us how they should make decisions?

                Looking at it from another angle, are you telling us that people shouldn’t drive cars even though they do?

              • >are you telling us that people shouldn’t drive cars even though they do?

                “People” is not who makes decisions about whether to drive cars; individuals make decisions. There are plenty of people who choose not to drive. On my kids’ soccer team there’s a mother who won’t drive on the freeway. Many elderly people decide not to drive at night; some are encouraged to stop driving by their family members, or are even reported as a danger and have their driver’s license revoked…

                If a person who is camping is attacked by a rabid raccoon, should they drive to the hospital? If a person is bored, should they get in their car and drive around in traffic randomly for fun?

                What do they get from driving, vs what risks do they take and what are the costs associated with those risks… these are the considerations needed.

                A more interesting question is: what assumptions about a typical middle class person’s life are required to conclude they should drive, vs they shouldn’t?

              • Terry says:

                gg all

            • Terry says:

              Chris,

              Brandon said above: “Michael Mann did not win a nobel prize. He claimed he did, multiple times, but that was false.”

              Is this true Chris? If its not, you should speak up because it would severely diminish Brandon’s credibility. If its true, does it raise any concerns in your mind about Mann’s credibility?

              • Terry, if you’d like, I can provide clear proof of what I said. It’s been widely discussed, especially since Michael Mann repeated his false claim of having won a Nobel Prize in legal filings for lawsuits. In the meantime, it’s worth pointing out that the Nobel Prize in question wasn’t for scientific acumen. The IPCC was awarded the Nobel *Peace* Prize. That’s the same award Barack Obama was given early in his presidency, which he was nominated for only two weeks after he took office.

                It’s a prestigious award, but even if Mann had been awarded it, it wouldn’t be because he demonstrated some great skill in his profession.

              • Terry says:

                “if you’d like, I can provide clear proof of what I said [about Mann’s Nobel Prize]”

                Not necessary. If you are lying, Chris will quickly call you on it.

        • Terry says:

          5m? I had heard that the IPCC estimated sea level rise of about 12″ in 100 years due to AGW. I think I even saw it in one of their reports. In an earlier post I saw someone say the IPCC was lowballing the estimate.

          • Chris Wilson says:

            Read the report I linked. It is a newer model from Jim Hansen’s group. It is not ‘validated’, but it is currently still plausible to the best of my knowledge. Risk management cannot wait (see above).

      • Buster Friendly says:

        A perfect example of a “maladaptive contrarian prior.”

        • Anoneuoid says:

          Yes, this is what I thought you meant earlier. You mean people who come to a different conclusion than you must be using “maladaptive” reasoning.

          Why couldn’t I just say the same about you? You have a “maladaptive” credulous prior. I don’t mean to really say that (not that I think it is necessarily wrong, but because it is a pointless discussion) but perhaps you can get the point.

          As I said, I recommend everyone prepare for natural disasters as far as is convenient for their place in life. Also, that governments should be devoting significant resources to this task too instead of all the other BS they waste money on.

          Focusing on one threat scenario where there isn’t even any kind of reasonable plan anyway… bad priorities. First, they should do general obvious stuff we even knew about in biblical times like store up a couple years of food/fuel supplies for each citizen, etc. I mean, I think people waiting for everyone else to do something about these problems you are worried about is delusional.

    • Dalton says:

      One meter of sea level rise will be disastrous. Those in this comment reply thread who seem to take the attitude “It’ll be fine, we’ll just move” can afford to be blasé about climate change, because it’s clear that they don’t live on this planet. Our economies and political systems are freakishly fragile things. Consider that 10 years ago, the entire world economy staggered (but did not quite collapse) because of something as simple as several thousand overpriced homes. Consider that Western democracies have become destabilized in part due to very small-scale migrations of people from equatorial climes.

      The planet will be fine. Even if we lose one million species, the planet, the biosphere, will be fine. Human beings will also be “fine” (I think we’d be pretty hard to kill as a species; we’re simply too tenacious and adaptive). But civilization, advanced economies, these are fragile things. We’ll be too busy fighting each other to mount an organized effort to gather “all that uncontaminated freshwater melt”. And how exactly does such a massive organized effort occur at the same time we’re focusing on decentralizing and building survival bunkers? The thing all the survivalists seem to forget is that there are a lot of other humans around. The ecosystems of the world will simply not support the current human civilization if we begin to lose the agricultural and industrial efficiencies we are currently (unsustainably) exploiting. How many years until all the wild fish and game are gone after the collapse of industrial agriculture? How many losses due to wildfires and floods, and at what rate, can the global (or simply national) housing market contend with? Consider Paradise, California: it is not as simple as “let’s just rebuild.” The water system is destroyed (contaminated with benzene) and there are real limitations in construction manpower and materials. That town will not recover for decades from an event that happened in an afternoon. And you propose to replace the centuries of development in New York, Miami, etc. (let alone Dhaka!) in a few years?

      So we’ll just move to Antarctica? What food will we grow in barren soils? What insects will pollinate it? Perhaps you can lend some resources from the fantasy world you are currently inhabiting?

      • I think you misread these comments. The point of the comments here was that it’s exactly the kinds of problems you mention that will be problematic, not the fact of a few meters of water rise per se. It’s not the fire in the crowded theater, it’s the trampling, and you can get trampling with other problems too, like a sudden announcement of a bomb threat. But these problems can occur in many other scenarios as well, which few people are examining, like the supervolcano or the asteroid impact. Sure, a few people at NASA discuss asteroids, but tens of thousands of researchers are loading up their grant-hunting guns on climate change.

        I’m in favor of studying climate change, but I think it’s far too politicized to really get good studies. Hence Richard Tol, basically the Wansink of climate econ, maybe?

        • Dalton says:

          “I’m in favor of studying climate change, but I think it’s far too politicized to really get good studies”

          What evidence do you have to support this?

          Are you suggesting that climate scientists are falsifying data for politic gain? Or are you suggesting that their work is being suppressed?

          It seems to me that you’re suggesting that we can’t trust any science that is politicized. Therefore the best way to suppress inconvenient facts is to politicize them.

          The creation of NASA was a political project. We only went to the moon because a politician decided we should. Are you saying that we could have done a better job of getting to the moon if it wasn’t politicized?

          That is simply a ridiculous stance. We should judge science on its merits irrespective of whether the topic is politicized: Does the theory fit the observations? Do the predictions hold up?

          • Anoneuoid says:

            Does the theory fit the observations? Do the predictions hold up?

            These aren’t the right questions; this is a Bayesian stats blog.

            How do the predictions compare to those derived from alternative theories, in terms of accuracy and precision? I don’t think there is much precision at all in the climate models; as far as I know there isn’t even really one model used to explain climate change. They take the average of what is predicted by an ensemble of models with mutually contradictory assumptions.

            There is nothing wrong with ensemble learning in general, but if you do that, don’t act like the predictions are derived from some kind of theory. It is just a form of machine learning, and the performance should be judged in that context (overfitting the training data is rampant, much-better-than-zero predictive skill comes easily, information leaks between model and test data greatly exaggerate the skill, etc.).
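
            A toy sketch of that last point (nothing here is a climate model; the signal, biases, and noise level are invented numbers): averaging members whose assumptions disagree can beat every individual member, so ensemble-mean skill by itself says little about whether any one member’s assumptions are right.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "models": each predicts the same underlying signal with its own
# systematic bias plus noise. All numbers are invented for illustration.
truth = np.sin(np.linspace(0.0, 6.0, 200))
biases = (-0.2, 0.0, 0.2)
members = [truth + b + rng.normal(0.0, 0.3, truth.size) for b in biases]
ensemble = np.mean(members, axis=0)

def rmse(pred):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

print([round(rmse(m), 3) for m in members], round(rmse(ensemble), 3))
```

            The averaging cancels independent noise (and here the opposing biases too), which is the generic ensemble-learning effect rather than a validation of any member’s assumptions.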

            • Dalton says:

              These are physical models based off first principle understandings of the climate system. They are certainly based on theory. This is not simply plugging the numbers in and letting the machine work it out. Are you suggesting we instead model an Earth with alternative physics? Because that is not how climate models work. The assumptions of ensemble models are not “mutually contradictory.” The different assumptions are different scenarios for how much carbon dioxide humans pump into the atmosphere. The basic physics (like the absorption spectra of carbon dioxide) do not change and are well understood. And yes there are ensemble model runs, but those are specifically done in an attempt to quantify the uncertainty by introducing stochasticity.

              That is my understanding as *Bayesian* statistician. Climate science is not my field. But I have an average level of information about the state of climate science from reading things like the Fourth National Climate Assessment: https://nca2018.globalchange.gov/

              The evidence for climate change, as I understand it, not being an expert in the field, but as an educated and informed citizen, is multifaceted and overwhelming. Physical evidence from multiple sources: basic physics (the aforementioned absorption spectra), ice cores, multidecadal weather observations, etc., etc., etc. There is an insane number of sources out there, a mountain of literature. Yes, the sheer volume of this literature is influenced by the politicization of the topic. But that is because climate scientists are (by and large) engaged in a good-faith effort to answer an inexhaustible well of good-faith skepticism and bad-faith denialism. I am not sure to which camp you belong.

              But if you ask me, again as a *Bayesian*, what is more likely: (a) that the “truth” is being suppressed because of the politicization of climate science, and that the voluminous evidence is distorted and biased because of some nefarious groupthink that destroys the careers of good-faith skeptics, or (b) that most climate scientists are operating in good faith, and that a 97% consensus of climate scientists exists as a result of each individual receiving the requisite training in physics, chemistry, geology, meteorology, etc. and reading some significant portion of the evidence; I’m going to put much more prior weight on (b).

              As for why we aren’t preparing for other natural disasters: well, for one, we are. NASA just ran a meteor strike simulation exercise. But many of those other natural disasters are either highly unlikely (new ice age, earth’s crust detaching (??)) and/or unpredictable (like a supervolcano), or not really scientific problems (nuclear war is a political problem). Climate change is different because 1) it is already happening, will get worse no matter what we do, and may get catastrophically worse if we do nothing or if we double down on fossil fuels, 2) it is something we can do something about if we find the political will (don’t blame individuals; fewer than a dozen corporations are responsible for the vast majority of emissions), 3) it is very much permanent on the scale of human civilizations – we could recover from a meteor strike or a supervolcano in a few generations because these things will not radically alter the climate, whereas carbon dioxide will be in the atmosphere for millennia, and 4) it makes some of the other things on your list (epidemics, political/economic collapse) and some things not on your list (like megahurricanes and megafloods) more likely.

              I simply cannot fathom why an intelligent person could read something like the Fourth National Climate Assessment and think that the whole climate change thing is bogus or simply not a big deal. And yes, I am at the point where I am incapable of being dispassionate about this with those who deny established science in a way that impacts the well-being of myself and my children, whether they be anti-vaxxers or climate denialists. If you want to quibble about the specifics of a particular model, or critique its precision, fine. I’ll engage in a good-faith debate on that. But if you simply want to sit there and adopt a blanket contrarianism that is actively detrimental to progress on addressing an *honest-to-god civilization-ending threat*, well, as I said earlier, I’m sick of being polite.

              • Anoneuoid says:

                The different assumptions are different scenarios for how much carbon dioxide humans pump into the atmosphere.

                That is my understanding as *Bayesian* statistician. Climate science is not my field. But I have an average level of information about the state of climate science from reading things like the Fourth National Climate Assessment: https://nca2018.globalchange.gov/

                Can you point out where it shares details of the models their conclusions are based on? Basically, I want to know exactly what is being assumed here.

              • Anoneuoid says:

                Like here is an example of the level of detail I am looking for:

                CLM3.0 contains several improvements to biogeophysical parameterizations to correct deficiencies in the coupled model climate using CLM2.1. In CLM2.1, the 2-m temperature frequently dropped below the atmospheric potential temperature during daytime heating in certain regions. Stability terms were added to the formulation for 2-m air temperature to correct this. In CLM2.1, there is a discontinuity in the equation that relates the bulk density of newly fallen snow to atmospheric temperature. The equation was modified to correct this problem. Aerodynamic resistance for heat/moisture transfer from ground to canopy does not vary with the density of the canopy in CLM2.1. This leads to high surface soil temperatures in regions with sparse canopies. A new formulation was implemented in CLM3.0 that provides for variable aerodynamic resistance with canopy density. The vertical distribution of lake layers was modified to allow for more accurate computation of ground heat flux. A fix was implemented for negative round-off level soil ice caused by sublimation. Competition between plant functional types (PFTs) for water, in which all PFTs share a single soil column, is the default mode of operation in CLM3.0. CLM2.1 accepts either rain or snow from the atmospheric model. If the precipitation is snow, a formulation based on atmospheric temperature determines the fraction of precipitation that is in liquid form. In CLM3.0, the atmospheric model (in cam and ccsm mode) delivers precipitation explicitly in liquid and/or solid form. In offline mode (uncoupled from an atmospheric model), the formulation based on atmospheric temperature is still used. A fix was implemented to correct roughness lengths for non-vegetated areas.

                http://www.cesm.ucar.edu/models/ccsm3.0/clm3/

                I’d like to see something like that, but as a table of assumptions for all the models and how they differ from each other. As you can see, even from version to version of the same model you are getting contradictory assumptions. This is the type of stuff I am talking about:

                Aerodynamic resistance for heat/moisture transfer from ground to canopy does not vary with the density of the canopy in CLM2.1….A new formulation was implemented in CLM3.0 that provides for variable aerodynamic resistance with canopy density.

                So in one model the aerodynamic resistance is assumed constant, in another it is not. Direct contradiction. Anyone who has done any sort of complicated modelling knows there are going to be tons of these choices and tradeoffs to be made.

                And I am really not trying to say these models need to be perfect to be useful or interesting… but people tend to drastically underestimate the degree to which you can tune all this stuff to get the results you want. That is why I think this should be treated more like machine learning: there really is no theory making precise predictions for us to test, and if the predictions are too far off they have a million things to blame.

              • Dalton says:

                Can’t reply to your comment below. You want Volume 1, chapter 4 of the 4NCA for details on the climate models:

                https://science2017.globalchange.gov/chapter/4/

                You’ll likely have to dig through the references for further details.

                Responding to your other comment below:

                “So in one model the aerodynamic resistance is assumed constant, in another it is not. Direct contradiction. Anyone who has done any sort of complicated modelling knows there are going to be tons of these choices and tradeoffs to be made.”

                That’s not a contradiction. That is an improvement, a relaxing of assumptions towards a more general model. In fact that’s very much in the spirit (if not the method) of the Bayesian workflow as described by Betancourt, Andrew, etc. Start simple, and expand when you find deficiencies. I use intermediate models all the time, simply because I don’t yet have the tools or data to fully model the data generating process I’m interested in. That doesn’t mean my intermediate models are wrong or contradictory. In fact they may capture some summary statistics quite well, but not others.

              • Anoneuoid: that’s exactly the kind of stuff that makes me think it’s a waste of time to use global circulation models (GCMs) to make predictions about the future. They offer you *precise* but sensitively dependent predictions about *non-existent* worlds that have *some sort of similarities and many differences* from the real world.

                Too many knobs and buttons is a bad thing here. Working with small-degree-of-freedom, spatially averaged models is likely to provide more reliable predictions and could be subject to reasonable Bayesian fitting to help understand the uncertainties involved. Something like a physically motivated global or broad-averaged model with fewer than 20–30 parameters would be a good thing to investigate. Does anyone know whether such models have been developed? I really don’t know. I’ve only seen people talking about these intensely detailed GCMs, because that’s where the fancy papers and grants seem to be.

              • Martha (Smith) says:

                Dalton said,
                “The evidence for climate change, as I understand it, not being an expert in the field, but as an educated and informed citizen is multifaceted and overwhelming. Physical evidence from multiple sources: basic physics (the aforementioned absorption spectra), ice cores, multidecadal weather observations, etc, etc, etc. There is an insane amount of sources out there, a mountain of literature. Yes, the sheer volume of this literature is influenced by the politicization of the topic. But that is because climate scientists are (by and large) engaged in a good faith effort to answer an inexhaustible well of good faith skepticism and bad faith denialism. I am not sure to which camp you belong.”

                I agree. Research on climate change seems to be more highly scrutinized than research in any other area — partly because researchers on the topic come from a variety of areas of science, and so need to give convincing evidence and reasoning that will pass the scrutiny of scientists with different backgrounds.

              • Anoneuoid says:

                “Can’t reply to your comment below. You want Volume 1, chapter 4 of the 4NCA for details on the climate models:

                https://science2017.globalchange.gov/chapter/4/

                You’ll likely have to dig through the references for further details.”

                I know what “dig through the references” entails, so no. Not willing to do that for this topic. I hoped you might have cared enough about it to have satisfied your own doubts by searching out that info.

                “Responding to your other comment below:

                ‘So in one model the aerodynamic resistance is assumed constant, in another it is not. Direct contradiction. Anyone who has done any sort of complicated modelling knows there are going to be tons of these choices and tradeoffs to be made.’

                That’s not a contradiction. That is an improvement, a relaxing of assumptions towards a more general model. In fact that’s very much in the spirit (if not the method) of the Bayesian workflow as described by Betancourt, Andrew, etc. Start simple, and expand when you find deficiencies. I use intermediate models all the time, simply because I don’t yet have the tools or data to fully model the data generating process I’m interested in. That doesn’t mean my intermediate models are wrong or contradictory. In fact they may capture some summary statistics quite well, but not others.”

                This seems to miss my meaning.

                1) This is a direct contradiction: “So in one model the aerodynamic resistance is assumed constant, in another it is not. Direct contradiction.”

                2) That was just an example of the level of detail on assumptions I am looking for. I don’t care about model v1 vs. model v2 differences; I care about model A vs. model B differences. I am certain similar contradictions will be found there.

                3) My point is that (afaict) there is no actual consensus model of climate change that anyone can point to. Instead there is an ensemble of them derived from some similar and some contradicting assumptions. All these hundreds or thousands of assumptions are “tunable” and alter the output.

                So when people talk about “climate change” I do not know what they mean. Of course the climate always changes, and at first it seems like they mean something specific about CO2 doing something, but no, it is very vague. It seems to go straight from simple 1-dimensional Stefan-Boltzmann models to an array of GCMs with hundreds or thousands of parameters to tune.

                The first is worse than worthless in my opinion because it makes no sense (one example is it assumes the earth has no atmosphere, but still has albedo due to clouds). The second looks more like machine learning to me. Nothing wrong with that, I love machine learning and have used it for many tasks. But that approach comes with a whole other slew of problems I haven’t seen any indication the people concluding stuff from these results are concerned about or even familiar with.

              • Terry says:

                Martha (Smith),

                “I agree. Research on climate change seems to be more highly scrutinized than research in any other area — partly because researchers on the topic come from a variety of areas of science, and so need to give convincing evidence and reasoning that will pass the scrutiny of scientists with different backgrounds.”

                +1

                Also, after reading this blog for a few years, it seems foolish to just believe anything we are told.

          • Evidence that it’s too highly politicized is that people like Richard Tol manage to publish meta-analyses of complex economic evaluations of massive global dynamical systems in which their own previous predictions completely drive the conclusions, even though this fact has been pointed out for years. Which is the point of this blog post, right?

            • Dalton says:

              “that makes me think it’s a waste of time to use global circulation models (GCMs) to make predictions about the future”

              Well, we should just shut down the National Hurricane Center then. Waste of time.

              • The National Hurricane Center predicts a few days out. A GCM predicts a couple hundred years out. I suggest you take a model that successfully predicts better than a coin flip how much the S&P index will go up or down tomorrow, and use it to predict the level of the S&P 500 ten years out… Figure out how well it works, fit to the entire decade of the 1980s, at predicting the stock market in 2000… let me know.

              • Another useful test would be to take a GCM today and, based on today’s weather, have it predict the entirety of the 2019 Atlantic hurricane season. Run it on as many realizations as you can. Make solid predictions we can test by the end of the year as to where damage will occur, how much, etc.

                Now compare this to simply a statistical regression model fit to the last 10 years of hurricane damage data using a relatively simple Bayesian model.

                Let’s see which one is more accurate at predicting the spatial distribution and extent of damages… How much did all that dynamical-systems stuff improve the predictions?

                My bet will be on the stats model doing a better job.

        • Chris Wilson says:

          Daniel, here’s where I disagree slightly. Compared to asteroid impact: a) we are actively complicit in climate change and b) can therefore do much to mitigate the risk. Conversely, we also know that the risk from climate change grows into unacceptable territory with inaction, whereas inaction on asteroid impact doesn’t change much. Although I agree we should be putting a reasonable effort into monitoring and developing plans to attempt to fragment or divert asteroids, or whatever (I would be all for a 2X of NASA’s budget :)).

          • My own position is that we should be doing relatively broad preparation for “bad stuff”. Imagine you think your house might catch fire due to laser beams from space. You could

            1) Put up reflective mirrors over your house

            or

            2) Change your roof to a nonflammable material, with a construction technique that improves roof cooling, and install fire suppression sprinklers and fire alarms.

            doing (1) protects you against one single devastating but highly unlikely scenario. Doing (2) protects you against multiple likely scenarios as well as some highly unlikely ones.

            To the extent that we can do something more broad based, I think that’s a better bet than focusing energy entirely on doing things to specifically try to change the course of climate, like massive investment in solar power or whatever. Suppose we put tons of solar power on the current locations of population density, and then find that we can’t really change climate, and all those solar panels are inundated because they’re on houses near coastal areas. Bad plan.

            Also, as I understand it there are models as plausible as anything else we have (in other words, poor models) in which it’s too late to do anything short of nuking China to wipe out the thousands of coal power plants they’ve fired up in the last decade. Since that would lead to global destruction in a nuclear war, and we’re unlikely to get China to turn off those power plants voluntarily, we could be in a scenario where we’re going to have to accept whatever climate change there is, and should focus efforts on resilience and alteration in investments and building things in new places etc.

            I think the politicization is all about fighting to “arrest climate change through XYZ policy” etc but it might well be all for naught. Whereas building solar power plants in the Colorado plateau or Nevada Desert or whatever would potentially provide resilient power supply even if say large hurricanes wipe most of the Florida and Mississippi infrastructure off the map… so some things maybe make better sense than others, and I don’t really hear suggestions flying around like that.

            • Martha (Smith) says:

              Good points.

            • Chris Wilson says:

              Again, I think you have good points here, but I don’t fully agree. We have a chance of holding warming to 1.5C (basically a 10 year window right now). That is exponentially better than 2C, which in turn is exponentially better than 2.5C, etc. from a risk management point of view…and from the point of view of wanting to live on a planet that has, say, any coral reef left by 2050. If we burn through these thresholds, we are gambling on a process that may deliver shocks and impacts that *we cannot realistically adapt to*. So extreme mitigation is actually a prerequisite for adaptation. The solar build-out should be everywhere! In fact, in good news, it is already cheaper than new coal *and gas* in most circumstances, and some recent utility-scale moves highlight that it is already cheaper than *maintaining existing coal* and increasingly gas too. That economic calculus needs to be incentivized like crazy.
              We can do this. But the time is perilously short.

              • Physically “we can do this” may be true. Politically “we can do this” seems extremely unlikely. In particular China has brought a billion people out of extreme poverty on the backs of tremendous coal output. How are you going to convince them to shut that all off in the next… 5 years?

                Absolutely we should put solar in lots of places. I suspect however that we should have plans to move it somewhere else when we abandon Florida and Mississippi for example.

      • Anonymous says:

        “Consider that 10 years ago, the entire world economy staggered (but did not quite collapse) because of something as simple as several thousand overpriced homes.”

        Source? There were 116.78 million homes in the US in 2008. Are you asserting that only “several thousand” of those homes were overpriced? (Hint: there were 11 million foreclosures from 2007 to 2011 in the US alone.)

        “One meter of sea level rise will be disastrous.”

        Source? Are you as confident about this assertion as you are about the number of overpriced homes in 2008?

        • Dalton says:

          https://nca2018.globalchange.gov/chapter/8/

          Is $1 trillion disastrous? That’s on the order of the decline in housing prices in 2008. (National household net worth dropped by $11 trillion; figure about half of that was in the housing market, and about a third of that was in coastal areas.)

          • Terry says:

            So it sounds like we agree that the assertion that “several thousand” homes were overpriced in 2008 was inaccurate.

            US GDP is currently $20 trillion. Over the next 50 years, it will therefore total about $1,000 trillion (ignoring growth). So $1 trillion is about 1/1,000 = 0.1 percent of US GDP over the next 50 years. To put it another way, on a per-year basis, we are talking about roughly $1 trillion / 50 = $20 billion per year. That is about 0.4 days of US GDP per year lost.
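            The arithmetic here can be checked in a few lines (this assumes a flat $20 trillion/year US GDP with no growth or discounting, the same deliberate simplification as above):

```python
# Back-of-envelope check of a $1T loss spread over 50 years of US GDP.
# Assumes GDP is flat at $20T/year (no growth, no discounting).
gdp_per_year = 20e12                 # current US GDP, dollars/year
years = 50
total_gdp = gdp_per_year * years     # ~$1,000 trillion over 50 years
loss = 1e12                          # hypothesized $1T coastal loss

share_of_total = loss / total_gdp                    # 0.001, i.e. 0.1%
loss_per_year = loss / years                         # $20 billion per year
days_per_year = loss_per_year / gdp_per_year * 365   # ~0.37 days of GDP/year

print(f"{share_of_total:.1%}, ${loss_per_year / 1e9:.0f}B per year")
```

The numbers match the comment: 0.1 percent of cumulative GDP, or roughly $20B (about four-tenths of a day of output) per year.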

            That’s a lot of money, I agree. But we make ourselves easy targets for climate deniers if we make wild statements about the magnitudes of the risk.

          • Anoneuoid says:

            “America’s trillion-dollar coastal property market and public infrastructure are threatened by the ongoing increase in the frequency, depth, and extent of tidal flooding due to sea level rise, with cascading impacts to the larger economy.”

            How about this much simpler task? Try to get the US government to simply stop subsidizing (paying) people to live in flood zones. In that case the cause of the warming/sea-level rise should be irrelevant. Not even that has been accomplished. On the contrary, the main organization you hope will “do something” is actually encouraging people to construct in and inhabit these areas you believe are threatened.

            So if you really believe all this catastrophic flooding is going to happen, all you can do is stay away from those areas, advise others to do likewise, and plan for the consequences you envision.

          • Anoneuoid says:

            “Is $1 trillion disastrous? That’s on the order of the decline in housing prices in 2008.”

            Actually, I thought of something.

            Isn’t the answer a matter of perspective/situation?

            If I am looking to buy a house I want prices to drop. Not sure why people who saved their money until an opportune moment to buy should be disfavored over those who spent more than they should have on a house. In fact, if you are worried about climate the savers/conservers should be the ones you want to reward.

            Also, rental rates should drop too meaning the less wealthy have more disposable income, stores have lower expenses (so can lower prices), etc.

            • I was going to say something here about the difference between destroying $X worth of stuff and the price of $Y worth of stuff changing so that it’s now $(Y-X), which “destroys $X worth of value” or some such thing.

              Ultimately we need to talk about the real costs involved in disasters. If there are some number of houses which would individually transact for $1T total, and you burn them to the ground, this is a loss of $1T or even more worth of stuff (more because if you buy each house one at a time and burn it to the ground, slowly the price of houses will increase and each one will cost a little more, so that by the time you’re done you’ve destroyed more than $1T worth of houses…)

              But if you create a new special virtual holodeck house and it’s the cool thing that everyone wants, and therefore the price of the old houses plummets… this doesn’t “destroy value” in a real sense, it simply alters people’s priorities. If for some reason there is suddenly a resurgence in “old fashioned houses” and people decide to move out of holodecks into regular houses again, they’re still there, waiting to be used. Not true if you burn them to the ground.

              So measuring things in dollars is not trivial for various reasons, even just the difference between N * marginal_cost vs the actual integral(marginal_cost, 0, N) is often ignored.
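                The sum-vs-integral point can be illustrated with a hypothetical rising price curve (the base price and slope here are made up purely for illustration):

```python
# Illustration: with a rising marginal price, acquiring (or destroying)
# N units one at a time costs more than N times the initial price.
# The price function is hypothetical, not data.
def marginal_price(k, base=1_000_000, slope=500):
    """Hypothetical price of the k-th house: rises as supply shrinks."""
    return base + slope * k

N = 1000
naive = N * marginal_price(0)                       # N * marginal_cost
actual = sum(marginal_price(k) for k in range(N))   # discrete integral

print(naive, actual, actual - naive)
```

With these made-up numbers the naive estimate is $1B while the running total comes out roughly 25% higher, which is exactly the gap between N * marginal_cost and integral(marginal_cost, 0, N).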

              • Dale Lehman says:

                You are confusing what is called a “pecuniary externality” with a nonpecuniary externality. The former is not relevant to an overall analysis of the efficiency losses due to climate change (although there are distributional effects) but the latter is. The fact that destroyed houses raise the price of those houses that were not destroyed plays no part in an overall cost-benefit analysis, although it impacts the distribution of the costs and benefits. Your example of slowly destroying the houses leading to more than a $1T loss in economic welfare is therefore not correct. But it introduces a different issue – the timing matters, not because house prices will rise, but because of discounting. There are probably other secondary effects such as different relocation costs for massive vs. incremental housing losses.

              • Dale,

                Suppose I drop a bomb on 1000 houses today which are together worth $1B. Do you claim that calling this a “$1B loss” is a correct statement?

                Now suppose I announce today that I will drop a bomb on all these houses tomorrow, but I will gladly pay you $1 for your house today. Suppose all these people take me up on the $1 per house offer, and I pay $1000 for these houses, and then drop a bomb on them tomorrow. Since these houses transacted yesterday for $1000 it’s obviously correct to say that there was a $1000 loss today, whereas yesterday it was a $1B loss.

                Now, suppose I announce today that I will buy all houses in the blast radius for $2M each. Obviously tomorrow when I bomb them, the loss is $2B

                I’m sorry but I don’t buy any of it.

                Suppose tomorrow the government announces it will print $100B worth of $100 bills and hand them over to me in a collection of bundles transported to my house via an armed caravan of 18-wheel delivery trucks. Has the world created $100B worth of value, or burned up thousands of dollars’ worth of fuel and wasted security-guard and driver time for no real benefit whatsoever?

              • Actually reading your post a little more carefully, I actually think you and I are closer together in meaning than we think.

                My point was that when the “price of houses” dropped due to bankruptcy and so forth, it didn’t destroy the homes, yet the dollar amounts plummeted. These weren’t “real” losses; they were “pecuniary externalities” in some sense. Put another way, they are zero-sum: people who owned homes lost out, and people who wanted to buy homes got to do so at lower prices… overall benefits and costs balance.

                I think you and I agree that these things are not relevant to the question of how big a *real* loss flooding all of Los Angeles would cause.

                Next there’s the question of how to convert something like “we lost all of Los Angeles” into some kind of dollar cost. Would you use the “cost to purchase all the land and structures in Los Angeles from the current owners at the prices they would each individually sell for today in the absence of the knowledge of what was about to happen and then drop a nuke on it” as the cost? Or would you need to do the calculation some other way? Prices represent information. Information asymmetry is strong in this scenario: the person about to drop the nuke knows what’s up… the individual sellers don’t. How can you possibly use market prices in strongly information asymmetric environments and claim that they measure something real? Maybe you can, but I’d want to hear an argument why.

              • Dale Lehman says:

                Daniel
                This is in response to your last comment below (can’t indent any further in the post sequence). Let’s not get bogged down in the detailed examples – we are somewhat closer in our thinking. But you should realize that economists have only two ways to measure value: if you own something, it is the minimum you would require to give it up (willingness to accept, WTA) (and “minimum” really means that, so it is not so much a subjective statement as it may seem). If you don’t own it, it is the maximum you are willing to pay (WTP) to get it. These two measures sound quite different, but according to economic theory they are actually close, since it is only the income effect that makes them different, and that is generally not that large (a poor person cannot pay much, but is also willing to accept less to give something up).

                Now, if you don’t like the way economics treats values, then I agree completely. I suspect Peter Dorman’s comments below are along these lines. There is no role for a “citizen’s value,” and extreme values of WTA and WTP are normally rejected in economic analyses – ethical principles really are not permitted, except when they are measured by WTA or WTP. I personally see this as a major flaw in economic theory, and one that is relevant to attempts to measure the damage due to climate change.

                So, part of the difficulty with all these estimates is that people need to agree on the ground rules – are we using economists’ measures of value or some other sense of “value?” If we adopt economists’ measures of value, then changes in prices that result from climate change are not relevant to measuring the total social welfare costs since they take place through markets and markets produce efficiency. There are hordes of market imperfections that could be viewed as undermining that statement (monopoly power, behavioral theories of irrational behavior, etc.). But there are as many stories as people. I think more fundamentally, many people would not agree to how economists view value and efficiency (if they had any idea what it meant).

                Aside from all of this, what most people get concerned about is the distribution – so regardless of how much social welfare is lost due to climate change, people will get excited by whether it is the world’s poor or wealthy coastal home owners that suffer. And, economics will place much lower value on the former than on the latter (WTP or WTA is much lower for the world’s poor). So, any attempt to measure “damage due to climate change” is fraught with these issues of distribution and values. What I personally object to is when these ethical issues are swept under the rug as the debate then centers on just how large the welfare losses are (will a 2m rise in ocean levels wipe out X homes, and where are those homes and how can they be protected?).

                I similarly object to many of the statistics debates that center on whether the precise definition is correct (e.g., confidence intervals) while the basic problem is whether uncertainty is recognized, appreciated, and used in decision-making. Not to change the subject, however.

              • Yes, now I think we are in agreement. We can’t use market prices because they don’t reflect lots of things. For example, what is the discounted willingness to pay of the next generation so they can experience diving on coral reefs? Definitely not in the current market price.

                All this is particularly of interest to me as someone with a civil engineering background: how to value the stability of society created by structures that survive natural disasters, or by the preservation of wetlands, or how to decide whether to dam up Yosemite to store fresh water, etc.

              • Dale Lehman says:

                The only problem is: if we don’t use market prices, then what do we use instead? I’m game for alternatives, but we could do worse than market prices. As (I believe) Churchill said of democracy – it is a crummy system except compared to the alternatives.

              • We could do worse, but I think we could do better too. It’s a counterfactual-type issue: estimate a theoretical quantity using existing market data and a counterfactual scenario.

  7. Peter Dorman says:

    This thread has wandered over into speculation on the uncertain costs of climate change. For those who may be interested, there will be an appendix in my forthcoming (some day) book on CC that critiques the “social cost of carbon”. It uses a variety of arguments — about tail risk, the untranslatability of many outcomes into monetary impacts, problems of scale (the irrelevance of current marginal valuations), and the intrinsic shortcomings of the utility frame itself. I advocate adopting a ppm target from the deliberations of climate scientists, such as the targets we’ve been given in recent IPCC reports, and then using economics to get there as expeditiously and with as little disruption as possible. There is a passing reference to our friend Richard Tol in a footnote.

    • Dale Lehman says:

      Not really a reply to Peter, but it didn’t fit above and it sort of is related. Some of these comments about potential disasters and their relative magnitudes seem to suffer from flawed logic. If you are worried about species extinction – and I am as well – then it is not independent of climate change. Overfishing is a combination of too much fishing AND poor reproduction. Climate change may contribute to both of these. Moving entire cities inland is feasible for rich countries like the US (with attendant costs, though we are wealthy enough to afford them), but in poor countries it will exacerbate a number of other environmental disasters. I don’t think these threats can be treated as if they are unrelated. And modeling the relationships between them is fraught with huge uncertainties.

  8. Terry says:

    In case anyone is interested in some data on sea-level rise (this being a stats blog and all), here is a chart of actual sea-level rise over the past few decades:

    https://en.wikipedia.org/wiki/Sea_level_rise#/media/File:NASA-Satellite-sea-level-rise-observations-1993-Nov-2018.jpg

    It shows a very steady rise of 3.3 mm per year. I understand that a chunk of this is natural and not due to CO2 increases (I could be wrong).

    Here is a chart of projections of sea-level rise:

    https://en.wikipedia.org/wiki/Sea_level_rise#/media/File:Sea_Level_Rise.png

    It shows that the current projection is 0.3m to 1.2m by 2100. I think this is the IPCC estimate. The largest sea-level rise shown on the chart is 2.4m by 2100.
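    For scale, here is a naive constant-rate extrapolation of that 3.3 mm/year satellite trend out to 2100 (the 2019 start year is an assumption; this is purely illustrative, since the projections above presumably build in an accelerating rate):

```python
# Naive linear extrapolation of the satellite-era sea-level trend
# (3.3 mm/yr per the chart above), starting from 2019. Illustrative only;
# the projected 0.3-1.2 m range implies the rate is expected to accelerate.
rate_mm_per_yr = 3.3
years_remaining = 2100 - 2019            # 81 years
linear_rise_m = rate_mm_per_yr * years_remaining / 1000

print(f"{linear_rise_m:.2f} m by 2100 if the current rate simply continues")
```

A constant rate gives about 0.27 m by 2100, below even the low end of the 0.3–1.2 m projected range, so the entire projected range rests on the rate increasing.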

    https://en.wikipedia.org/wiki/Sea_level_rise

    • Chris Wilson says:

      Terry, the IPCC is still mostly considering thermal expansion as a mechanism of sea-level rise (SLR). Their projections of ice melt are ridiculously low compared to recent findings. They are not including models of exponential ice melt from grounded ice sheets (e.g., Greenland and the WAIS).
      The IPCC is always a “lowest common denominator” approach, which means they are hugely conservative in their estimates. I mentally think of their projections as the plausible *lower end* of what is likely to happen, for that reason…

      • Terry says:

        I’m confused.

        The IPCC is held out as the gold standard of scientific rigor and anyone questioning their conclusions is treated dismissively in the scientific community. You are telling me that it is widespread knowledge that their conclusions are unrealistically low? It would be helpful if you could point me to where in their latest report they say “they are hugely conservative in their estimates”.

        • I do believe that your description, “widespread knowledge that their conclusions are unrealistically low,” is correct when it comes to what you’d get individual climate scientists suggesting these days.

          for example, from 2012:

          https://www.scientificamerican.com/article/climate-science-predictions-prove-too-conservative/

          a more recent example:

          https://www.sciencealert.com/international-climate-change-reports-tend-toward-caution-and-are-dangerously-misleading-says-new-report

          and:

          https://www.eenews.net/stories/1060102283

          What can make it into an IPCC report is basically a question of political will. But many individual scientists are saying things like it’s way too conservative. They can do that because they don’t need to convince a whole committee.

          In general, looking at past IPCC-type predictions and then seeing how things turned out, the outcomes tend to be at the upper limit of what the IPCC report discussed.

          here’s the typical example from that second link:

          https://www.sciencealert.com/images/2018-08/Screen_Shot_2018-08-21_at_12.11.41_pm.png

          • Martha (Smith) says:

            Interesting. Thanks

          • Chris Wilson says:

            + 1. I have a small advantage in having met (and worked with) a few people who have been part of IPCC reports, at least one of whom is one of the big names. But overall, the things Daniel says are spot-on, and basically it’s all in the public record. The IPCC is an effort based on building consensus. You inherently have a bias towards the lowest common denominator in those situations.
            It is the source of a *ton* of really good information – I use a lot of it in my teaching and as a premise for research proposals – but on this subject of seriously confronting *the range of plausible-enough scenarios that we should be very worried about*, they are always a decade or more behind.

            • Terry says:

              Is there somewhere in the IPCC report that indicates their estimates are well-known to be overly conservative? I searched the AR5 for the word “conservative” and found no hits.

              Thousands of scientists are involved in the IPCC. How many have said the IPCC is systematically misstating the science?

              Chris, who is the “big name” you have worked with?

              • Chris Wilson says:

                The IPCC is not systematically mis-stating the science. It is just that the body of work around which consensus can be built is always gonna be behind the curve and also != the range of things that need to be considered for proper risk management in my view. Rather than take my word for anything, here’s one of many instances where Michael Mann discusses this stuff (not the best production value, but hey):
                https://www.youtube.com/watch?v=JSg4KpijU9k&t=291s

                Around the 5:20 mark he uses the adjective “overly conservative” to describe an IPCC conclusion. There are many other such instances. Also, read Jim Hansen’s paper that I posted above! Do it. The IPCC has considered nothing like what he proposes yet in their forecasts for SLR.

              • To be clear, the meme of the IPCC being too conservative about things like sea level rise has been around since at least 2012. The IPCC published its latest report less than a year ago, in 2018. While it may be true “the body of work around which consensus can be built is always gonna be behind the curve,” I would think six years is long enough for things to catch up a bit.

    • Dalton says:

      From the 4NCA:

      “Relative to the year 2000, GMSL is very likely to rise by 0.3–0.6 feet (9–18 cm) by 2030, 0.5–1.2 feet (15–38 cm) by 2050, and 1.0–4.3 feet (30–130 cm) by 2100 (very high confidence in lower bounds; medium confidence in upper bounds for 2030 and 2050; low confidence in upper bounds for 2100). Future pathways have little effect on projected GMSL rise in the first half of the century, but significantly affect projections for the second half of the century (high confidence). Emerging science regarding Antarctic ice sheet stability suggests that, for high emission scenarios, a GMSL rise exceeding 8 feet (2.4 m) by 2100 is physically possible, although the probability of such an extreme outcome cannot currently be assessed. Regardless of pathway, it is extremely likely that GMSL rise will continue beyond 2100 (high confidence).”

      GMSL is Global mean sea level

      https://science2017.globalchange.gov/chapter/12/

  9. I hadn’t noticed this before, but that summary paragraph for that chapter’s findings has a subtle oddity in it. The numerical results given in the paragraph you quoted aren’t present anywhere in the body of the chapter. The differences aren’t large, but it seems strange a headline result of the chapter wouldn’t be present in the chapter. For instance:

    “Relative to the year 2000, GMSL is very likely to rise by 0.3–0.6 feet (9–18 cm) by 2030, 0.5–1.2 feet (15–38 cm) by 2050, and 1.0–4.3 feet (30–130 cm) by 2100”

    That gives lower bounds for 2030 as 9cm, 2050 as 15cm, 2100 as 30cm. Table 12.1 gives lower bounds for 2030 as 9cm, 2050 as 16cm, 2100 as 30cm. These are the same except 2050 is 15cm in one and 16cm in the other. The upper bounds in that paragraph aren’t present anywhere. The upper bound for 2030 is given as 18cm, but 18cm isn’t present anywhere in the chapter. The upper bound for 2050 is given as 38cm, but again, that value isn’t present anywhere in the chapter. The upper bound for 2100 is given as 130cm, and 130cm only appears once, in a paragraph which says:

    “Most of these studies are in general agreement that GMSL rise by 2100 is very likely to be between about 25–80 cm (0.8–2.6 feet) under an even lower scenario (RCP2.6), 35–95 cm (1.1–3.1 feet) under a lower scenario (RCP4.5), and 50–130 cm (1.6–4.3 feet) under a higher scenario (RCP8.5)…”

    But while an upper bound here is given as 130cm (other upper bounds are given at different points), the lower bound is given as 25cm, not 30cm as in the summary paragraph. Similarly, if you click on the supporting evidence link for the summary paragraph, the tables there can’t give rise to the cited numbers either (there are too many to cover here).

    I don’t know what to make of that. I just wanted to mention it because it seems really weird.

  10. yyw says:

    Are people seriously using Nobel peace prize as validation of a scientific claim?

    • I’d like to state for the record that I personally have never said anything about the whole Nobel stuff, I don’t give a whit about any of that, it is a total irrelevancy, and I’d like to link again to the Weitzman paper from Chris, which really gets the basic concept absolutely correct:

      https://scholar.harvard.edu/files/weitzman/files/modelinginterpretingeconomics.pdf

      In order to calculate something using Bayesian expected utility theory, you absolutely need to ensure that you do not truncate the distribution before it gets small enough that Utility(x)*p(x) tends to zero. Since any real-world utility has to be bounded by “everyone on earth suffers a horribly painful prolonged death” and can’t get worse than that, it’s true that eventually you can truncate the distribution, but it’s gotta be in the insanely far tail. Up to that point, you have to have a model for p(x) in the deep tail, and *we don’t have it*. Weitzman shows how that results in us basically needing to discuss and evaluate the exact size of the probability and utility associated with very extreme claims, like the claim that in the next 100 years we could melt every kg of ice on the planet.

      This is just a mathematical fact about Bayesian logic: if a proposition can’t be disproven by mathematical symbol manipulation alone, without reference to empirical facts, it can’t logically be assigned probability 0. The alternative to assigning zero probability is to make a specific argument for why an extremely small p(x) is correct.

      • Once you’ve got an argument about how extremely small p(x) is and how extremely large Utility(x) gets, you can determine an appropriate truncation point for calculations that results in sufficiently small error. This doesn’t mean you’re assigning p(x) = 0 for bigger values; it just means that you’ve verified that treating it as zero produces negligible error in the calculation. Truncating sea level rise at 2 or 3 meters is *nowhere close* to negligible error; in fact it might make the result of your calculation a negligibly small fraction of the proper value.
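        The truncation-error point can be sketched numerically. This is a toy illustration, not a climate model: the lognormal distribution for sea level rise and the cubic damage function below are invented, chosen only so that outcomes beyond 3 m are rare but, because damage is convex, still carry most of the expected loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heavy-tailed distribution for sea level rise by 2100 (meters).
# The lognormal parameters are made up for illustration, not fitted to any
# published projection: median ~0.7 m, but a tail extending well past 3 m.
rise = rng.lognormal(mean=np.log(0.7), sigma=0.9, size=1_000_000)

def damage(x):
    """Hypothetical convex damage function (arbitrary units)."""
    return x ** 3

# Full expected damage, integrating over the whole tail.
full = damage(rise).mean()

# "Truncated" expected damage: pretend outcomes above 3 m are impossible,
# i.e. drop their contribution to the expectation entirely.
truncated = np.where(rise <= 3.0, damage(rise), 0.0).mean()

# Only a few percent of draws exceed 3 m, yet with a convex damage function
# the truncated calculation misses most of the expected loss.
print(truncated / full)
```

The punchline is exactly Daniel’s: the truncated expectation is a small fraction of the full one even though the truncated events are individually rare, so where you cut the tail dominates the answer.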

        As Weitzman says, even talking about there being a “proper value” is quite controversial due to structural uncertainty in the kind of model one should use. Before we can do any kind of calculation, we first need to have intense discussions about what a good model looks like and why, and it had better include probability distributions with tails well out into the “melt every kg of ice on the planet” range, or it isn’t a serious discussion. In the same way, the Fukushima power plant should have had risk assessments for tsunamis in the 20, 50, or 100 meter range, or those weren’t really serious discussions either (actual heights may have reached 40.5m in Miyako, according to Wikipedia, for example).

    • Chris Wilson says:

      Daniel,
      I’d be very interested in seeing another Bayesian decision approach with some of the different considerations you mention. I don’t have the bandwidth at the moment, but I think this is a really important area that is both poorly developed and, more importantly, whose implications for risk and policy are poorly communicated…

      Brandon,
      Congratulations on your semantic victory. Like Daniel and I suspect most others here, I don’t care. You seem to do a lot of muckraking on Michael Mann, which is kinda weird, but oh well.

      yyw,
      No. I am saying, based on my read of the literature and conversations with colleagues (I work at the University of Florida), that there is on-going uncertainty and debate about what the science says for sea level rise. Brandon Shollenberger is trying to distract everyone from this with some inane attacks on Michael Mann’s character. Be that as it may. Other scientists like Jim Hansen have proposed plausible mechanisms by which we could be facing multi-meter sea level rise before 2100. The IPCC is erring on the conservative side – but I, and I think everyone else reasonable, accept the premise that the data do not decisively settle the question one way or the other as of now. FWIW, other official bodies like NOAA and USACE have much higher predictions than the IPCC. Here’s some geoplanning work by colleagues at UF:
      https://sls.geoplan.ufl.edu/beta/viewer/

      • Anoneuoid says:

        As I mention above, the final implication is that general disaster preparedness needs to be improved. This sea-level rise thing is just one possible threat out of many that are all dealt with similarly at a first pass, using an approach discovered in ancient times: decentralization and storing emergency supplies.

        I’d also say if you are waiting for the government to do something specifically about sea level rise you will be disappointed. I mean they are still paying people to move into flood zones so…

        You don’t need fancy equations to figure this out.

        • Anoneuoid says:

          I just played with this tool: https://coast.noaa.gov/digitalcoast/tools/slr

          It only goes up to 3 meters higher than today and that looks like almost a total non-issue compared to the other stuff we could be worried about.

          This Florida-only one looks similar: https://sls.geoplan.ufl.edu/beta/viewer/

          People will just move or adapt by channeling the water somewhere or waterproofing the first few floors. It isn’t like this happens in a day, it is supposed to happen at a rate of like 100 mm/yr. I can’t believe the big deal being made out of this after seeing those maps.

          Yes, store emergency supplies and have a system in place for evacuating and temporarily housing people who need help in a timely fashion. Have some sort of early warning system for floods, etc.

          This is the same as you want for a volcano, asteroid, hurricane, solar flare, approaching army, etc (there is a long list in an earlier post).

          Also, by the time this is supposed to have happened, apparently something like half the threatened buildings will need to be replaced anyway:
          https://www.quora.com/What-is-the-average-life-span-of-modern-buildings

          If people are worried about that they can build the new buildings to cope with the flooding, or just build them elsewhere. This doesn’t require any action beyond whatever the people who own and insure the structures want to do to avoid losing money.

          • Chris Wilson says:

            I disagree. Your assessment is way too glib. The infrastructural costs are enormous, as are the ecological costs (my friends and colleagues are at the front lines of that). I don’t disagree with general preparedness, but I will let our resident civil engineer sort out a discussion of how to manage infrastructural challenges of 3m of sea level rise – especially on top of karst topography.

            Relatedly, you are missing the point about tail risk events. As your base sea level rises, the number of storm surge and flood events that exceed design thresholds starts to blow up. Gradually, your Cat 4 hurricanes strike like a Cat 5.
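            The tail-risk point can be made concrete with a standard extreme-value sketch. Everything here is hypothetical – the Gumbel-type exponential tail, the parameter values, and the 3 m design threshold are all invented – but it shows the general mechanism: a modest shift in the baseline multiplies the frequency of threshold-exceeding events.

```python
import math

# Assume the annual maximum flood height at some site has an exponential
# (Gumbel-type) upper tail: P(annual max > h) ~ exp(-(h - mu) / beta).
# mu, beta, and the design threshold are invented for illustration only.
mu, beta = 1.0, 0.3   # tail location and scale, meters
design = 3.0          # design threshold, e.g. a seawall height, meters

def exceed_prob(base_rise):
    """Annual probability of topping the threshold after base_rise meters
    of mean sea level rise, which shifts the whole distribution upward."""
    return math.exp(-(design - (mu + base_rise)) / beta)

p_now = exceed_prob(0.0)
p_later = exceed_prob(0.5)   # after only 0.5 m of base sea level rise

# With an exponential tail, the ratio is exp(0.5 / beta): every extra
# beta of base rise multiplies the exceedance frequency by a factor of e.
print(p_later / p_now)
```

Under these made-up numbers, half a meter of base rise makes design-exceeding floods roughly five times as frequent, which is the sense in which exceedance events “start to blow up” well before the mean rise itself looks alarming.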

            I also think you erroneously discount two unfortunate social-political realities: 1) what happens when re-insurers fully pull out of coastal markets, and through political fecklessness we cannot sustain the National Flood Insurance Program? We are talking some serious domino effects there. 2) There is to my knowledge no good legal precedent for municipalities giving up or abandoning land/services. Just ask Detroit. No local coastal government wants to address these problems fully or transparently. Why? Because they are afraid of losing investment and population (=tax revenue). This will happen anyway as re-building costs rise, alongside insurance rates. The vicious trap is that you need way *more* money to do all this adaptation – you can’t both be losing investment/population and adapting.

            IF the status quo continues, a great many coastal municipalities are going to be insolvent by 2050.

            • Anoneuoid says:

              Relatedly, you are missing the point about tail risk events. As your base sea level rises, the number of storm surge and flood events that exceed design thresholds starts to blow up. Gradually, your Cat 4 hurricanes strike like a Cat 5.

              Not a big deal (relative to other threats).

              No local coastal government wants to address these problems fully or transparently.

              No government at any level is going to do anything. As I said, people are still getting paid to move to flood plains.

              IF the status quo continues, a great many coastal municipalities are going to be insolvent by 2050.

              Once again, something you should assume is going to happen anyway. It is well known that governments at all levels of the US are either going to have to declare bankruptcy or get bailed out in the coming decades:
              https://www.zerohedge.com/news/2017-10-05/these-two-charts-depict-which-cities-will-file-bankruptcy-next

              The alternative is hyperinflation will wipe out the debts, or there is some form of debt jubilee. In which case everyone’s wealth is getting drastically revalued.

              So you are only mentioning things I would prepare for anyway. This sea level rise is nothing special, prepare for much worse and you will cover 99% of that too.

          • I think the bigger issue than where is the mean sea level is about what is the frequency of severe weather.

            For example near Pasadena CA we regularly get 115F weather in the summer. It’s unbearably hot but it’s only a few weeks and everyone has AC. Now suppose we start getting 135F for 3 weeks? During those few weeks every plant in the county might die. Those are temps found only in Death Valley currently.

            Over the last 5 years or so the state of California lost something like 1/3 of all living trees. Suppose a die-off like that happens every 10 years? Soon there are no trees taller than about 10 feet. You could even have *pleasant wonderful weather* for 11 out of 12 months, and then intense desert-like conditions from mid-July to mid-August… and California might become nearly uninhabitable.

            Similarly, what if instead of a Hurricane Andrew or Sandy type storm once every 10-15 years, you get 3 of them per year? The result will be that for a hundred miles from the coast, you can’t effectively build anything even though mean sea level only rose say 3m

            Extreme events wiping wide swaths of things out on a routine basis, every year or two will do major major damage even if 99% of the year it’s actually much nicer to live in. Look at a place like Haiti or Puerto Rico, and consider how relatively frequent hurricanes and earthquakes have contributed to their state of economic development (among other issues no doubt).

      • Chris, I said Michael Mann did not get awarded a Nobel Prize. You chose to contradict me, explicitly saying he and his co-authors were jointly awarded a Nobel Prize, a claim which was untrue. I find it awkward that your response to me pointing out that the claim is untrue is to congratulate me on winning a “semantic victory.” I would think the difference between winning and not winning a Nobel Prize would be something more than just semantics, and if the issue were one just of semantics, I would think it’d be something you wouldn’t have chosen to argue about.

        As for muckraking, you repeatedly focused on Michael Mann, telling people we should listen to him over the IPCC to find out what climate scientists believe. I said we should rely on the work of hundreds of scientists, not one Mann (pardon the pun). You said no, focusing on that one Mann. Since you didn’t explain why we should trust Mann over the IPCC, I took the liberty of showing why we shouldn’t. The fact I disputed the credibility of the person you focused on and cited as more reliable than the IPCC should not surprise you. It should not seem weird to you when people point out the peculiarity of you citing an egotistical fraud as more reliable than the reports created by hundreds of scientists.

        The truth is, I don’t think either of these issues is one which should have been argued about on this post. But you chose to argue the points. You shouldn’t act surprised when people argue back. You certainly shouldn’t paint people as dishonest for doing so, like you have here.

        On the topic of science, Hansen has not “proposed plausible mechanisms by which we could be facing multi-meter sea level rise before 2100.” He proposed an idea which there is no basis for and did some modeling of what that idea would lead to. There’s no actual evidence his idea is reasonable or realistic. That’s why it isn’t accepted by the climate science community as a whole and isn’t treated as a serious possibility in sea level projections. If climate scientists thought his idea was plausible, it’d have been adopted into mainstream projections. That hasn’t happened. It hasn’t happened because despite a decade of Hansen and a handful of others claiming ocean circulation could shut down because of global warming, climate scientists don’t think that is plausible at all. After a decade of Hansen making an argument and having mainstream climate science reject it, I’d say the argument isn’t plausible.

      • Terry says:

        Chris,

        Thanks for continuing the discussion about the Nobel. You have provided very useful information.
