
Taleb’s Precautionary Principle: Should we be scared of GMOs?

Skyler Johnson writes:

I was wondering if you could (or had already) weigh(ed) in on Nassim Taleb’s Precautionary Principle as it applies to GMOs?

I’ve attached his working paper with Rupert Read, Raphael Douady, Joseph Norman, and Yaneer Bar-Yam. It can also be found at his site.

See also his response to a critique from a biologist.

A search for ‘Taleb’ on your site brought up reviews of his books, but I found no mention of the Precautionary Principle.

My reply: I don’t agree with everything Taleb writes but I’m generally sympathetic to his perspective.

I liked this bit from Taleb’s response linked to above:

Many of the citations you are asking for fall within the “carpenter fallacy” that we present in the text, i.e. that discussions about carpentry are not relevant to and distract from identifying the risks associated with gambling, even though the construction of a roulette wheel involves carpentry.

This is not to say that Trevor Charles is wrong here and that Nassim Taleb is right—I feel unmoored in this whole discussion—but I do like the quote.

Speaking more generally, I suppose that Taleb’s precautionary principle could fruitfully be expressed in terms of tradeoffs. Here’s the principle:

If an action or policy has a suspected risk of causing severe harm to the public domain (affecting general health or the environment globally), the action should not be taken in the absence of scientific near-certainty about its safety. Under these conditions, the burden of proof about absence of harm falls on those proposing an action, not those opposing it.

As a statistician, I tend to be skeptical about arguments based on “the burden of proof” or “scientific near-certainty,” as they have a bit of the flavor of the one-sided bet—but what is relevant here is the idea of correlated risks.

As many observers have noted, the U.S. is in many ways a hyper-individualistic society, and social policies are often evaluated in an individualistic way. But there’s a big difference between risks that are uncorrelated or only weakly correlated in the population (for example, getting killed in a car crash) and highly correlated risks (with the paradigmatic examples being asteroid impacts and global wars).
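The contrast can be made concrete with a minimal simulation (all numbers here are hypothetical, chosen only to illustrate the shape of the argument): give every person the same marginal probability of harm, and compare independent draws to one shared draw.

```python
import random

random.seed(1)

N = 10_000    # population size (hypothetical)
p = 0.01      # each person's marginal probability of harm (hypothetical)

def independent_toll():
    # car-crash-like: each person faces the risk independently
    return sum(random.random() < p for _ in range(N))

def correlated_toll():
    # asteroid-like: one shared event either hits everyone or no one
    return N if random.random() < p else 0

ind = [independent_toll() for _ in range(200)]
cor = [correlated_toll() for _ in range(2000)]

# Same expected toll (N * p = 100), very different worst cases.
print("independent: mean", sum(ind) / len(ind), "worst", max(ind))
print("correlated:  mean", sum(cor) / len(cor), "worst", max(cor))
```

The means match, but under independence the toll hovers near 100 in every draw, while the correlated version is all-or-nothing by construction: each draw is either 0 or 10,000. That all-or-nothing tail is what the ruin argument is about.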

As Taleb has written, his own attitudes on extreme events derive in part from his understanding of what happened to Lebanon in the 1970s, when a longstanding apparent equilibrium was revealed as being unstable, and which gave him a general wariness about picking pennies in front of a steamroller.

This is not really an answer to what policy should be on genetically modified organisms, but I do think that it makes sense, for the reasons Taleb and his collaborators say, to consider these global risks associated with GMOs in a different way than we treat the individual-level risks associated with electric power lines and cancer, or whatever.


  1. NR says:

    Jon Baron once said (not about this particular argument) that the PP is an example of omission bias, since it ignores any potential benefits. Taleb’s argument is that costs can be infinite while benefits are bounded, so traditional cost-benefit analysis isn’t appropriate and we should focus on the risk of ruin, but I’m not sure that the benefits of GMOs couldn’t also be infinite, or at least very, very large.

    • Rahul says:


      People see the dangers of action but ignore the perils of inaction.

    • Stanislaw says:

      They claim that traditional cost-benefit analysis is not applicable in situations of ruin without the ability to recover, because “outcomes may have infinite costs”. And the best a risk management strategy can do is “ensuring that actions which can result in ruin are not taken”. And when ruin is inevitable, no actions can be taken at all.

      Their answer to this conundrum is the precautionary principle (PP), but if we read carefully they also give another answer, hidden in the description of the paralysis fallacy. Indirectly they claim that there is a finite utility a single individual can obtain in his or her lifetime. Extending this reasoning to the aggregate utility of all potential individuals over the lifetime of the whole universe, it is reasonable to argue that all of those are bounded. Supposing the goal is to maximize the aggregated utility of all individuals over their lifetimes, the inevitable ruin merely prevents us from further increases in utility, instead of leading us to negative infinity. Thus traditional cost-benefit analysis is appropriate as usual.

      It seems to me that they don’t really give any argument for why those outcomes would have infinite costs, and without this assumption their whole argument fails. Of course, the question of whether the PP is of any help in this situation is a completely different thing.
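Stanislaw’s bounded-utility reading can be checked with a toy calculation (every number below is hypothetical, chosen only to illustrate the shape of the argument): if ruin means utility simply stops accruing rather than going to minus infinity, expected aggregate utility stays finite and ordinary cost-benefit arithmetic goes through.

```python
# Toy model: utility accrues at `gain` per period until ruin occurs;
# ruin contributes zero thereafter, not minus infinity.
p_ruin = 1e-4     # hypothetical per-period probability of irreversible ruin
gain = 1.0        # hypothetical utility gained per period while un-ruined
horizon = 10_000  # hypothetical finite horizon

# Each period contributes `gain` weighted by the chance that ruin
# has not yet happened by that period.
expected_utility = sum(gain * (1 - p_ruin) ** t for t in range(horizon))

cap = gain * horizon  # the no-ruin ceiling on aggregate utility
print(expected_utility, "out of a ceiling of", cap)
```

Raising `p_ruin` lowers the expected total, but nothing ever diverges to minus infinity; the cost of ruin is bounded by the utility forgone, which is exactly the reading Stanislaw draws from the paralysis-fallacy passage.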

      • James C. Whanger says:

        Ruin appears to be a term used for a potential catastrophic outcome. It is difficult for me to understand how traditional cost-benefit analysis is adequate to this. Unless you are arguing that traditional cost-benefit analysis would assign a weight to the potential catastrophic outcome that would essentially nullify all other possible utility, and thus the result is essentially the same as the PP perspective.

  2. Rahul says:

    A part of this is the absence of guilt by inaction. Sure, we could shun GMOs out of an overabundance of caution. Or we could demand exceedingly high standards of certainty.

    But the flip side is that we live in a world with a lot of underfed and impoverished people. I think GMOs offer a very promising tool in the array of weapons to increase standards of living in the developing world.

    I guess it depends on your vantage point. For a prosperous American, the hypothetical, remote, existential risk posed by GMOs would outweigh any heartache about the disadvantaged third-world poor.

    • Z says:

      You could ban them in America but allow them in other places.

      • D.O. says:

        Why? Either GMOs will ruin everything or they are not a big deal. If they will ruin the whole Earth ecosystem, who cares which continent is destroyed first? If not, why should America not use them?

    • afeman says:

      Promising tools are hypothetical as well. Are there demonstrated advantages to GMO solutions to food security in developing regions? Golden Rice is used as an example, but the promotion of it seems to rarely come from parties primarily interested in food security. Is that changing?

      • Rahul says:

        There’s a lot of politics & vested interests in the picture which convolutes the narrative. I’m not a Monsanto fan. The law in the GM area could do with a bunch of reform. One can like the technology but dislike the company.

        To me the fundamental question is whether a particular GM crop gives higher yields, ceteris paribus, versus the status quo.

        Are you contesting that, at least in some cases, there are definite productivity advantages to going GM?

        • afeman says:

          Not sure why the M-word got invoked. I’m asking what definite productivity (or other) advantages have been demonstrated. I’ve heard of papayas being successful, but corn and soybeans not.

          • Rahul says:

            Here’s one recent review in PLOS ONE by Klümper & Qaim from Goettingen:


            They conclude:

            “On average, GM technology adoption has reduced chemical pesticide use by 37%, increased crop yields by 22%, and increased farmer profits by 68%.”

            Sounds like pretty strong evidence of a productivity increase to me.

            • Andrea says:

              I would agree with you. However, Taleb et al. reply specifically to that citation by saying that “The article you cite considers an entirely different question which is the level of pesticide use in general, i.e. not just that directly applied to a crop. There are a wide variety of effects of GMO crops on the use of pesticides. Some of them change the total amount but increase the amount applied directly. The trend has become toward increased pesticide use even in this case because of the natural evolution of pesticide resistance in the weeds. Here is a relevant reference:

              S. B. Powles, Evolved glyphosate‐resistant weeds around the world: lessons to be learnt, Pest management science 64.4 360-365 (2008).

              See also summary in

              In other parts of their reply, they stress that even if corn producing Bt toxin allows a reduced use of pesticide in agriculture, the amount of pesticide you get on your plate is higher because the corn itself produces it. In other words, they claim that although the total use of pesticide in agriculture has decreased, as explained in the paper you cite, the amount of pesticide we eat may have increased. I don’t know if that’s true, but it may make for an interesting research topic.

              • Foster Boondoggle says:

                What’s curious about this case along with a lot of the rest of their discussion is how much time they spend on risks having nothing to do with “ruin”. There’s a “Gish Gallop” of claims about possible subtle health effects that could perhaps be real (though this is one place where the biological domain knowledge they dismiss becomes relevant), but none of it relates to the possibility of ruin, unless you presume that the hidden health effect might be that we all become sterile (including the “us” that don’t eat anything genetically engineered). This is particularly noticeable in the response to Trevor Charles, which reads more or less like a bog-standard compendium of “might” and “could” claims of health impacts from an anti-GE NGO. To take your specific example, why does the amount of pesticide in our food have anything to do with risk of ruin?

                None of the arguments of this type do anything at all to support the claim of potential catastrophe. That they are made at all shows that the motivation of the authors is just that they don’t like GMOs and are looking around for reasons to support their feelings.

  3. Z says:

    ‘global risks associated with GMOs in a different way than we treat the individual-level risks associated with electric power lines and cancer’

    Why are electric power line risks individual-level? Aren’t they also environmental risks? Is it because people can choose whether to live near power lines? But can’t they also choose not to eat GMOs?

  4. Jonathan (another one) says:

    I’m not a huge fan of Cass Sunstein in general, but his book on the Precautionary Principle (summarized in the earlier article here) is required reading.

  5. Z says:

    I think the problem with applying the precautionary principle is defining a ‘suspected risk’. I think we only want the burden of proof to shift to demonstrating safety once the suspected risk has passed some minimal evidence threshold. You can find cranks who believe that anything might cause anything, and in the vast majority of cases we don’t have conclusive proof that the cranks are wrong, but we don’t let that stop us from doing everything. Where the ‘take the risk seriously’ threshold lies will depend on prior beliefs and vary from person to person. In the case of GMOs, it seems that most biologists assign low prior probability to GMOs being dangerous because there’s no known likely mechanism, so they don’t take that risk seriously given the scant evidence so far.

    • Ben says:

      This is a nice idea, but the problem then reduces to one of expertise vs interest. Nuclear scientists assigned low probabilities to risks of radiation for many years, at least in part because the science was interesting and their careers depended on it.

      I’m not a biologist, but in the case of GMOs it’s often forgotten that the ‘risks’ are not simply limited to the biological… if they are more productive, they also hand further control over food production to a small number of people running high-tech firms in the US. The risks of that to the livelihoods of small farmers in developing countries are not clear, but they are one of the primary objections to GMOs among some left-leaning critics.

  6. Martin says:

    I find the discussion about the application of a precautionary principle more interesting in the context of a topic where one must actually do (rather than not do) something. It’s fine to say ‘actions should not be taken’ when it comes to GMOs – but it is hard for me to grasp what that means w/r/t climate change, where policies have to be implemented to change the status quo, which bears the risk of global catastrophe. But by how much should it be changed? This is the typical question related e.g. to what the policy implications of Weitzman’s Dismal Theorem are.

    So, what are the consequences of a precautionary principle that would require us to actively do something, other than “We have to do something. Now. And a lot.” Are there any? Concerning AGW, I have the unfortunate impression that people keep using a precautionary principle as a mere backdrop against which to demand whatever policy preferences they have anyway.

  7. Brian says:

    Taleb’s precautionary principle? Really?

    I scanned the paper; his formulation seems to suffer from the usual problems of relying on vague terms and ultimately being incoherent. As others have pointed out, Sunstein and others highlighted these issues decades ago, but Taleb doesn’t seem much of a guy for literature reviews.

    • Rahul says:

      Is anyone on that author list actually a subject expert? i.e., any geneticists, agronomists, plant biologists, etc.?

      Another telling sign is the citation list. I read 26 citations but cannot see even *one* that has anything to do with GM organisms, agriculture, pesticide resistance, etc.

      Taleb is annoying.

      • Brian says:

        No subject matter experts; I think they are all maths / expert systems guys (and a philosopher).

        In fairness, they do refer to one of Sunstein’s working papers, which (from memory) they misread and dismiss with the following bizarre statement:

        “[Sunstein’s] method of analysis misses both the statistical significance of nature and the fact that it is not necessary to believe in the perfection of nature, or in its “benign” attributes, but rather in its track record, its sheer statistical power as a risk evaluator and as a risk manager in avoiding ruin.”

        So nature is statistically significant, a risk evaluator, and a risk manager with sheer statistical power.

        Intriguing. I will need time to digest this.

      • D.O. says:

        Didn’t you get the “carpenter fallacy”? We need no carpenters to tell us about roulettes. Even if it’s not a roulette, but a trap-door. We will simply discuss the statistical properties of revolving trap doors.

    • Ricky says:

      Oh please. Carpenter fallacy for sure. Probability theory is the logic of science, just seems like you’re trying to get your hate across for no reason.

  8. Roy says:

    I believe Taleb’s argument goes a little deeper with respect to GMOs. He argues that in the case of GMOs we are making a fundamental change to a complex system, and as such it is impossible to test for what the response will be; the response may well not be seen by humans till many years later, and the magnitude or effect of that response on humans is unknowable until it happens, and by then it may be irreversible. So testing the direct effect of GMOs on humans is beside the point (there is one web site that mocks Taleb on this basis), as that is not what he argues. He also puts GMOs into the category of things that have limited upside (you could argue this point; I am not commenting on whether he is correct, just what his argument is) and potentially very large, and in these cases unknowable, downsides. These are the situations where he recommends using the PP.

    As an example of the problem of complex systems, suppose (and this is purely hypothetical) that GMOs start off a chain of events in ecosystems that kill off bees over a decade. Now that would have a serious downside that wouldn’t be seen immediately. Even more, the number of such possible scenarios is quite large, and one of his main points is that we don’t understand complex systems well-enough and never will.

    You don’t have to agree with the line of argument, and many people get turned off by Taleb’s name calling and smugness, but there is some reasonable thinking in his position. It comes down to what should be the course of action when we can neither enumerate the possible outcomes nor estimate their likelihood.

  9. Olin says:

    I have never understood how people who claim to have thought deeply about GMOs can equate GMO tomatoes modified with tomato genes (e.g. Flavr Savr) with tomatoes modified with fish genes (e.g. flounder). This only suggests to me that they haven’t really thought deeply, and cannot admit to themselves that most things are not black and white.

    • Foster Boondoggle says:

      Because there’s no such thing as a “tomato gene” or a “fish gene”. We share a large fraction of our genes with both organisms. Are we part fish and part tomato?

      • Olin says:

        Sorry, I should have clarified (and this is getting a bit off topic). By tomato gene I mean a piece of DNA that is already in the organism in question (i.e. that specific strain of tomato). I don’t mean a gene that is homologous to that gene but is in a fish, or in a human, or even a gene that is homologous to that gene but is in another strain of tomato. Like if I were to make a clone of myself, but add an extra piece of DNA that was already present [in myself]. This is the Flavr Savr.

        As an addendum, if there were a gene that had only ever been found in, say, flounder, and which had never been found in any other organism ever studied (and it is well established that there are plenty of these genes), surely you would consider this a “flounder gene” or something close to it?

  10. MFK says:

    I get his perspective, but I guess a controversy in the absence of a causal mechanism isn’t worth paying attention to. If there were reasons to suspect that GMOs are dangerous, even without proof, that would be one thing. But it’s not at all clear that’s the case.

  11. Corey says:

    I prefer to call the Precautionary Principle by its original name, Pascal’s Wager.

    …Aw dammit, and here I was thinking I was so clever. Originality fail.

  12. Foster Boondoggle says:

    Note just as a matter of polemical style that invoking the “carpenter fallacy” is really just a mechanism to instantly dismiss anyone with actual domain knowledge from the discussion. As others have noted, the carpenter fallacy is itself fallacious: if you want to know whether or not the roulette wheel is fair, it is probably helpful to understand a bit about its design and construction. Only for an idealized (not real) wheel is specific domain knowledge irrelevant.

    As Z said, there has to be some threshold level of plausibility met for one to take the argument seriously. It has to be more than “something terrible *could* happen”, because something terrible could always happen. Remember the lawsuit trying to prevent the LHC from starting because it might make a black hole & destroy the earth? Rejected on grounds of standing (ha!) but the science was clear. Here the argument is predicated on “GMO” being “systemic” where that term is never clearly defined. And there’s the argument/claim that all GMOs have something in common, and that it’s fundamentally different than, say, mutation. The only thing they have in common is that they’re done deliberately rather than randomly. This seems to be the basis for the claim: do something deliberately and it’s systemic and therefore potentially infinitely risky, while natural mutation is not. Doesn’t seem like there’s a lot of substance to that argument.

    • Rahul says:

      Taleb does this sort of thing quite often. He has this attitude that his complex-systems approach can dispense with the domain expert.

      e.g. This Working Paper refuses to get its hands dirty at all by touching any of the actual subject matter details of GMO risks & danger modes.

    • Andrea says:

      Very well said. Often it looks like Taleb is belittling the domain experts, which hardly leads to constructive criticism.

    • Matt says:

      Exactly. He never provides a methodology to identify which risks are systemic or catastrophic. He seems to assume it’s obvious. But if there’s disagreement, and if domain expertise is irrelevant, then there’s no way to resolve anything.

    • James C. Whanger says:

      Perhaps you are missing the point that the decision structure as proposed is intended to be at a higher level than domain knowledge and that domain knowledge would indeed come into play as directly stated here.

      From Taleb et al p. 2:

      2.1 What we mean by a non-naive PP

      Risk aversion and risk-seeking are both well-studied human behaviors. However, it is essential to distinguish the PP so that it is neither used naively to justify any act of caution, nor dismissed by those who wish to court risks for themselves or others. The PP is intended to make decisions that ensure survival when statistical evidence is limited—because it has not had time to show up—by focusing on the adverse effects of “absence of evidence.” Table 1 encapsulates the central idea of the paper and shows the differences between decisions with a risk of harm (warranting regular risk management techniques) and decisions with a risk of total ruin (warranting the PP).

      • Rahul says:

        The devil is in the details. How do we quantify “the risk of total ruin”? That’s where the domain expert comes in ideally.

        Now, do we interpret Taleb as saying that any risk of total ruin, no matter how small, needs to be shunned? If so, public policy would immediately ban most fossil fuel power plants, disallow the LHC, etc.

        But I don’t think we want to do that.

        So then, Taleb needs to propose a more nuanced application of his principle. But I don’t see that at all. In fact, if you shun the domain expert’s inputs I’d be surprised if you can progress at all because all you are left with is a broad, risk-measure-agnostic Yes-No decision.

        • James C. Whanger says:

          I don’t think you can reasonably read the sentence, “However, it is essential to distinguish the PP so that it is neither used naively to justify any act of caution, nor dismissed by those who wish to court risks for themselves or others.” and interpret it as “Now, do we interpret Taleb as saying that any risk of total ruin, no matter how small, needs to be shunned?”. I do not see where anyone, including Taleb, is arguing to shun domain expertise. In fact, it is built into the process and essential to determining whether or not something can be appropriately assessed with his decision model.

          Much of the discussion on this blog seems to be of the “I don’t like Taleb and therefore I don’t like anything he proposes” variety. The fallacious reasoning is quite apparent, but I do not see it in Taleb’s arguments.

          • Rahul says:

            So if Taleb is not shunning domain expertise, then why write a 20-odd-page whitepaper on GMOs without a single subject expert on the author list?

            Or why have 20-odd citations on everything from psycho-surgery to cosmetic surgery but hardly any work directly pertinent to GMOs?

            Why adopt a position that targets GMOs as a monolithic whole instead of grappling with the nuanced and specific risks of the many individual flavors of GMO?

    • Ricky says:

      No, it’s a way to make sure that people who would contribute toward tail risk *will not* be able to casually dismiss anyone’s opinions. Taleb has been right about a lot through the years and he has the respect of the author of this blog and many others (Kahneman); these sorts of dismissals are not sufficient.

      Are you willing to put your reputation on the line if GMOs actually do contribute toward tail risk? Call it skin in the game. Probability theory is the logic of science for a reason, right?

  13. Tom Dietterich says:

    The biggest fallacy in this whole discussion is the idea that “GMOs” is a well-defined class of objects (or actions) with a well-defined risk. Genetic modification is a technique that can be applied in a whole range of situations. We make life-saving drugs by genetically modifying bacteria. We make pest-resistant crops. We make sterile flies and mosquitoes. Some scientists are even experimenting with making disease agents more virulent. These systems are vastly different and pose very different risks and benefits. I think the appropriate approach is to analyze each case separately and of course take into consideration not only the gene that is being manipulated but also the larger economic and ecological risks. Treating all GMOs the same is like treating all applications of plastic the same.

    In some systems where the outcomes are highly uncertain, a risk-sensitive decision-maker will choose not to deploy the technology but instead to study the system further to reduce the uncertainty. I don’t think this requires appeals to ruin problems and notions of infinite risk.

    (I was surprised to learn recently that Monsanto is one of the largest suppliers of seeds for organic farming. Sometime we should have a separate discussion on the risks of organic farming. Many more people have been made sick by organic vegetables than by genetically-engineered crops! I guess Monsanto is out to kill us off one way or another :-)

    • whoo with all these comments I was beginning to be shocked that no-one had said this yet. I’ll just +1 it.

    • Ricky says:

      “The biggest fallacy in this whole discussion is the idea that “GMOs” is a well-defined class of objects (or actions) with a well-defined risk”

      The entire point of risk management is to be able to use it in the face of incomplete knowledge and uncertainty. That’s what black swans are, things you do not know.

      • The point is, with GMO foods, we know what we’ve done to them, and what we’ve done varies a lot. Adding beta carotene genes to rice is very different from adding Bacillus thuringensis toxin genes to soybeans is very different from adding an extra copy of a gene already found in tomatoes back to those tomatoes with a different promoter sequence to upregulate that gene… is very different from adding genes that resist a manmade herbicide (Roundup-ready).

        Selection of natural variants has already been shown to have similar effects. Selecting a seed for some fruit that happens to be infected with a virus that incorporated some gene into the genome of the plant that happens to produce effects we like (flavor, or frost-resistance or something) isn’t really very different from inserting the gene using an engineered virus, except we can pretend that the first one is “natural” because we don’t know how it got there… until someone comes along and figures it out decades later.

  14. phayes says:

    “This is not really an answer to what policy should be on genetically modified organisms, but I do think that it makes sense, for the reasons Taleb and his collaborator say, to consider these global risks associated with GMOs in a different way than we treat the individual-level risks associated with electric power lines and cancer, or whatever.”

    What global risks? It seems to me that Taleb et al’s ill-informed GMO doom-mongering makes less sense than the LHC black hole / strangelet doom-mongering did, and is less relevant to policy.

    • Steen says:

      As someone who has made transgenic plants, I second this comment, and the one by Tom D. immediately above. Transgenesis is a natural process (see recent PPNAS paper—it appears that all the sweet potatoes we eat are naturally “GMO”), well understood and far more precise than many plant breeding methodologies such as radiation mutagenesis (e.g. ruby red grapefruit) and colchicine-mediated wide crosses (e.g. triticale).

      I invite anyone who agrees to endorse a petition started a few days ago:

      Perhaps I am too emotionally invested in this topic, but I think that the current regulation is excessive for the small correlated risks involved, and that the high cost of approval has promoted consolidation in the industry. See e.g.

      Andrew lists ‘electric power lines and cancer’ as individual-level sources of risk, but I think transgenesis is more comparable to ‘electronics’ as a technology or ‘synthetic chemicals’ as a technology.

      Public debate on this topic will be more interesting once transgenic products that benefit consumers (e.g. nonbrowning apples, reduced acrylamide potatoes) hit the market.

      • phayes says:

        European regulatory policy has suffered the attentions of the irrational and unethical anti-GM movement too:

      • G. says:

        The response that “GM is more precise” always tends to come up, and it always bothers me.

        All agree that GM has the potential to improve the current state of crops, and to provide us with crops that would not be possible (or would be very difficult) to breed via non-GM techniques. But that implies that the properties of GM breeding differ from those of non-GM breeding, especially in the traits that are possible; it also implies that the way unexpected or unwanted consequences show up differs between GM and non-GM breeding.

        Now, non-GM breeding is, in my layman’s understanding, approximately the same as acceptance-rejection sampling: shuffle the genomes (usually multiple different genomes) and/or include mutations, check whether the shuffling leads to a desirable gain of function; if so, accept the sample, and repeat.
        GM breeding, on the other hand, is more akin to looking through the genome and saying “huh, adding this snippet here should create an interesting/positive effect.” Add the snippet, evaluate the effect, and start using the new variety, or try something else.

        In the first case, we have a stochastic element; both gains and side effects should follow some sort of probability distribution. Plenty of breeding throughout the years gives us an idea of the imprecision, likely gains, etcetera. In the second case, there is no such stochastic element; the expected effects are governed by the precision of our understanding of how the biological process works, and (unexpected) side effects are governed by the gaps in our knowledge and understanding. In a way, you are right that it is more precise, but it is also more random, because there is little to tell exactly what unexpected synergies may or may not crop up.

        Even if you do not agree that the second approach is *more* dangerous, surely you have to agree that the pattern in which problems and issues show up will be inherently different between GM and non-GM breeding?
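G.’s accept-reject picture can be sketched in a few lines of toy Python (entirely schematic: a “genome” here is just a list of numbers and “fitness” a score; none of this is real biology):

```python
import random

random.seed(0)

def fitness(genome):
    # stand-in for the trait being bred for
    return sum(genome)

def conventional_breeding(genome, generations=200):
    # acceptance-rejection sampling: random small perturbations,
    # keeping a candidate only if the trait improves
    for _ in range(generations):
        candidate = [g + random.gauss(0, 0.1) for g in genome]
        if fitness(candidate) > fitness(genome):
            genome = candidate
    return genome

def gm_style_edit(genome):
    # targeted change: one large designed insertion in a single step
    edited = list(genome)
    edited[0] += 5.0
    return edited

start = [1.0] * 10
bred = conventional_breeding(start)
edited = gm_style_edit(start)
print(fitness(start), fitness(bred), fitness(edited))
```

The stochastic route only ever accepts draws it has already evaluated, so surprises surface during selection; the targeted edit reaches further in a single step, and any surprise surfaces only afterward, which is the asymmetry G. is pointing at.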

  15. dax says:

    Contra the headline, it is not Taleb’s Precautionary Principle. The Precautionary Principle dates from Rio 1992 or earlier, even much earlier, if you like.

  16. Alex says:

    Yeah, I can’t say I’m impressed overall. Taleb’s argument against GMOs seems to boil down to the claim that they’re a top-down, systemic change. But simply being top-down can’t be sufficient; a lot of human endeavor involves top-down changes. And to claim that GMOs are systemic as opposed to local requires some kind of biological mechanistic argument. So the carpenter fallacy goes out the window. He seems to recognize this is the case because in the reply to comments link, he provides a citation when that exact question is asked (comment 3). He also says that simply noting system connectivity is sufficient, but this can’t be true because he says that lots of aspects of our current world are connected in ways they weren’t previously, but his concerns seem limited to GMOs. I wonder, for example, if Taleb is concerned about the internet (maybe emotional contagion via Facebook?).

  17. Peter Dorman says:

    I’m coming to this discussion very late, and maybe everyone else has already signed off. I might be talking to myself, but here goes.

    1. I don’t know very much at all about GMO technology (or technologies). My general sense of nervousness comes more from the side of who’s doing it rather than what “it” is. I am less comfortable with any private company doing this stuff (not only the M people) than public sector researchers, although even here I worry that academics might have future economic payoffs in their eyes. The point is that externality risks of the sort that Taleb is concerned with are likely to be downplayed by those who are guided by their private returns. That’s true with any technological development, of course, but Taleb’s point, echoed by Andrew, is that GMOs appear to belong to a class of experimental techniques where downside externality risk is relatively substantial.

    But this is an argument about incentives, not about the PP.

    2. I’ve been thinking about the PP since before it existed. (I didn’t know it was the PP back then.) What I was trying to understand was how to rethink regulation in a world in which most regulations, especially for public health and the environment, are systematically tightened over time. If you have a threshold limit for a contaminant equal to x, and then a few years later you reduce it to ½ x, it means that your initial regulation was too loose. Of course, you didn’t know that back in the day. But what if this is a pattern, so that you are nearly always going from x to ½ x and hardly ever from ½ x to x? Knowing this pattern, shouldn’t you change how you regulate?

    Information-efficient regulation should yield unpredictable changes in regulatory standards in response to new information, just as an efficient market should yield unpredictable price movements.

    Where I ended up (and wrote this in an article published in Ecological Economics several years ago) is that intelligent regulation requires that we analyze what we don’t know as well as what we do. What I didn’t express very clearly is that translating this into a PP can’t really be algorithmic; it depends substantively on the field in which our knowledge and ignorance lie. There are some fields in which no particular pattern to our learning over time is apparent, and in those instances we can say our ignorance is unbiased and focus only on the information we already have. There are others where our ignorance is palpably biased, where new information predominantly points us further in the same overall direction. I’d argue that most ecological and public health issues lie within that domain. When figuring out what to do with a new technology, we should take into account not only our assessment based on current knowledge but also the likelihood that knowledge we acquire in the future will cause us to weight the externality risks more heavily.

    3. I think this can be expressed in Bayesian terms, but I’m not the best person to do this.
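      One way to express point 2 in Bayesian terms: under coherent updating, the expected revision of your belief is zero (the posterior mean is a martingale under the prior), so a regulator whose standards *predictably* ratchet in one direction was not fully using the available information. A minimal simulation sketch of the calibrated case, with invented numbers (normal prior on a "true safe threshold," one noisy study, conjugate update):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Prior over the true safe threshold theta: Normal(mu0, tau0^2).
      mu0, tau0 = 1.0, 0.5
      sigma = 0.5          # noise in each new study's estimate of theta
      n_sims = 20_000

      # If the prior is well calibrated, true values really are drawn from it.
      theta = rng.normal(mu0, tau0, n_sims)
      y = theta + rng.normal(0.0, sigma, n_sims)   # one noisy study per world

      # Conjugate normal-normal update: posterior mean shrinks y toward mu0.
      w = tau0**2 / (tau0**2 + sigma**2)
      post_mean = mu0 + w * (y - mu0)

      # Average revision of the standard across worlds: near zero if calibrated.
      drift = (post_mean - mu0).mean()
      print(f"mean revision: {drift:+.4f}")
      ```

      If, empirically, revisions are almost always in the tightening direction (x to ½x, in Dorman's example), that nonzero drift is evidence the prior was biased, and a precaution-style correction amounts to building the predictable drift into today's standard.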

    4. Coming from this perspective, I am not impressed by claims that, based on what we know today, some particular GMO product looks to have expected net benefits—the cost-benefit criterion. I also want to know, how have our assessments of this or related products changed over time, and what reasonable expectations do we have regarding the questions pertaining to it over which we are currently ignorant? At the very least, I’d like to see an accounting of what we know or suspect we don’t yet know.

    5. I realize this is about what I think and not about Taleb. My apologies.

    • Andrew says:


      Only in blog time is a 2-day delay “very late”!

      If it makes you feel any better, I wrote this post months ago and it’s just been sitting in the queue for a while.

    • Steen says:

      Forget this application of the PP—let’s talk about research on Potential Pandemic Pathogens, especially so-called “gain-of-function” experiments on influenza virus and the like. The White House imposed a moratorium and a firm has been hired to do a formal risk-benefit analysis. In this case caution is preventing us from learning some of the things we know we don’t know that we would like to know.
      Many links here:

      • E. Somanathan says:

        This discussion needs to be set in context. If you live in northern India, as I do, you are already living in the middle of a nutritional catastrophe. To well-fed Europeans who occupy most of the media space, it may seem that the gains from agricultural improvements are negligible. Actually, they are a possible path out of a catastrophe. See
        However, Greenpeace and others create doubt in order to paralyze action and they have been very successful in this strategy just as some oil companies have been with regard to climate change. Poor children in poor countries who will stay malnourished are just collateral damage. The precautionary principle comes in handy.

    • Anonymous says:

      Time for some context here. As Rahul pointed out above, the discussion on GMOs is skewed by well-fed Europeans who occupy most of the media space. If you live in northern India, as I do, you are already living in a catastrophe. Preventing actions that could be taken to get people out of it, without strong evidence of further dangers to others, is ethically questionable, to say the least. And this is what a lot of people are doing. What the linked article shows is that the precautionary principle is used as a strategy to cast doubt on action that can be taken to address malnutrition—for reasons that have nothing to do with malnourished children. They are simply collateral damage. It is not very different from coal and oil companies’ past media strategies with respect to climate change.

  18. cheese_d says:

    On the whole I thought this was a very good discussion.

    I am not totally against the precautionary principle at this point… but I do worry that invoking it frequently tells you more about the myopia of the speaker than it does about reality and good decision making. (From this vantage I’d tend to follow AG’s one-sided bet concern -> Pascal’s Wager.)

    I also think AG’s concern for dealing with highly scalable, global risks differently than the local electric lines is very much on the mark.

    I’d close by pointing out that the precautionary principle itself may be a highly scalable (and dangerous) tool. Here is an excerpt from an excellent talk by Tetlock et al. from a couple of months ago.
    Sutherland: The probability is not on its own a decent basis for a decision for the simple reason that there are things like the precautionary principle which might mean that a 5 percent chance of Iraq possessing WMD might be sufficient to justify invasion.

    Kahneman: Well, Cheney went further—any nonzero.

    • John Thacker says:

      Certainly there are people invoking the precautionary principle to avoid taking any Syrian refugees – Taleb might even be sympathetic to that given his Lebanese experience.

    • gregor says:

      I don’t think preemptive war is a good example of the PP, as advocated by Taleb. The key idea is that we don’t understand how complex systems work and therefore ought to exercise caution when tampering with them. Preemptive war seems like the opposite of Taleb-style thinking because it assumes you can interfere with a complex system (stage a coup in a foreign country) and yet get predictable results (“democracy,” etc.). If a foreign enemy intends to harm you, all action or inaction is risky, and there’s no universal strategy for optimally mitigating that risk.

      Second, the focus is on risks that are systemic, global, and irreversible. Taleb argues that risks to the global food supply meet these criteria and tampering with the food supply needs to meet a high burden of proof for safety. I don’t think Iraq could reasonably be said to have posed this sort of risk. If they had had a nuke and had managed to drop it on Los Angeles (an exceedingly far-fetched scenario), that would have been very bad, but it would not have been a total ruin event.

  19. Eli Rabett says:

    The problem with discussions about genetic manipulation is that it is a wide-ranging technology being used for many purposes but talked about as a single thing. What is the tail risk with computers? Hint: given how they have penetrated every area of our lives, it ain’t zero.

  20. Ragnar says:

    As I read it, in effect, this principle suggests that anything which might harm an entity someone defines as “the public” must be forbidden unless that same someone is convinced that a “scientific consensus,” as defined and assembled by that someone, says it will not cause harm.

    This appears to give near carte-blanche authority for government to intervene in virtually all human and non-human activities, and great authority to those who get to define “harm,” “the public,” and “scientific consensus.” People may imagine that such decisions will be made by “disinterested” public servants and scientists wearing white lab coats, but I have a different opinion of the sort of people likely to drive such policy. These are the sorts of things Henry Hazlitt (and Hayek, for that matter) would have been highly opposed to. Hazlitt might have favored a principle for government intervention in economic affairs that essentially translates to “first, before acting, make sure it will do no harm.”

    In practice, this principle has something in common with Pascal’s Wager. Whoever creates the scariest scenarios (the greatest possible downside) relative to the finite upside of actual positives, wins. On this framework, international travel also seems fairly insupportable, as it enables deadly viruses to spread globally and potentially destroy the whole world, rather than just one continent or country. Or global financial trading, which makes global financial collapse possible, despite the finite tangible good it provides every day.
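    The Pascal’s Wager structure can be made concrete: in a naive expected-value calculation, whoever can posit an arbitrarily large downside wins at *any* nonzero probability. A toy sketch with entirely invented numbers:

    ```python
    def expected_value(p_catastrophe: float, downside: float, upside: float) -> float:
        """Naive expected value of taking the action."""
        return p_catastrophe * (-downside) + (1.0 - p_catastrophe) * upside

    upside = 100.0   # finite, tangible benefit of the action
    p = 1e-6         # tiny probability assigned to the scary scenario

    # However small p is, a large enough claimed downside flips the sign.
    for downside in (1e3, 1e6, 1e9):
        print(f"downside {downside:>10.0e}: EV = {expected_value(p, downside, upside):+.3f}")
    ```

    This is why the calculation is won by whoever tells the scariest story: the claimed downside is unbounded while the upside stays finite.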

    I suppose I could agitate for a counter-precautionary-principle action, claiming that without GMOs an increasing number of the public would starve to death due to declines in agricultural productivity, that we wouldn’t know until it was too late, etc., and I could probably find a “scientific consensus” of agricultural scientists to agree with this.
