
The bullshit asymmetry principle

Jordan Anaya writes, “We talk about this concept a lot, I didn’t realize there was a name for it.” From the Wikipedia entry:

Publicly formulated the first time in January 2013 by Alberto Brandolini, an Italian programmer, the bullshit asymmetry principle (also known as Brandolini’s law) states that:

The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.

It became especially popular after a picture of a presentation by Brandolini at XP2014 on May 30, 2014, was posted on Twitter. Brandolini was inspired by reading Daniel Kahneman’s Thinking, Fast and Slow right before watching an Italian political talk show with journalist Marco Travaglio and former Prime Minister Silvio Berlusconi attacking each other. A similar concept, the “mountain of shit theory”, was formulated by the Italian blogger Uriel Fanelli in 2010, roughly stating the same sentence.

Brandolini’s law emphasizes the difficulty of debunking bullshit. In contrast, the faster propagation of bullshit is an old proverb: “a lie is halfway round the world before the truth has got its boots on”.

Two questions then arise:

1. Is this principle true? Or, more specifically, when is it true and when is it not?

2. To the extent that the principle is true, where is it coming from? I can think of a couple theories:

a. Asymmetry in standards of evidence: it’s much easier to suggest that something might be true than to demonstrate conclusively that it’s not the case. For example, consider “cold fusion”: A single experiment with anomalous results got lots of attention, but it took a lot of effort to figure out what went wrong.

b. Ethical asymmetry: The kinds of people who bullshit are more likely to be the kinds of people who misrepresent evidence, avoid correcting their errors, and intimidate dissenters, so at some point the people who could shoot down the bullshit might decide it’s not worth the trouble: Why bother fighting bullshit if the bullshitters are going to turn around and personally attack you? From this standpoint, once bullshit becomes “too big to fail,” it can stay around forever.

P.S. In comments, Kaiser writes:

I have to speak up for the other side. Brandolini’s Law is false.

Counterexample: it takes mathematicians very little time to shoot down “obviously wrong” claimed proofs of any number of unsolved problems. Many statistical errors are also relatively easy to spot – and surely the researcher spent more time manufacturing the evidence (I’m thinking Wansink).

Secondly, the claim of an order of magnitude difference is absurd. I just proved my first point.

Good point about Wansink. It took him decades to construct a palace of bullshit. Sure, it took effort for the skeptics to reveal the emptiness of this edifice, but the effort of this debunking was still much less than the effort of the original construction.


  1. Jonathan (another one) says:

    I agree with both theories. Here is a third: entropy. There are many more ways to be wrong than to be right. While all (true) data is consistent with the truth, it is also consistent with large swaths of false-space, simply because false-space is so large. Bullshit that sits in a hard-to-refute section of false-space is hard to differentiate from truth, but really easy to find, and therefore defend.

    • Jonathan (another one) says:

      As I think about it, this is sort of a sub-species of your point (a).

      There is another reason. People asserting bullshit have an (undeserved) first-mover advantage. You’ve discussed this before when showing that people have a strong preference for the first article, while the subsequent refutations are just the ravings of small-minded nitpickers.

  2. Anonymous says:

    Quote from above: “The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.”

    Funny/interesting sentence in relation to large replication efforts like “Registered Replication Reports”, and the tons of resources that are used (and possibly wasted) in doing them.

    I reason that tons of energy and resources possibly get wasted by giving attention (a second time around) to “bad” research and “bad” researchers.

    I even think these efforts to “correct” matters may actually start the problematic cycle again. For instance, see my comment concerning a recent example of the consequences of such a large replication, and the possible start of the problematic cycle again here

    (Now, to possibly prevent/solve all this, I could only think of the following idea. The idea tries to do many things, one of which is to try and circumvent spending time, energy, and other resources on “bad” research and researchers.

    This is attempted by providing a research and publication format that tries to be of high quality, and reinforces the things that make science fun and worthwhile (e.g. theory building). Here is the idea + some thoughts on how it could possibly stop spending more resources to “refute bulls!t”:

  3. Thanatos Savehn says:

    Do a search on “Morgellons disease” and you’ll find part of the answer. A misguided attempt by a TV news show to explain how an evidence-free non-disease had generated buzz in some diverticulum of the www managed instead to produce a surge of people discovering strange fibers growing out of their skin. More than a decade later, boots long since laced up, the truth seems if anything to have fallen further behind. Good stories trump good evidence.

    • Sorry, but international travelers now know that they should flee any physician who uses the archaic psychoanalytically tinged diagnosis of “Morgellons disease.” The symptoms that are often attributed to this imaginary condition can, with careful interviewing, be related to exposure to scabies, now endemic to youth hostels and even fancy hotels. The diagnosis of Morgellons disease leads to prescription of an anti-anxiety or even antipsychotic medication. The appropriate diagnosis of scabies leads to effective treatment with permethrin. “Morgellons disease” has become an example of strongly held archaic views trumping evidence.

      • >Sorry but international travelers now know that they should flee any physician

        *ought to know*, perhaps, but do you have any evidence that a large majority of them *do* know this? I certainly have *never* heard of “Morgellons disease”, and so if faced with a doctor telling me about it, I’d certainly have to look it up. If I did so on the internet I wouldn’t get “Morgellons disease is a false diagnosis for scabies infections”; I’d get a crapload of controversy, including articles like this: which do in fact suggest that there is a psychological phenomenon that is distinct from scabies infections.

      • Nick Adams says:

        Morgellon’s syndrome is just a variation of parasitosis (delusions of infestation), which is fairly common. An affected person will continually pick at the areas where they think they are infested (or have microscopic fibres in their skin), thus producing sores which are then taken as further evidence of the infestation. There should be no confusion with scabies, which has a different skin distribution.

        • Manoel Galdino says:

          So, does the disease exist or not?

        • Thanatos Savehn says:

          Apparently I’ve lost the ability to communicate effectively so I’ll try again. In 2018 there were at least a dozen peer reviewed and published articles with “Morgellons disease” in the title. The publication of such articles has been accelerating ever since the media tried to demonstrate that there was no evidence of it existing outside the mind of those who claim to suffer from it. So, *deep breath*, the point is that notwithstanding more than a decade of telling people they don’t really have fibers growing out of their skin, of showing them that what they’re pulling at are just bits of shirts or carpet rubbed into their skins, the number of sufferers only grows (now worldwide) and the number of people publishing articles about this non-existent disease similarly grows. I found this new letter to the editor sorta helpful:

    • nick adams says:

      Hey, you’re right! Today my (reputable) metropolitan paper had an article that included the line “…exotic diseases such as Ebola or Morgellons”.
      You can’t get a much more *real* disease than Ebola – known pathogen, unequivocal clinical findings, fatal consequences – and here it is bracketed with a pseudo-disease.

  4. Dikran Marsupial says:

    Bullshit is characterized by not caring whether an argument is true. If you are happy to refute bullshit without caring whether your counterarguments are valid, then the asymmetry goes away.

  5. Dirk Nachbar says:

    bullshit is like a strong (non flat) prior in the wrong range of x. you need a lot of data to get a more true posterior.
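Dirk’s analogy can be made concrete with a toy conjugate beta-binomial update. In the sketch below, the Beta(50, 5) prior, the true rate of 0.2, and the sample sizes are all arbitrary illustrative choices of mine, not anything from the comment; the point is only how slowly a strong prior in the wrong place gets pulled toward the truth:

```python
# A conjugate beta-binomial update: the prior is strong and centered
# in the wrong place (it believes p ~ 0.9), while the true rate is 0.2.
alpha, beta = 50.0, 5.0   # Beta(50, 5) prior: mean ~0.91, high confidence
true_p = 0.2

for n in [0, 10, 100, 1000]:
    successes = int(true_p * n)   # idealized data exactly at the true rate
    post_mean = (alpha + successes) / (alpha + beta + n)
    print(f"n = {n:4d}: posterior mean = {post_mean:.3f}")
```

Even after 1000 observations at the true rate of 0.2, the posterior mean is still about 0.24: a lot of data is needed to overcome a confidently wrong prior.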

  6. Marcus Crede says:

    Clearly we need to develop a function for the amount of energy that is expended to refute the bullshit. Perhaps something like:
    (amount expended to produce BS)*(2+[1 if TED talk is given on topic]+[1 if published in Science, Nature or PNAS]+[1 if championed by someone in the Ivy League]+[1 if featured on NPR]+[1 if resulting in NYT bestselling book]).

    Of course at some point this amount crosses some sort of Schwarzschild radius of bullshit and the “theory” becomes a vampire theory that cannot be killed by mere evidence.
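Crede’s tongue-in-cheek function can be written out directly. The parameter names below are my own invention, following his list of prestige multipliers:

```python
def refutation_effort(production_effort,
                      ted_talk=False, glam_journal=False,
                      ivy_champion=False, npr_feature=False,
                      nyt_bestseller=False):
    """Crede's (tongue-in-cheek) multiplier: a base factor of 2, plus 1
    for each prestige amplifier the bullshit has acquired."""
    multiplier = 2 + sum([ted_talk, glam_journal, ivy_champion,
                          npr_feature, nyt_bestseller])
    return production_effort * multiplier

# A claim that took 10 units of effort to produce, boosted by a TED
# talk and an NPR feature, takes 40 units to refute.
print(refutation_effort(10, ted_talk=True, npr_feature=True))  # 40
```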

  7. gec says:

    Connected with (b) is that there is usually a reason that bullshit exists that is independent of its relationship with truth. After all, if ideas were generated independently of people and it was just a matter of sorting them into true/false, you would have no problem rejecting a false idea even if, on average, the number of false ideas was higher than true ones. The problem is that bullshit arises when someone generates an idea for reasons other than truth, e.g., monetary gain, social standing, self-validation, etc. This motivation gives BS a boost that a simple wrong idea doesn’t ordinarily have.

    I’m generally good at navigating, but sometimes take a wrong turn. Even if I won’t admit it out loud, when I find out I’m wrong, I will correct my course, since my goal is to get to my destination. Bullshit occurs when I arrive at the wrong place and announce that I got where I *really* wanted to go the whole time.

  8. Dzhaughn says:

    What about basic graph-geometric considerations?

    We set off a story on a random traversal through a social network. Then we send out a second story on a similar random walk. Let A be the set of nodes the first story hits in its first N messages. How many messages before we expect the second’s traversal (call it B) reaches 50% of A? Much longer, under mild assumptions about the walks being reasonably varied and not being a large part of the graph.
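Dzhaughn’s setup can be sketched as a small simulation. The graph size, degree, and walk length below are arbitrary choices of mine; the point is only that a second, independent walk typically needs far more messages to cover half the first walk’s audience than the first walk spent reaching it:

```python
import random

random.seed(0)

# Toy social network: 2000 nodes, each forwarding to 8 random contacts.
# All parameters here are arbitrary choices for illustration.
n_nodes, degree = 2000, 8
graph = {v: random.sample(range(n_nodes), degree) for v in range(n_nodes)}

def walk(start, steps):
    """Return the set of nodes visited by a random walk of `steps` messages."""
    visited, node = {start}, start
    for _ in range(steps):
        node = random.choice(graph[node])
        visited.add(node)
    return visited

N = 200
A = walk(start=0, steps=N)   # audience of the first story after N messages

# Second, independent story: count messages until it reaches half of A.
target = len(A) // 2
covered, node, messages = set(), 1, 0
while len(covered) < target and messages < 100_000:
    node = random.choice(graph[node])
    messages += 1
    if node in A:
        covered.add(node)

print(f"first story: {len(A)} nodes in {N} messages; "
      f"second story: {messages} messages to cover half of them")
```

Since the second walk must hit specific nodes scattered through a much larger graph, the message count to cover half of A comes out many times larger than N, which is the asymmetry Dzhaughn describes.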

  9. yyw says:

    Suppose someone published some bullshit that is just statistically significant. To refute it, you probably need a much larger sample size than the original bullshit study.
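yyw’s point follows from the standard normal-approximation power formula. In the sketch below, the two-sided z-test setup and the hypothetical original study (n = 50 per arm, result just reaching p = 0.05) are assumptions of my choosing; under them, a well-powered replication needs roughly 2.7 times the original sample:

```python
from math import ceil

# Normal-approximation power calculation (two-sided z-test, effect size
# in standard-deviation units). The original study's n = 50 per arm and
# its just-significant result are hypothetical illustrative numbers.
Z_ALPHA = 1.96   # critical value for two-sided alpha = 0.05
Z_POWER = 1.28   # z-value corresponding to 90% power

def n_per_arm(effect_size):
    """Approximate n per group to detect `effect_size` (Cohen's d)
    with 90% power at alpha = 0.05."""
    return ceil(2 * ((Z_ALPHA + Z_POWER) / effect_size) ** 2)

# A just-significant two-arm study with n = 50 per arm implies an
# observed effect of about d = 1.96 * sqrt(2/50) ~ 0.39.
d_observed = Z_ALPHA * (2 / 50) ** 0.5
print(f"observed effect: d ~ {d_observed:.2f}")
print(f"n per arm for a 90%-power replication: {n_per_arm(d_observed)}")
```

The result is about 137 per arm versus the original 50, consistent with the general rule that refuting a barely significant finding requires a substantially bigger study than the one that produced it.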

  10. Anonymous says:

    I think that as a statistician, you should have thought that part of the explanation is because of sampling bias. There is a ton of BS. We don’t have to refute most of it. Most of it never convinces anyone. It is only the BS that sticks that we have to bother refuting. So, is it really true that BS is easier to generate than refute? Possibly, but I think that certain BS sticks for lots of different reasons. It confirms prior prejudices. Some interests spend lots of energy to promote it. It justifies some horrible action many people are taking, etc. It is only that portion of BS that actually sticks and gets propagated that we have to refute. It would be more correct to say that it is easy to generate BS, but only certain types of BS get a large audience, for reasons that are hard to replicate but easy to predict. (I can’t make you have a confirmation bias that will lead to accepting my BS, but I can predict your confirmation biases and generate BS that fits them.) Only that BS is hard to refute.

  11. Jonathan says:

    I like the other Jonathan’s answer, and would add that the issue is the words ‘order of magnitude’, because in an abstracted form this problem may be a decent version of P versus NP – I haven’t thought it through – because you have an algorithm that claims to be a solution but which must be tested without knowing if it is the solution. It can’t be exact outside the abstracted version because you’re looking for disproof, but that raises other issues. Those seem to meet if you phrase ‘disproof’ as the entire inversion of the solution, so the algorithm can be treated as a true or true-enough fit, or at least as belonging to the class of fitting algorithms. In any case, however, I’m unclear what ‘order of magnitude’ might mean in this context. It could mean that you take the solution and expand it to the potential ‘field’ which is solved or which ‘generates’ that solution, but that gets into hand-waving too.

  12. Anoneuoid says:

    I don’t think this works out too well for the “refuters”.

    The premise is that it is very difficult to “refute bullshit”. How many people doing the “refuting” actually look into whatever “bullshit” claim well enough before knowing it is bullshit?

  13. Manoel Galdino says:

    Maybe it is related to the difference between engineering and reverse engineering that Talebs mentions in his book? See this for instance:

  14. james says:

    I don’t think it’s actually that asymmetric- I just think the bullshitters crowdsource their efforts. One person says something false, then two people, then several, then a bunch, and all of a sudden the news media and social media and whatever else are hyping about something that took no effort for the original person to say, but was propagated by the cumulative effort of many. Then when a small group of intelligent, capable, and socially conscious individuals want to refute something, it’s up to them to amass a similar amount of effort. But (as someone in the comments mentioned already) the number of people who care about truth is much less than the number willing to spread bullshit, so the per-person effort is asymmetric.

    An example is the anti-vaxxer stuff. Someone just kinda said that, and it all of a sudden became “true” in the sense that people who believed it demanded evidence to refute it, as if just because people were saying it, it somehow should be treated as a perfectly viable theory, even though it came from nothing.

    • Anonymous says:

      Quote from above: “I don’t think it’s actually that asymmetric- I just think the bullshitters crowdsource their efforts”

      I fear some folks are trying to make “crowdsourcing” the new hip thing.

      I have seen, and heard, the term used too many times (at least for my liking) in recent papers and discussions concerning possible “improvements” in psychological science.

      To me, it’s a buzzword which describes a process, and view of science, that can possibly (and will probably in my reasoning) lead to exactly the kind of unscientific things you mention in your comment.

      Also see “Argumentum ad populum” in this regard (

  15. Mikhail Shubin says:

    But could “The bullshit asymmetry principle” be wrong?
    If it is wrong, it must be hard to disprove, thus it is correct.

    • Jack Rekshasa says:

      Clever. I see what you’re doing there…. I think the BSAP is probably right in most civil contexts and less so in the scientific and academic community (but not unheard of). The problem with refuting bullshit is whether you gauge your results by people’s realignment to the truth or by the completeness of the refutation. If it’s the former, then yeah, the BSAP is certainly a thing.

  16. Kaiser says:

    I have to speak up for the other side. Brandolini’s Law is false.
    Counterexample: it takes mathematicians very little time to shoot down “obviously wrong” claimed proofs of any number of unsolved problems. Many statistical errors are also relatively easy to spot – and surely the researcher spent more time manufacturing the evidence (I’m thinking Wansink).
    Secondly, the claim of an order of magnitude difference is absurd. I just proved my first point.

    • Jeff says:

      I agree that this law doesn’t seem to apply to a mathematical proof, but wouldn’t the Wansink situation be a great supporting example? In that case I think the time to refute the work includes not only the hours spent by academics poking holes in the original papers, but the collective time required to erase conclusions that have entered the conventional wisdom regarding eating habits, some of which have been internalized by people who have never heard of Brian Wansink.

  17. Bill Spight says:

    Taking bullshit to mean propositions advanced without regard for their truth value, IMO the thing to do is not to refute it, but to point it out. Refuting it takes it seriously, which it does not deserve. (OC, there are exceptions.)

    Personal example: Around 1980 I read about how Social Security was in trouble. Then in Reagan’s first term, it apparently got fixed. Then in Reagan’s second term I started hearing that you couldn’t count on Social Security, so you needed an IRA or other private retirement plan. Then in the 1990s I started hearing that Social Security was going to disappear in our lifetimes. By this time I realized that all of that was bullshit. I could offer refutations, and at this point there are so many people in their 20s and 30s who believe that they will not have Social Security that it could become a self-fulfilling prophecy. So we do need to refute the fear mongering. But IMO the main thing to do is to point out that it is bullshit.

    • Along the lines that Bill noted, I think most of us are pretty clueless as to how the global economy works. More specifically, I hope that the World Bank selects someone like David Kennedy for World Bank Prez who has a background in developmental economics, international law, humanitarian law, the Bank’s regulatory and management policies. His Dark Side of Virtue Reassessing International Humanitarianism is a riveting account of our seemingly noble efforts.

      This account made me curious about measurement more generally. And more fundamentally I speculated that a lot of information out there is not reliable in the contexts in which it is called up.

    • Bill Spight says:

      Having slept on this, I think it worth noting that one of the traditional ways of dealing with bullshit is to point out that the bullshitters have the burden of proof. It is not up to us to refute it. Most bullshit claims are made without proof. As Kaiser and others have indicated, when proofs are actually given, refuting them is relatively easy.

  18. Jordan Anaya says:

    I think this principle is similar to the famous Mark Twain quote:
    “A lie can travel half way around the world while the truth is putting on its shoes.”

    Certain ideas have momentum to them–people are ready to believe them because they want them to be true.

    Look at this recent Science paper which looked at GREs:

    They committed a statistical fallacy, and yet everyone is happy to share the paper because they want to believe standardized tests are useless–likely because they also believe these tests are biased against certain groups.

    The fallacy is best illustrated by this tweet:

    Basically, among students with high test scores the test scores don’t predict outcomes, but I think I can safely predict that someone scoring at the very bottom will struggle in grad school. Similarly, someone below 6 feet will likely struggle in the NBA.

    I think a similar thing can be said about Wansink’s mindless eating idea. People want to believe that losing weight can be accomplished by some simple environment changes and don’t want to hear the hard truth that it actually requires limiting the types of food you eat and exercise.

  19. Jordan Anaya says:

    I just noticed this whole post about this concept. I enjoyed the term “Gish Gallop”, which I’ve apparently Google searched before but not sure where I originally heard it.

  20. Andrew Brown says:

    Does Donald Trump have Morgellons disease?

    This is all clearly worthy of a study by the late C. Northcote Parkinson.

  21. random_dude says:

    I think a big reason for why this might be true is related to human psychology. It’s much easier for us to learn something than to un-learn that thing.

    Once you are told a fact, if you accept it as true, it will be much more difficult for you to go back on it and decide that it’s not true. This happens even when presented with new evidence that refutes the initial fact.

  22. Eddie says:

    I think we could formulate this as such:

    T = S x 10^b

    b = The amount of *B*ullshit. Bullshit stemming from one factual misunderstanding is easier to debunk than bullshit based upon a larger number of related falsehoods.
    T = How difficult it will be to debunk the bullshit and get to the *T*ruth
    S = The amount of effort it takes to resist *S*lapping the shit out of people

  23. Peter Love says:

    I think perhaps people are taking the word “Law” too literally here, no? This “Law” is in the same category as Murphy’s Law or Godwin’s Law, both of which are in the category of “aphorism”.

  24. Evelio from the Third World says:

    For those of you saying it’s easy to refute bullshit, consider that the number of hours you need to invest in studying any single subject deeply, so that you can identify and then refute bullshit, is vast. And any person with no scientific background can elaborate any kind of bullshit from scratch in just seconds. As Columbia students, I expected that many of you would understand this easily. Evelio Ruano from El Salvador

  25. BR41N says:

    While a general observation, “There appears to be many times more effort required to refute spurious falsehoods than needed to create them,” seems sensible, my distaste at this “law” arises from both its lack of precision (an order of magnitude would be determined an accurate ratio by what method of comparison, measurement, instrumentation, or general analysis?) and its indication of being based in emotion (the swear word in its title). Emotional cues run contrary to a neutral and dispassionate scientific analysis. Both of these observations reveal this “law” to me to be what it seems to be objectively: one man becoming emotional watching the flaws of human behavior in a political context, who then uses his sense of identity (“I am an ‘intelligent engineer type.’”) to position his own ideas as superior when they themselves contain their own flavor of human error. True scientific analysis would have established a baseline rooted in the physics of this world’s nature and applied a logic which uses some observed fact to further identify and explain phenomena as an extrapolation of what is true. In my opinion, true ideas in analysis are primarily driven by the intent of being wholly accurate in explaining worldly phenomena, apart from any other possible motivating human factor.

  26. sr10 says:

    In a tightly closed system, such as among a small population of scientists who accept peer review, you can find counterexamples such as Kaiser cites. Perhaps the gating criterion has to do with the effort required to put the ‘obviously wrong’ material out, such as publishing a proof. However, in a large or open system, where anyone with access can say anything with minimal effort or support, Brandolini’s law stands up rather well. In any environment where a person can make unsubstantiated statements, and those objecting must marshal evidence to refute the statements, you will see this effect. Maybe it is really a substantiation asymmetry principle: where anyone can say anything with no support and those challenging must support their challenge, bullshit will grow exponentially.

  27. Aspie123 says:

    I like to think about this problem by reframing it in computer security terms. In non-scientific discourse, bullshit is the equivalent of a denial of service attack. Bullshit crowds out legitimate arguments by flooding online spaces with bad faith, easily disprovable ones. Protecting against a denial of service attack usually involves shutting down the attackers. A common solution involves putting up a captcha for the user to prove they are not an automated bot. A similar solution could apply to online disinformation.

    This principle doesn’t just apply to online information. Think about Andrew Wakefield’s study linking autism to vaccines. He published his study in 1998. While the scientific community quickly refuted his claims, they still live on to this day. Consider the 2016 “documentary” Vaxxed. This is a perfect example of “bullshit” because the claims have already been debunked time and time again. It has a runtime of 91 minutes. You wouldn’t need to conduct new scientific research to debunk the claims made in this movie, but it would still be a time-consuming effort to debunk each and every one of the claims. Even if you took the time to do this, you would then need to move on to Vaxxed 2.

    The point is bullshit is easy to create because the creator has no concern for the truth. Simply debunking it has no effect as it is like fighting a hydra of disinformation. Debunk one lie and two more appear to take its place.
