The bullshit asymmetry principle

Jordan Anaya writes, “We talk about this concept a lot, I didn’t realize there was a name for it.” From the Wikipedia entry:

Publicly formulated for the first time in January 2013 by Alberto Brandolini, an Italian programmer, the bullshit asymmetry principle (also known as Brandolini’s law) states that:

The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.

It became especially popular after a picture of a presentation by Brandolini at XP2014 on May 30, 2014, was posted on Twitter. Brandolini was inspired by reading Daniel Kahneman’s Thinking, Fast and Slow right before watching an Italian political talk show with journalist Marco Travaglio and former Prime Minister Silvio Berlusconi attacking each other. A similar concept, the “mountain of shit theory”, was formulated by the Italian blogger Uriel Fanelli in 2010, roughly stating the same sentence.

Brandolini’s law emphasizes the difficulty of debunking bullshit. In contrast, the faster propagation of bullshit is an old proverb: “a lie is halfway round the world before the truth has got its boots on”.

Two questions then arise:

1. Is this principle true? Or, more specifically, when is it true and when is it not?

2. To the extent that the principle is true, where is it coming from? I can think of a couple theories:

a. Asymmetry in standards of evidence: it’s much easier to suggest that something might be true than to demonstrate conclusively that it’s not the case. For example, consider “cold fusion”: A single experiment with anomalous results got lots of attention, but it took a lot of effort to figure out what went wrong.

b. Ethical asymmetry: The kinds of people who bullshit are more likely to be the kinds of people who misrepresent evidence, avoid correcting their errors, and intimidate dissenters, so at some point the people who could shoot down the bullshit might decide it’s not worth the trouble: Why bother fighting bullshit if the bullshitters are going to turn around and personally attack you? From this standpoint, once bullshit becomes “too big to fail,” it can stay around forever.

P.S. In comments, Kaiser writes:

I have to speak up for the other side. Brandolini’s Law is false.

Counterexample: it takes mathematicians very little time to shoot down “obviously wrong” claimed proofs of any number of unsolved problems. Many statistical errors are also relatively easy to spot – and surely the researcher spent more time manufacturing the evidence (I’m thinking Wansink).

Secondly, the claim of an order of magnitude difference is absurd. I just proved my first point.

Good point about Wansink. It took him decades to construct a palace of bullshit. Sure, it took effort for the skeptics to reveal the emptiness of this edifice, but the effort of this debunking was still much less than the effort of the original construction.

54 thoughts on “The bullshit asymmetry principle”

  1. I agree with both theories. Here is a third: entropy. There are many more ways to be wrong than to be right. While all (true) data is consistent with the truth, it is also consistent with large swaths of false-space, simply because false-space is so large. Bullshit that sits in a hard-to-refute section of false-space is hard to differentiate from truth, but really easy to find, and therefore defend.

    • As I think about it, this is sort of a sub-species of your point (a).

      There is another reason. People asserting bullshit have an (undeserved) first-mover advantage. You’ve discussed this before when showing that people have a strong preference for the first article, while the subsequent refutations are just the ravings of small-minded nitpickers.

      • I also see a similarity here to Richard Thaler’s endowment effect: once people form an opinion based on a “fact” they have been given, they identify with that piece of information and place their perceived value on it, which makes it much more difficult to let go of; hence the endowment effect.
        I believe this “bullshit” rule applies more to people who are more emotional than logical in their thought process. I agree that the mathematician, the statistician, even the lawyer will often look for holes in the information they have been given to ensure they are correct, will go to the ends of the earth to prove it based on evidence, and are always open to being wrong. People who see Donald Trump as a messiah may be harder to persuade.

        • There is an element that you have not considered: honesty! Honesty, or the lack of it, is intrinsic to the nature of BS. Here is an example: “the border is secure.” The effort to prove that this statement is absolute BS is disproportionate to the ease of its formulation. Such a statement requires either a moron or a liar. Lately we have seen statisticians and lawyers who have chosen the latter. Assuming that a total moron would not be able to get the required degree, stupidity is not necessarily an option. Deceit always is.

  2. Quote from above: “The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.”

    Funny/interesting sentence in relation to large replication efforts like “Registered Replication Reports”, and the tons of resources that are used (and possibly wasted) in doing them.

    I reason that tons of energy and resources possibly get wasted by giving attention (a second time around) to “bad” research and “bad” researchers.

    I even think these efforts to “correct” matters may actually start the problematic cycle all over again. For instance, see my comment concerning a recent example of the consequences of such a large replication, and the possible restart of the problematic cycle, here: https://statmodeling.stat.columbia.edu/2018/11/01/facial-feedback-findings-suggest-minute-differences-experimental-protocol-might-lead-theoretically-meaningful-changes-outcomes/#comment-911003

    (Now, to possibly prevent/solve all this, I could only think of the following idea. The idea tries to do many things, one of which is to try and circumvent spending time, energy, and other resources on “bad” research and researchers.

    This is attempted by providing a research and publication format that tries to be of high quality and reinforces the things that make science fun and worthwhile (e.g., theory building). Here is the idea, plus some thoughts on how it could possibly stop us spending more resources to “refute bulls!t”: https://statmodeling.stat.columbia.edu/2018/04/20/carol-nickerson-investigates-unfounded-claim-17-replications/#comment-711458)

  3. Do a pubmed.gov search on “Morgellons disease” and you’ll find part of the answer. A misguided attempt by a TV news show to explain how an evidence-free non-disease had generated buzz in some diverticulum of the www managed instead to produce a surge of people discovering strange fibers growing out of their skin. More than a decade later, boots long since laced up, the truth seems if anything to have fallen further behind. Good stories trump good evidence.

    • Sorry, but international travelers now know that they should flee any physician who uses the archaic, psychoanalytically tinged diagnosis of “Morgellons disease.” The symptoms that are often attributed to this imaginary condition can, with careful interviewing, be related to exposure to scabies, now endemic to youth hostels and even fancy hotels. The diagnosis of Morgellons disease leads to prescription of an anti-anxiety or even antipsychotic medication. The appropriate diagnosis of scabies leads to effective treatment with permethrin. “Morgellons disease” has become an example of strongly held archaic views trumping evidence.

      • >Sorry but international travelers now know that they should flee any physician

        *ought to know*, perhaps, but do you have any evidence that a large majority of them *do* know this? I certainly have *never* heard of “Morgellons disease,” and so if faced with a doctor telling me about it, I’d certainly have to look it up. If I did so on the internet I wouldn’t get “Morgellons disease is a false diagnosis for scabies infections”; I’d get a crapload of controversy, including articles like this: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5038112/ which do in fact suggest that there is a psychological phenomenon that is distinct from scabies infections.

      • Morgellon’s syndrome is just a variation of parasitosis (delusions of infestation), which is fairly common. An affected person will continually pick at the areas where they think they are infested (or have microscopic fibres in their skin), thus producing sores which are then taken as further evidence of the infestation. There should be no confusion with scabies, which has a different skin distribution.

        • Apparently I’ve lost the ability to communicate effectively, so I’ll try again. In 2018 there were at least a dozen peer-reviewed and published articles with “Morgellons disease” in the title. The publication of such articles has been accelerating ever since the media tried to demonstrate that there was no evidence of it existing outside the minds of those who claim to suffer from it. So, *deep breath*, the point is that notwithstanding more than a decade of telling people they don’t really have fibers growing out of their skin, of showing them that what they’re pulling at are just bits of shirts or carpet rubbed into their skin, the number of sufferers only grows (now worldwide) and the number of people publishing articles about this non-existent disease similarly grows. I found this new letter to the editor sorta helpful: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5929958/

    • Hey, you’re right! Today my (reputable) metropolitan paper had an article that included the line “…exotic diseases such as Ebola or Morgellons”.
      You can’t get a much more *real* disease than Ebola – known pathogen, unequivocal clinical findings, fatal consequences – and here it is bracketed with a pseudo-disease.

  4. Bullshit is characterized by not caring whether an argument is true. If you are happy to refute bullshit without caring whether your counterarguments are valid, then the asymmetry goes away.

  5. Clearly we need to develop a function for the amount of energy that is expended to refute the bullshit. Perhaps something like:
    (amount expended to produce BS) * (2 + [1 if TED talk is given on topic] + [1 if published in Science, Nature, or PNAS] + [1 if championed by someone in the Ivy League] + [1 if featured on NPR] + [1 if resulting in NYT bestselling book]).

    Of course at some point this amount crosses some sort of Schwarzschild radius of bullshit and the “theory” becomes a vampire theory that cannot be killed by mere evidence.
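
    For fun, here is that multiplier as a tiny Python sketch (the weights are the commenter’s joke values, not anything measured):

    ```python
    # The tongue-in-cheek "debunking energy" formula from the comment above.
    # Every weight is made up; this is satire expressed as code.
    def debunking_energy(bs_energy,
                         ted_talk=False,
                         glam_journal=False,    # Science, Nature, or PNAS
                         ivy_champion=False,
                         npr_feature=False,
                         nyt_bestseller=False):
        multiplier = 2 + sum([ted_talk, glam_journal, ivy_champion,
                              npr_feature, nyt_bestseller])
        return bs_energy * multiplier

    # A TED talk plus a NYT bestseller doubles the base multiplier:
    print(debunking_energy(100, ted_talk=True, nyt_bestseller=True))  # 400
    ```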

  6. Connected with (b) is that there is usually a reason that bullshit exists that is independent of its relationship with truth. After all, if ideas were generated independently of people and it was just a matter of sorting them into true/false, you would have no problem rejecting a false idea even if, on average, the number of false ideas was higher than true ones. The problem is that bullshit arises when someone generates an idea for reasons other than truth, e.g., monetary gain, social standing, self-validation, etc. This motivation gives BS a boost that a simple wrong idea doesn’t ordinarily have.

    I’m generally good at navigating, but sometimes take a wrong turn. Even if I won’t admit it out loud, when I find out I’m wrong, I will correct my course, since my goal is to get to my destination. Bullshit occurs when I arrive at the wrong place and announce that I got where I *really* wanted to go the whole time.

  7. What about basic graph-geometric considerations?

    We set off a story on a random traversal through a social network. Then we send out a second story on a similar random walk. Let A be the set of nodes the first story hits in its first N messages. How many messages before we expect the second story’s traversal (call it B) to reach 50% of A? Much longer than N, under mild assumptions about the walks being reasonably varied and about A not being a large part of the graph.
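
    Here is that thought experiment as a minimal Python sketch (the graph model, its size, and N are all assumptions chosen purely for illustration):

    ```python
    import random

    rng = random.Random(0)

    # Build a small synthetic "social network" (an assumed model; any
    # reasonably sparse connected graph would do for the sketch).
    n, deg = 2000, 8
    adj = {v: set() for v in range(n)}
    for v in range(n):
        while len(adj[v]) < deg:
            u = rng.randrange(n)
            if u != v:
                adj[v].add(u)
                adj[u].add(v)
    adj = {v: list(s) for v, s in adj.items()}

    def walk(start):
        """Endless random traversal: each message forwards to a random neighbor."""
        v = start
        while True:
            yield v
            v = rng.choice(adj[v])

    N = 300
    first = walk(0)
    A = {next(first) for _ in range(N)}  # nodes the first story hits in N messages

    second = walk(0)
    B, steps = set(), 0
    while len(B & A) < len(A) / 2:       # until the second story covers half of A
        B.add(next(second))
        steps += 1

    print(f"story 1 hit {len(A)} distinct nodes in {N} messages; "
          f"story 2 needed {steps} messages to reach 50% of them")
    ```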

  8. Suppose someone published some bullshit that is just statistically significant. To refute it, you probably need a much larger sample size than the original bullshit study.
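
    A back-of-the-envelope power sketch supports this (a normal approximation with two-sided alpha = 0.05; the “just significant” observed z of about 1.96 and the winner’s-curse halving of the effect are assumptions for illustration):

    ```python
    from statistics import NormalDist

    z = NormalDist().inv_cdf

    alpha, power = 0.05, 0.90
    z_alpha = z(1 - alpha / 2)   # about 1.96
    z_power = z(power)           # about 1.28

    # Required n scales as 1/effect^2.  If the original result was *just*
    # significant, its observed z was about z_alpha, so a replication
    # powered at the observed effect needs roughly this many times the n:
    ratio = ((z_alpha + z_power) / z_alpha) ** 2
    print(f"{ratio:.1f}x the original sample size")               # about 2.7x

    # Just-significant published effects are typically inflated (winner's
    # curse); if the true effect is only half the observed one, the
    # required n goes up by another factor of 4:
    print(f"{4 * ratio:.1f}x if the true effect is half as big")  # about 10.9x
    ```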

  9. I think that, as a statistician, you should have thought that part of the explanation is sampling bias. There is a ton of BS. We don’t have to refute most of it; most of it never convinces anyone. It is only the BS that sticks that we have to bother refuting. So, is it really true that BS is easier to generate than to refute? Possibly, but I think that certain BS sticks for lots of different reasons: it confirms prior prejudices, some interests spend lots of energy to promote it, it justifies some horrible action many people are taking, etc. It is only that portion of BS that actually sticks and gets propagated that we have to refute. It would be more correct to say that it is easy to generate BS, but only certain types of BS get a large audience, for reasons that are hard to replicate but easy to predict. (I can’t make you have a confirmation bias that will lead to accepting my BS, but I can predict your confirmation biases and generate BS that fits them.) Only that BS is hard to refute.

  10. I like the other Jonathan’s answer, and would add that the issue is the words ‘order of magnitude’, because this problem may, in an abstracted form, be a decent version of P and NP – I haven’t thought it through – because you have an algorithm that claims to be a solution but which must be tested without knowing if it is the solution. It can’t be exact outside the abstracted version because you’re looking for disproof, but that raises other issues. Those seem to meet if you phrase ‘disproof’ as the entire inversion of the solution, so the algorithm can be treated as a true or true-enough fit, or at least as belonging to the class of fitting algorithms. In any sense, however, I’m unclear what ‘order of magnitude’ might mean in this context. It could mean that you take the solution and expand it to the potential ‘field’ which is solved or which ‘generates’ that solution, but that gets into hand waving too.

  11. I don’t think this works out too well for the “refuters”.

    The premise is that it is very difficult to “refute bullshit”. How many people doing the “refuting” actually look into whatever “bullshit” claim well enough before knowing it is bullshit?

      • …hard, if not impossible, to prove… it’s not so hard to demonstrate that the evidence presented in favor of that claim is extremely weak.

        This sounds like the opposite of the BS-asymmetry principle. I agree that it is easier to find a flaw than to “prove” something is correct.

    • Well, if they don’t look into it then they are engaging in bullshit as well. However, the act of “looking into it” should be designed from the get-go to determine its truth or validity, if any.

  12. I don’t think it’s actually that asymmetric- I just think the bullshitters crowdsource their efforts. One person says something false, then two people, then several, then a bunch, and all of a sudden the news media and social media and whatever else are hyping about something that took no effort for the original person to say, but was propagated by the cumulative effort of many. Then when a small group of intelligent, capable, and socially conscious individuals want to refute something, it’s up to them to amass a similar amount of effort. But (as someone in the comments mentioned already) the number of people who care about truth is much less than the number willing to spread bullshit, so the per-person effort is asymmetric.

    An example is the anti-vaxxer stuff. Someone just kinda said that, and it all of a sudden became “true” in the sense that people who believed it demanded evidence to refute it, as if just because people were saying it, it somehow should be treated as a perfectly viable theory, even though it came from nothing.

    • Quote from above: “I don’t think it’s actually that asymmetric- I just think the bullshitters crowdsource their efforts”

      I fear some folks are trying to make “crowdsourcing” the new hip thing.

      I have seen, and heard, the term used too many times (at least for my liking) in recent papers and discussions concerning possible “improvements” in psychological science.

      To me, it’s a buzzword which describes a process, and a view of science, that can possibly (and, in my reasoning, probably will) lead to exactly the kind of unscientific things you mention in your comment.

      Also see “Argumentum ad populum” in this regard (https://en.wikipedia.org/wiki/Argumentum_ad_populum)

    • Clever. I see what you’re doing there…. I think the BSAP is probably right in most civil contexts and less so in the scientific and academic community (though not unheard of there). The problem with refuting bullshit is: do you gauge your results by people’s realignment to the truth or by the completeness of the refutation? If it’s the former, then yeah, the BSAP is certainly a thing.

  13. I have to speak up for the other side. Brandolini’s Law is false.
    Counterexample: it takes mathematicians very little time to shoot down “obviously wrong” claimed proofs of any number of unsolved problems. Many statistical errors are also relatively easy to spot – and surely the researcher spent more time manufacturing the evidence (I’m thinking Wansink).
    Secondly, the claim of an order of magnitude difference is absurd. I just proved my first point.

    • I agree that this law doesn’t seem to apply to a mathematical proof, but wouldn’t the Wansink situation be a great supporting example? In that case I think the time to refute the work includes not only the hours spent by academics poking holes in the original papers, but the collective time required to erase conclusions that have entered the conventional wisdom regarding eating habits, some of which have been internalized by people who have never heard of Brian Wansink.

      • In my view, many of the “errors” were elementary and easily spotted. Huge amounts of energy were spent on convincing the relevant powerful people, who would prefer to look the other way, to take action.

    • Perhaps for the mathematician or an expert this is the case, but not for the layperson who is reading (and reposting) a faulty proof published by a troll or meme creator. To give an example, people look at the composition of the atmosphere of Mars, compare CO2 “percentages” to the Earth’s, then look at the respective temperature ranges on Mars and the Earth, and summarily conclude that Mars being cooler (with 90% CO2) implies that CO2 does not cause warming. Any casual reader who knows the respective densities of the two atmospheres can give an answer, but try to convince a layperson who has already taken these two datapoints (CO2 percentage and temperature ranges of the two planetary atmospheres) as proof against Earth climate change: you have to inform them of other data (atmospheric density, specific heat, etc.) and then try to undo the faulty reasoning…this is far more difficult than the time it took for a person’s System 1 to accept a faulty conclusion constructed by another troll.

    • It is easy to point out the flaws in the original analysis. The correction of the analysis through the scientific literature may take some time. In that time gap the flawed analysis is promoted as truth; it becomes ingrained in the narrative of the debate on the topic. When the corrections appear, scientists, sophisticated users of statistics, and the honest lay public adjust their thinking. The average person on the street, who was motivated to read about a topic that had interesting new findings, flawed as they were, is less motivated to re-adjust their thinking when the debunking is made public. The average person may not even be aware of the debunking. Penetrating the average person’s bubble and re-educating them takes effort. This is exacerbated by the promoters of misinformation.

      Bad actors are promoting the results of poor statistical analysis for ends of their own. One does not have to look far to find such examples: Kennedy and the anti-vaxxers. The Center for Countering Digital Hate claims that 12 sites/channels/tweet-streams account for two-thirds of the misinformation about vaccines. Kennedy, a US politician, is one of those 12 sources. These promoters know that the original analysis was debunked. They have all sorts of excuses to refute the refutation: the debunking was by industry scientists who were protecting their products, by scientists who were embarrassed by the original findings, by ‘bad’ people, etc. What is the average person supposed to think? Sprinkle a little confirmation bias and Dunning-Kruger effect into the mix and people become resistant to the new information. This is the opposite of thinking; it is doubling down on believing the lie. Many studies support the active rejection of new results that conflict with a person’s world view. This is where we see Brandolini’s Law. You need at least an order of magnitude more effort to disprove the BS. As an aside, making up BS is much easier than proving that the BS is BS. Facts and data require more effort than lies and fantasies.

      This is seen everywhere. BS on firearm safety, economic impact of immigration, impact of Obamacare, impact of Romneycare, voter fraud, race and criminal activity, etc. continues to thrive despite the debunking of the original statistical analyses on which the BS rests. Once the misinformation gains sufficient momentum, it is all but impossible to stop it. Brandolini may have to issue a corollary: Obviously wrong claims that become axioms for hate groups require 2 orders of magnitude more effort.

  14. Taking bullshit to mean propositions advanced without regard for their truth value, IMO the thing to do is not to refute it, but to point it out. Refuting it takes it seriously, which it does not deserve. (OC, there are exceptions.)

    Personal example: Around 1980 I read about how Social Security was in trouble. Then in Reagan’s first term, it apparently got fixed. Then in Reagan’s second term I started hearing that you couldn’t count on Social Security, so you needed an IRA or other private retirement plan. Then in the 1990s I started hearing that Social Security was going to disappear in our lifetimes. By this time I realized that all of that was bullshit. I could offer refutations, and at this point there are so many people in their 20s and 30s who believe that they will not have Social Security that it could become a self-fulfilling prophecy. So we do need to refute the fear mongering. But IMO the main thing to do is to point out that it is bullshit.

    • Along the lines that Bill noted, I think most of us are pretty clueless as to how the global economy works. More specifically, I hope that the World Bank selects someone like David Kennedy for World Bank Prez, who has a background in development economics, international law, humanitarian law, and the Bank’s regulatory and management policies. His Dark Side of Virtue: Reassessing International Humanitarianism is a riveting account of our seemingly noble efforts.

      https://press.princeton.edu/titles/7711.html

      This account made me curious about measurement more generally. And more fundamentally I speculated that a lot of information out there is not reliable in the contexts in which it is called up.

    • Having slept on this, I think it worth noting that one of the traditional ways of dealing with bullshit is to point out that the bullshitters have the burden of proof. It is not up to us to refute it. Most bullshit claims are made without proof. As Kaiser and others have indicated, when proofs are actually given, refuting them is relatively easy.

  15. I think this principle is similar to the famous Mark Twain quote:
    “A lie can travel half way around the world while the truth is putting on its shoes.”

    Certain ideas have momentum to them–people are ready to believe them because they want them to be true.

    Look at this recent Science paper which looked at GREs:
    https://twitter.com/kph3k/status/1088910831126044672

    They committed a statistical fallacy, and yet everyone is happy to share the paper because they want to believe standardized tests are useless–likely because they also believe these tests are biased against certain groups.

    The fallacy is best illustrated by this tweet:
    https://twitter.com/3rdreviewer/status/840266678076350464

    Basically, among students with high test scores the test scores don’t predict outcomes, but I think I can safely predict that someone scoring at the very bottom will struggle in grad school. Similarly, someone below 6 feet will likely struggle in the NBA.
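
    The restriction-of-range effect behind that fallacy is easy to see in a quick simulation (a minimal sketch; every number in it is invented for illustration):

    ```python
    import random
    import statistics

    rng = random.Random(0)

    # Scores predict outcomes in the full applicant pool, but among the
    # admitted (high-scoring) students the correlation shrinks a lot.
    n = 100_000
    score = [rng.gauss(0, 1) for _ in range(n)]
    outcome = [0.5 * s + rng.gauss(0, 1) for s in score]  # true r is about 0.45

    def corr(x, y):
        """Sample Pearson correlation."""
        mx, my = statistics.fmean(x), statistics.fmean(y)
        sx, sy = statistics.stdev(x), statistics.stdev(y)
        return sum((a - mx) * (b - my)
                   for a, b in zip(x, y)) / ((len(x) - 1) * sx * sy)

    print(f"full pool:     r = {corr(score, outcome):.2f}")

    admitted = [(s, o) for s, o in zip(score, outcome) if s > 1.5]  # top ~7%
    adm_score, adm_outcome = zip(*admitted)
    print(f"admitted only: r = {corr(adm_score, adm_outcome):.2f}")
    ```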

    I think a similar thing can be said about Wansink’s mindless-eating idea. People want to believe that losing weight can be accomplished by some simple environmental changes and don’t want to hear the hard truth that it actually requires limiting the types of food you eat, plus exercise.

  16. I think a big reason for why this might be true is related to human psychology. It’s much easier for us to learn something than to un-learn that thing.

    Once you are told a fact, if you accept it as true, it will be much more difficult for you to go back on it and decide that it’s not true. This happens even when presented with new evidence that refutes the initial fact.

  17. I think we could formulate this as such:

    T = S x 10^b

    b = The amount of *B*ullshit. Bullshit stemming from one factual misunderstanding is easier to debunk than bullshit based upon a larger number of related falsehoods.
    T = How difficult it will be to debunk the bullshit and get to the *T*ruth
    S = The amount of effort it takes to resist *S*lapping the shit out of people

  18. I think perhaps people are taking the word “Law” too literally here, no? This “Law” is in the same category as Murphy’s Law or Godwin’s Law, both of which are in the category of “aphorism”.

  19. For those of you saying it’s easy to refute bullshit, consider that the number of hours you need to invest in studying any single subject deeply, so that you can identify and then refute bullshit, is vast, while any person with no scientific background can elaborate any kind of bullshit from scratch in just seconds. As Columbia students, I expected that many of you would understand this easily. Evelio Ruano from El Salvador

  20. While the general observation that “there appears to be many times more effort required to refute spurious falsehoods than needed to create them” seems sensible, my distaste for this “law” arises from both its lack of precision (an order of magnitude would be determined an accurate ratio by what method of comparison, measurement, instrumentation, or general analysis?) and its indication of being based in emotion (the swear word in its title). Emotional cues run contrary to a neutral and dispassionate scientific analysis. Both of these observations reveal this “law” to be what it objectively seems to be: one man becoming emotional while watching the flaws of human behavior in a political context, who then uses his sense of identity (“I am an ‘intelligent engineer type.’”) to position his own ideas as superior, when they themselves contain their own flavor of human error. A true scientific analysis would have established a baseline rooted in the physics of this world’s nature and applied a logic which uses some observed fact to further identify and explain phenomena as an extrapolation of what is true. In my opinion, true ideas in analysis are primarily driven by the intent of being wholly accurate in explaining worldly phenomena, apart from any other possible motivating human factor.

  21. In a tightly closed system, such as among a small population of scientists who accept peer review, you can find counterexamples such as Kaiser cites. Perhaps the gating criterion has to do with the effort required to put the ‘obviously wrong’ material out, such as publishing a proof. However, in a large or open system, where anyone with access can say anything with minimal effort or support, Brandolini’s law stands up rather well. In any environment where a person can make unsubstantiated statements, and those objecting must marshal evidence to refute the statements, you will see this effect. Maybe it is really a substantiation asymmetry principle: where there is an asymmetry such that anyone can say anything with no support and those challenging must support their challenge, bullshit will grow exponentially.

  22. I like to think about this problem by reframing it in computer security terms. In non-scientific discourse, bullshit is the equivalent of a denial of service attack. Bullshit crowds out legitimate arguments by flooding online spaces with bad faith, easily disprovable ones. Protecting against a denial of service attack usually involves shutting down the attackers. A common solution involves putting up a captcha for the user to prove they are not an automated bot. A similar solution could apply to online disinformation.

    This principle doesn’t just apply to online information. Think about Andrew Wakefield’s study linking autism to vaccines. He published his study in 1998. While the scientific community quickly refuted his claims, they still live on to this day. Consider the 2016 “documentary” Vaxxed. This is a perfect example of “bullshit” because the claims have already been debunked time and time again. It has a runtime of 91 minutes. You wouldn’t need to conduct new scientific research to debunk the claims made in this movie, but it would still be a time-consuming effort to debunk each and every one of the claims. Even if you took the time to do this, you would then need to move on to Vaxxed 2.

    The point is bullshit is easy to create because the creator has no concern for the truth. Simply debunking it has no effect as it is like fighting a hydra of disinformation. Debunk one lie and two more appear to take its place.

  23. I hardly think that the mathematical and statistical examples are apropos. The example given in the original is about politics, and by now we’ve seen entire galaxies of bullshit related to covid. It’s in the human sciences that the bullshitter can become like the Terrible Trivium from The Phantom Tollbooth. Once Milo and his companions could clearly see the logic of this monster, they just moved on. In the online environment, block and move on. Imagine the importance of the bullshitter if that were common practice.

  24. Kaiser’s comment actually illustrates the principle well. First he picks one single narrow counterexample – countering by pointing out it’s not reflective of the majority of bullshit requires much more effort, because one would need to come up with multiple examples vs his one.

    And then his second statement is just “I proved it. Other side absurd. So there.” Anything I could possibly try and counter that with would take more effort, except to just counter with “You didn’t. Me right. You absurd.”
