Given that 30% of Americans believe in astrology, it’s no surprise that some nontrivial percentage of influential American psychology professors are going to have the sort of attitude toward scientific theory and evidence that would lead them to have strong belief in weak theories supported by no good evidence.

Fascinating article by Christine Smallwood, “Astrology in the age of uncertainty”:

Astrology is currently enjoying a broad cultural acceptance that hasn’t been seen since the nineteen-seventies. The shift began with the advent of the personal computer, accelerated with the Internet, and has reached new speeds through social media. According to a 2017 Pew Research Center poll, almost thirty per cent of Americans believe in astrology. . . .

In its penetration into our shared lexicon, astrology is a little like psychoanalysis once was. At mid-century, you might have heard talk of id, ego, or superego at a party; now it’s common to hear someone explain herself by way of sun, moon, and rising signs. It’s not just that you hear it. It’s who’s saying it: people who aren’t kooks or climate-change deniers, who see no contradiction between using astrology and believing in science. . . .

I did a quick search and indeed found this Pew report from October, 2018:

The only real surprise to me in this table was the religious breakdown. I had the vague sense of mainline Protestants as being the sensible people, but they believe in astrology at the same rate as the general population. But, hey, I guess they’re normal Americans (on average) so they have normal American beliefs. Also surprising that only 3% of atheists believe in astrology. I guess this makes sense, but it somehow seemed plausible to me for someone not to believe in God but to believe in other supernatural things: indeed, I could imagine astrology as a sort of substitute for a traditional religious system. But I guess not.

Also interesting when we think about the promoters of junk science in academia.

I’ve analogized Brian Wansink to an astrologer who can make savvy insights about the world based on some combination of persuasiveness and qualitative understanding of the world, and then attribute his success to tarot cards or tea leaves rather than to a more prosaic ability to synthesize ideas and come up with good stories.

But does Brian Wansink actually believe in astrology? What about Marc Hauser, Ed Wegman, Susan Fiske, and the whole crowd of people who like to label their critics as “second-string, replication police, methodological terrorists, Stasi,” etc? I doubt they believe in astrology, as that represents a competing belief system: it’s an industry that is, in some sense, an alternative to rah-rah Ted-talk science. I wouldn’t be surprised if prominent ESP researchers believe in astrology, but I also get the sense that mainstream junk-science promoters in academia and the news media don’t like to talk about ESP, as those research methods are uncomfortably close to theirs. They don’t want to endorse ESP researchers, as that would discredit their own work by association, but they don’t want to throw them under the bus, either, as they are fellow Ivy League academics, so their safest strategy is just to keep quiet about that stuff.

The larger point, though, is not belief in astrology per se, but the state of mind that allows people to believe in something so contradictory to our scientific understanding of the world. (OK, I apologize to the 29% of you who are not with me on this one. You can return to the fold when I go back to posting on statistical graphics, model checking, Bayesian computation, Jamaican beef patties, etc.)

It’s not that, a priori, astrology couldn’t be true: As with embodied cognition, beauty and sex ratio, ovulation and voting, air rage, ages ending in 9, and all the other Psychological Science / PNAS classics, we can come up with reasonable theories under which astrology is real and spectacular—it’s just that after years of careful study, nothing much has come up. And the possible theories out there aren’t really so persuasive: they’re bank-shot models of the world that could be fine if the goal was to gain understanding of a real and persistent phenomenon, but not so convincing without the empirical evidence.

Anyway, the point is that if 30% of Americans are willing to believe this sort of thing, it’s no surprise that some nontrivial percentage of influential American psychology professors are going to have the sort of attitude toward scientific theory and evidence that would lead them to have strong belief in weak theories supported by no good evidence. Indeed, not just support for particular weak theories, but support for the general principle that we should be nice to pseudoscientific theories (although, oddly enough, maybe not for astrology itself).

P.S. In defense of the survey respondents (though not of the psychology professors who support ideas such as the “critical positivity ratio” which make astrology look positively sane in comparison), belief in astrology (or, for that matter, belief in heaven, the law of gravity, or the square-cube law) is essentially costless. Why not believe, or disbelieve, these things? In contrast, belief or disbelief in evolution or climate change or implicit bias has potential social or political effects. Some beliefs are essentially private, while others have more direct policy implications.

I have less tolerance for prominent academic and media figures who aggressively support junk science when they don’t just express their belief in speculative theories supported by no real data, but then go on the attack against people who point out these emperors’ nudity. In addition, even a hypothetical tolerant, open-minded supporter of junk science—the sort of person who might believe in critical positivity ratio but also actively support the publication of criticisms of that work—can still do a certain amount of damage by diluting scientific journals and the news media with bad science, and by promoting sloppy work which reduces space for more careful research.

You know how they say that science is self-correcting, but only because people are willing to self-correct? Similarly, Gresham’s law is real, but only because people are willing to circulate counterfeit bills, or to circulate money they suspect may be counterfeit, while keeping their mouths shut until they can get rid of their wads of worthless stock.

P.P.S. Just to be clear: No, I don’t know that astrology is a waste of time, and it could be that Marc Hauser was onto something real, even while he was fabricating data (according to the U.S. government, as quoted on Wikipedia), and the critical positivity ratio and ovulation and voting and all the rest . . . all these could be real—who knows! Just cos there’s no good evidence for a theory, that doesn’t make it false. I don’t want to suppress any of these claims. Publish it all somewhere, along with all the criticism of it. My problem with the promoters of junk science is not just that they promote science that I and others consider to be junk—we can be wrong!—but that they continually dodge, suppress, and fight against legitimate open criticism.

P.P.P.S. Again, #notallpsychologists. And of course the problem of junk science is not restricted to psychology, not at all. To the extent that professors of political science, economics, sociology, history, are strong believers in astrology or spoon bending or whatever (that is, belief in “scientific” paranormalism as describing some true thing about the natural world, not just an “anthropological” recognition that paranormal beliefs are something that can affect the world because people believe in them), this could screw up their research too. If a physicist or chemist believes in these things, I guess it’s not such a big deal.

And, again, I’m not trying to suppress research into astrology, embodied cognition, ESP, beauty-and-sex-ratio, bottomless soup bowls, spoon bending, Bible Code, air rage, ovulation and voting, subliminal smiley faces, etc etc. Let a thousand flowers bloom! The point of this post is that, given that there’s a large chunk of the population that’s willing to believe in scientific-sounding theories that are not backed by any strong scientific theory or evidence, it should be no surprise that many professional scientists have this attitude. The consequences happen to show up particularly strongly in psychology, as this is an important field of study where theories can be vague and where there’s a long tradition of belief and action backed up by shaky data. That doesn’t mean that psychologists are bad people; they’re just working on hard problems, in an academic tradition that has a lot of failures in its history. Again, this is not a criticism, it’s just the way it is. And of course there’s a lot of great work being done in psychology. You have to work with the history you have.

151 Comments

  1. I’ve been noticing a lot in the age of coronavirus that lots of people sought out by the news media as experts are using phrases like “there is no evidence that this virus transmits by aerosol” or “there is no evidence that pets pose a risk of coronavirus” or “there is no evidence that wearing a cloth mask slows the spread of coronavirus” as code words for “you are stupid and reactionary if you act like these might be true”.

    I’m finding that extremely irritating. When people have spent decades looking for evidence of ESP, the absence of evidence is in fact evidence of absence. On the other hand, when no one has had time to do ANY relevant research yet… the absence of evidence is just evidence that everyone is really busy trying not to die or let their kids starve or get sick…

    I blame “evidence based medicine” for this crap. It is now firmly on my shit list.

    • jim says:

      When people use the term “evidence-based”, all they’re doing is declaring, “I won’t accept claims to the contrary.”

      Everything everyone does every day is “evidence-based”. Nothing says it’s good evidence. Nothing says it’s valid evidence, nothing says it’s accurate evidence, and nothing says the weight of all evidence favors their position. Claims can be “evidence-based” and just as wrong and fallacious as any other claim. Many people would probably claim they see “evidence” for supernatural events, and that’s why they believe in supernatural things.

      And after all that, even if all your evidence is accurate and valid, and even if your model is internally perfect, your claim can still be terribly wrong if your model is based on a few mistaken assumptions.

      Tragically, it all has to be right to make one’s claim true. “Evidence-based” and “True” are light-years apart.

    • Anoneuoid says:

      Just go back and read what the WHO proclaimed. There’s no evidence for human to human transmission, there’s no evidence for nosocomial transmission, there’s no evidence for international transmission, etc, etc.

      These people are mindless. That’s why I say to use science-based medicine instead. Evidence-based medicine seems to disallow common sense.

      • “there is no evidence against human to human transmission, there’s no evidence against nosocomial transmission, there’s no evidence against international transmission”

        I just want to shake the “there is no evidence” people until they wake up… It really is problematic in the same way that NHST puts the null hypothesis on a pedestal: “if you can’t knock this down, it must be true”

        these people need to be sprayed with silly string each time they say this stuff.

        • Martha (Smith) says:

          “these people need to be sprayed with silly string each time they say this stuff.”

          Is this the updated version of “lashed with a wet noodle”?

        • Austin Fournier says:

          Not that it’s especially relevant to the medical discussion, but I’ve been equally annoyed by the opposite with NHST studies – that only statistical significance counts as evidence for anything, so non-significant studies don’t count. If statistical significance is evidence against the null hypothesis, failure to find it must be evidence for it. Just not necessarily strong evidence, or strong enough to overcome the prior probabilities involved.

          Also not to say that the binary variable of significant/non-significant is a very efficient thing to look at if you want to understand the evidence. P-values are better, and I imagine in some cases you need the original data to extract Bayesian evidence from a study done using NHST.
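The point that a nonsignificant result is weak evidence for the null, not zero evidence, can be made quantitative with a likelihood ratio: how much more probable is a nonsignificant result under the null than under some alternative? Here is a rough sketch with invented numbers (a two-sided z-test, an assumed standardized effect of 0.5, and n = 25 per group; nothing here comes from any real study):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Two-sided z-test at alpha = 0.05: "significant" means |z| > 1.96.
z_crit = 1.96

# Under H0 (no effect), z ~ N(0, 1).
p_nonsig_h0 = phi(z_crit) - phi(-z_crit)  # = 0.95 by construction

# Under a hypothetical alternative: true standardized effect 0.5,
# n = 25 per group, so z ~ N(0.5 * sqrt(25 / 2), 1).
# (These numbers are invented for illustration.)
shift = 0.5 * math.sqrt(25 / 2)
p_nonsig_h1 = phi(z_crit - shift) - phi(-z_crit - shift)

# Likelihood ratio: how much a nonsignificant result favors H0
# over this alternative. It is evidence for the null, just weak.
lr = p_nonsig_h0 / p_nonsig_h1
print(f"P(nonsig | H0) = {p_nonsig_h0:.3f}")   # about 0.95
print(f"P(nonsig | H1) = {p_nonsig_h1:.3f}")   # about 0.58
print(f"LR in favor of H0 = {lr:.2f}")         # about 1.6
```

A likelihood ratio of about 1.6 is exactly the situation described: genuine but unimpressive support for the null, easily outweighed by prior considerations.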

    • Paul Hayes says:

      I blame “evidence based medicine” for this crap.

      I would blame “NHST-speak”. Medicine is just another victim.

      • Anoneuoid says:

        Thanks, I didn’t realize that term was already being used:

        Good science is the best and only way to determine which treatments and products are truly safe and effective. That idea is already formalized in a movement known as evidence-based medicine (EBM). EBM is a vital and positive influence on the practice of medicine, but it has limitations and problems in practice: it often overemphasizes the value of evidence from clinical trials alone, with some unintended consequences, such as taxpayer dollars spent on “more research” of questionable value. The idea of SBM is not to compete with EBM, but a call to enhance it with a broader view: to answer the question “what works?” we must give more importance to our cumulative scientific knowledge from all relevant disciplines.

        https://sciencebasedmedicine.org/about-science-based-medicine/

      • Anoneuoid says:

        Too bad the people using this term are ignorant of the science and evidence regarding IV vitamin C:
        https://sciencebasedmedicine.org/naturopaths-exploit-covid-19-with-unproven-treatments-and-self-promotion/

        • Anoneuoid says:

          Speaking of which, this was that recent IV vitamin C trial for ARDS where the patients started dying when the treatment was stopped after 4 days:

          What is critical to appreciate is that mortality is important not just as an endpoint itself, but also because of its effect on the primary endpoint. The results showed a marked difference in mortality after 96 hours (5% in the vitamin C group vs. 23% in the control group) at which time the SOFA score was calculated. Since patients who died were not included in the primary endpoint, the change in SOFA score had eliminated the sickest 18% of patients from the control group (survivorship bias). These patients who died early would have had worsening organ failure and high SOFA scores. The true benefit of vitamin C for the primary endpoint (an organ failure score at 96 hours) might have been dramatically underestimated.

          https://www.ncbi.nlm.nih.gov/pubmed/31785700

          Yep, mindless application of statistics again.
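The survivorship-bias mechanism in that quote is easy to demonstrate with a toy simulation. These are invented numbers, not the trial’s data: patients who die are assigned worse organ-failure scores, and computing the endpoint over survivors only makes the treatment effect nearly vanish:

```python
import random

random.seed(1)

def simulate_arm(n, mortality, survivor_mean, dying_mean):
    """One trial arm: return (scores_all, scores_survivors).

    Patients who die are assigned worse (higher) organ-failure
    scores, mimicking the SOFA-score situation in the quote.
    """
    scores_all, scores_survivors = [], []
    for _ in range(n):
        died = random.random() < mortality
        score = random.gauss(dying_mean if died else survivor_mean, 1.0)
        scores_all.append(score)
        if not died:
            scores_survivors.append(score)
    return scores_all, scores_survivors

def mean(xs):
    return sum(xs) / len(xs)

# Invented parameters echoing the quoted mortality split:
# 23% mortality in control, 5% in treatment; the dying are sicker.
ctrl_all, ctrl_surv = simulate_arm(20000, 0.23, 6.0, 14.0)
trt_all, trt_surv = simulate_arm(20000, 0.05, 6.0, 14.0)

# Endpoint computed over everyone vs. over survivors only.
diff_all = mean(ctrl_all) - mean(trt_all)     # large treatment benefit
diff_surv = mean(ctrl_surv) - mean(trt_surv)  # benefit nearly vanishes
print(f"control - treatment, all patients:   {diff_all:.2f}")
print(f"control - treatment, survivors only: {diff_surv:.2f}")
```

Conditioning the endpoint on survival silently removes the sickest patients from whichever arm has more deaths, which is exactly how a real benefit can be hidden.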

        • Anoneuoid says:

          And YouTube is deleting videos discussing how China is giving patients IV vitamin C and doctors oral megadoses:
          https://mobile.twitter.com/DrRichardCheng1/status/1248973099938111488

    • Rahul says:

      Daniel:

      But I think some criticisms forget that public policy needs not just science:

      Take the topic of masks in the general population. As a policymaker say I am aware that mask usage by lay population will reduce risk.

      But I am also aware that there aren’t enough masks for everyone, and that there’s a lot more utility in rationing them out to, say, frontline workers.

      Now here’s the dilemma: faced with a population that will hoard, what policy prescription do I issue?

      That’s the distinction that critics forget. The seemingly irrational advice may not always stem from scientific ignorance. Sometimes there’s just pragmatism, reality, and all the messy things out there.

      • No, but there have been claims that “there is no evidence that homemade masks reduce the spread of covid-19.” You can’t possibly argue that homemade masks are a limited resource and we have to lie to the public about them in order to conserve them for hospital workers.

        Everyone already knew that N95 masks help people not breathe in as much virus. The only logical way to handle that was to say that N95 respirators must be conserved for people experiencing the highest viral loads, and that we don’t expect viral load to be high when walking your dog or whatnot.

        And then, of course, a couple weeks later, the CDC and others come out saying that everyone should wear homemade masks, because gee we were wrong… and the way they were wrong is that they had a crappy model of what the homemade mask does… it does what Phil complained we should be doing about cleaning plastic from the ocean. It’s an inefficient filter of a HIGH concentration source… which is way better than an efficient filter at a low concentration source. It also reduces fluid energy to reduce the spread of virus over longer distances (sneezes, coughs, heavy breathing, even normal breathing etc)

        • Rahul says:

          Agree with the homemade face masks bit.

          Ok, second critique: as a society we have glorified ideation, and hence we have second-rate people staffing the execution side of things. Career bureaucrats who have probably risen to the limit of their incompetence.

          So I think we as academics need to share part of this blame when we point fingers at the agencies. We have too many career academics who have never been in execution roles. It wasn’t so in, say, the WW2 era. In engineering schools, too, most professors used to be practicing engineers. It’s way easier for such people to translate good ideas into successful solutions.

          Problem now is we have too many people complaining that great ideas, e.g. pooled testing, are not getting implemented. Sure they won’t: we have wrongly incentivized the whole system so that smart people don’t go into the execution side of matters.

          Scaling up a good idea into actual practice needs a lot of skills and qualities that modern academia has de-emphasized, nay even trivialized. I’m talking of stuff like leadership, networks, convincing people, having an address book of people who can build your supply chain in a crisis, etc.

          • Andrew says:

            Rahul:

            I think there are a lot of excellent people working at places like the U.S. Census. They don’t get the glory, so you don’t hear about them. They’re not building an A.I. to identify gay faces; instead they’re just working really hard to get the numbers right on a bunch of official surveys. When it comes to thinking clearly, these people are a lot smarter than many of the scientists we hear about from NPR and TED talks.

            • Rahul says:

              Indeed! But we need more of them, in places like CDC and FDA too.

              I’m just taking a pragmatic approach: we keep reading about great ideas which are not getting implemented.

              I think we have skewed the allocation of good people to one part of the pipeline.

            • Rahul says:

              …and that’s exactly what I am saying: the system needs to change so that there needs to be more glory in execution.
              So we get better people there.

              • Andrew says:

                Rahul,

                Sure, more better people would help. But there’s a big problem when the work of the current better people is not respected, so you have people like Cass Sunstein writing op-eds about glamorous useless social science experiments instead of writing op-eds about the smart people at the census. Or that guy in the U.K. government who made a big fuss a few months ago about replacing tired bureaucrats with fresh faces from outside.

                A problem with the “we need smarter people” attitude is that this approach often seems to (a) ignore the smart people who are already there, and (b) attempt to replace them with stupid people who have some glamour.

              • Martha (Smith) says:

                +1 to Andrew’s response to Rahul

          • jim says:

            “we have glorified ideation and hence we have second rate people staff the execution side of things”

            I don’t think this is true. In the regulatory environment I see a lot of people and agencies struggling to deal with a variety of conflicting directives and interests. It’s a very challenging job. Everyone’s complaining now about getting equipment through the process and onto the streets but forgetting that all these roadblocks have been built intentionally for the exact purpose of preventing shortcuts. I guarantee you that some people complaining right now about equipment and supplies were complaining about companies taking shortcuts and building road blocks to prevent it just a few months ago, playing whack-a-mole trying to shut off every avenue for shortcuts.

            The mask thing is a different story. The bizarre reticence of the health profession to advise the public to wear masks comes from all across the profession, not just the Fed bureaucracy. It seems to stem from the kind of “The-Public-Is-Stupid” mentality that inspired Cass Sunstein’s article more than from any consideration of the public health effects.

             • I’ve been pretty vocally anti excess regulation/hoops for a long time. I think some regulation is important, but for the most part in today’s world it’s really regulatory capture = benefits to large corporations who have armies of compliance officers… it’s basically a cost of doing business, and totally worth it to them because of the ability to squash competition. Martin Shkreli’s hijacking of that anti-parasite drug (whatever it was) was just the logical endpoint of that.

              • jim says:

                “really regulatory capture = benefits to large corporations who have armies of compliance officers”

                Yes indeed! well said.

                It functions as a barrier to entry for competition, and it actually protects shitty companies that have the regulatory process established. So the existing powers – Regs and COs – conspire to maintain the status quo.

                Obviously we need regulation. I think the goal though is to make it simple and strong – cheap to follow – rather than complex and weak – and expensive to follow.

            • Rahul says:

              @jim

              Re the regulatory environment:

              I see two perspectives about it, both true to some extent: (a) You are right about the roadblocks being there for a reason. For a lot of the recent past we, as a society, have been used to a relatively safe, pathologically static and low risk environment. The checks and balances we put in place reflect that reality and tread with extreme caution with anything new or different. First do no harm and all that.

              Now, covid upturns that baseline low-risk environment. Suddenly we have a novel, global risk of magnitude never seen before. The optimal level of risk-taking in interventions to address such a problem needs to be grossly different. Think of the sort of experimentation we were doing around the WW2 era.

              That’s what we are struggling with. What we need is regulatory nimbleness. Different foes need different regulatory standards. Not easily done, but needs to be.

              (b) The other point is, that sadly, a lot of what we perceive as regulatory safety nets have been turned into paper-pushers doing lip service to the concept and just generating loads of paper to save their ass in case shit hits the fan. The truly novel thinkers are few, what we have is an army of automatons who tick the boxes. With very little experience of personal initiative or discretion.

              This is from my anecdotal experiences working with FDA & EPA agencies.

              • jim says:

                “a lot of what we perceive as regulatory safety nets have been turned into paper-pushers doing lip service to the concept and just generating loads of paper to save their ass in case shit hits the fan. “

                Yeah, to some extent that seems true. I have some experience with it as well but sometimes it’s hard to tell what is regulation – imposed by, say, FDA – and what is “perceived regulation” – imposed by companies’ perception of what FDA wants.

                One problem I’ve seen is that people are simply afraid to work with the detail in regulatory language. If there’s a paragraph that describes a nuanced understanding of how the reg should be applied, people want to play it safe and stick to the topic sentence.

                This paragraph:
                “Bligs and blugs are generally undesirable. However, under condition A, bligs are necessary and useful, and under condition B, blugs are frequently useful, so under A or B or similar conditions, bligs and blugs can be used.”

                Is interpreted as:
                “Bligs and blugs are illegal. Never use bligs and blugs. Do not speak of bligs and blugs”

              • Martha (Smith) says:

                Rahul and Jim both make good points here.

    • More Anonymous says:

      Daniel, I share your frustration, but disagree with these kind of arguments against evidence-based medicine.

      Like all large movements based on simple principles, the ideas of evidence-based medicine shouldn’t be understood as “universal truths,” but as rules of thumb that work well in a particular context. Frankly, the context where evidence-based medicine is important is “Doctors think any new treatment they develop saves lives, even when n equals 3. Also, the giant corporations that test these treatments have few morals and endless money. Therefore, medicine will be dominated by profitable snake oils unless we say ‘absence of RCT evidence is evidence of absence.'”

      COVID-19 is mostly very far from this context, so rules of thumb from evidence-based medicine can work terribly, and the people who continue to depend on them can actively hurt progress in exactly the ways you mention. This is compounded by a media that thinks only conspiracy theorists disagree with experts, which means you got branded as a person whose thoughts should be disregarded if you told friends months ago that, “No, masks probably help a lot when COVID-19 is around. I mean, they are used in medicine for a reason.”

      And yet!, if all these failures of evidence-based medicine led to the abandonment of evidence-based medicine itself, the amount of snake oil medicine would rise horribly. COVID-19 is bad but most deaths in the world aren’t from COVID-19 — in terms of human lives, the context where evidence-based medicine is important is still larger than the context where it fails. Also, we don’t want COVID-19 treatment attempts dominated by therapies that don’t work, so RCT-like study is still needed for any treatment that could have substantial patient downsides, like adverse effects or the patient being unable to receive a helpful treatment because a useless one was administered instead.

      Right now, some of the best work on COVID-19 is occurring in those areas where the context of evidence-based medicine still does apply well to COVID-19. That’s the OHDSI project, which because of its huge quantities of EMR data can apply standards akin to those from evidence-based medicine while still moving fast enough to find good therapies and strategies for COVID-19. That by itself should be reason not to “shit-list” evidence-based medicine.

      • Martha (Smith) says:

        As I see it, a big part of the problem is that different people have different criteria for what is “evidence-based”. So it’s important to ask, “Just what is the evidence for this claim?”, and evaluate the claim on the basis of the evidence. By “evidence”, I mean not just the final conclusion, but the quality of the plan, the quality of the implementation, the quality of the data, the quality of the data analysis, … There are lots of ways in which a clinical trial can be really sloppy or really good.

        • More Anonymous says:

          Martha, Yes, good point — when one can go back to the original studies and judge their merits individually, that’s often the best approach.

          I should limit my own points to the case where the conclusions of studies are being considered for use by people who don’t go back and judge the studies’ individual merits. However, that’s most of the time in medicine, simply because of the volume of studies published.

      • I’m fine with “using evidence in medicine”; what I’m not fine with is “Evidence Based Medicine the political movement.” In fact EBM is often anti-science. The way all of these things should be done, INCLUDING the decisions in normal times about treatments, is to choose the action that has the greatest expected utility, based on reasonable priors set by evaluating all the available evidence.

        A reasonable prior for wearing cloth facemasks is that it reduces the exhalation of viruses into the environment by say a factor of 5, and that it reduces the inhalation of viruses by say a factor of 2 or 3. Also cloth facemasks are cheap and have no major downsides when worn for short periods and washed after. The end result should be to recommend their use immediately until evidence comes in that they are harmful or have really no benefit.
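That decision rule can be written down as a tiny expected-utility calculation. Every number below is an invented placeholder (the prior that masks help, the benefit scale, the cost of recommending), just to show the asymmetry being described:

```python
# Hypothetical expected-utility comparison: "recommend cloth masks now"
# vs. "wait for trial evidence". Every number is an invented placeholder.

p_masks_help = 0.7          # assumed prior that cloth masks meaningfully help
benefit_if_help = 100.0     # assumed benefit if they do (arbitrary units)
cost_of_recommending = 1.0  # washing, mild inconvenience: small either way

eu_recommend = p_masks_help * benefit_if_help - cost_of_recommending
eu_wait = 0.0  # waiting forgoes the benefit and avoids the small cost

decision = "recommend" if eu_recommend > eu_wait else "wait"
print(f"EU(recommend) = {eu_recommend:.1f}")  # 69.0
print(f"EU(wait)      = {eu_wait:.1f}")       # 0.0
print(f"decision: {decision}")                # recommend

# The asymmetry: even at a skeptical 5% prior, recommending still
# wins, because the downside of being wrong is tiny.
eu_recommend_skeptic = 0.05 * benefit_if_help - cost_of_recommending
print(f"EU(recommend | skeptical prior) = {eu_recommend_skeptic:.1f}")  # 4.0
```

The point of the sketch is that when one action has a near-zero downside, the decision barely depends on the prior at all, so “no evidence yet” is not a reason to wait.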

        Similarly, we know that a few animals have gotten COVID-19, so we should be recommending that people DON’T let their dogs close to each other in the same way that we don’t let people close to each other. The benefits from being right are high, the cost of being wrong is near zero.

        Even when it comes to some new fancy stent or drug, the right way to handle it is to do basic studies, decide on whether the apparent benefit outweighs the cost, and approve the use early if so, while then continuing to monitor the use in additional studies, and revoking the approval if enough evidence comes in to show that costs outweigh benefits.

        NONE of that is part of EBM as far as I can see. In fact, as practiced, everything I’ve seen is that EBM is mainly a political technique for wielding power to silence critics.

        https://statmodeling.stat.columbia.edu/2020/02/10/evidence-based-medicine-eats-itself/

        and

        https://www.ncbi.nlm.nih.gov/pubmed/26934549

        for example.

        • Martha (Smith) says:

          Daniel said, “Similarly, we know that a few animals have gotten COVID-19, so we should be recommending that people DON’T let their dogs close to each other in the same way that we don’t let people close to each other. The benefits from being right are high, the cost of being wrong is near zero.”

          I’ve only heard of the case of a tiger in a zoo. Can anyone supply other examples? (Or examples where testing was done on animals suspected of having the disease and came up negative? Or are the tests valid in animals other than humans?)

            • Here’s the CDC: https://www.cdc.gov/coronavirus/2019-ncov/daily-life-coping/animals.html

            Key Points

            Coronaviruses are a large family of viruses. Some cause illness in people, and others cause illness in certain types of animals.
            Coronaviruses that infect animals can sometimes be spread to people, but this is rare.
            We do not know the exact source of the current outbreak of coronavirus disease 2019 (COVID-19). The first infections were linked to a live animal market, but the virus is now primarily spreading from person to person.
            The first case of an animal testing positive for the virus in the United States was a tiger that had a respiratory illness at a zoo in New York City.
            We do not have evidence that companion animals, including pets, can spread COVID-19 to people or that they might be a source of infection in the United States.
            CDC is aware of a very small number of pets outside the United States reported to be infected with the virus that causes COVID-19 after close contact with people with COVID-19.
            We do not have evidence to suggest that imported animals or animal products pose a risk for spreading the 2019 novel coronavirus in the United States.
            Further studies are needed to understand if and how different animals could be affected by COVID-19.

            So item 4 shows evidence that animals can get it, item 5 says “we do not have evidence that…” and then item 6 gives the evidence that…

            I can’t roll my eyes hard enough!

            It seems clear to me from reading further on that page that we *DO* have evidence that animals get the disease from humans, and that they get sick and shed the virus… this means that animals are a risk to humans, and humans are a risk to animals. It would be a really bad idea to take your dog for a walk, stop to talk to your neighbor at a 15-foot distance… and the whole time let your dogs sniff each other and play around… Dumb…

            The “cost” of telling people to “keep their dogs physically distant from other dogs” is *zero*. The cost of failing to tell people that is potentially more rapid household-to-household transmission through dogs. The number of people out walking their dog each day is significant.

            Poor “prioritize the null until we have overwhelming evidence to reject it” type thinking.

            • Anoneuoid says:

              I’ve been saying for a while that NHST could be what destroys western civilization. People thought that was hyperbolic, but now it should be clear how it can happen. This virus really isn’t much of a threat, but if governments are reacting this way now, just wait until the great adult measles epidemic happens and 10-100 million adults in the US alone whose antibodies have waned are sick at the same time.

              Fisher and Lakatos said the same.

              “We are quite in danger of sending highly trained and highly intelligent young men out into the world with tables of erroneous numbers under their arms, and with a dense fog in the place where their brains ought to be. In this century, of course, they will be working on guided missiles and advising the medical profession on the control of disease, and there is no limit to the extent to which they could impede every sort of national effort.”

              Fisher, R. A. (1958). “The Nature of Probability”. Centennial Review. 2: 261–274.

              After reading Meehl[1967] and Lykken [1968] one wonders whether the function of statistical techniques in the social sciences is not primarily to provide a machinery for producing phoney corroborations and thereby a semblance of ‘scientific progress’ where, in fact, there is nothing but an increase in pseudo-intellectual garbage. Meehl writes that ‘in the physical sciences, the usual result of an improvement in experimental design, instrumentation, or numerical mass of data, is to increase the difficulty of the “observational hurdle” which the physical theory of interest must successfully surmount; whereas, in psychology and some of the allied behaviour sciences, the usual effect of such improvement in experimental precision is to provide an easier hurdle for the theory to surmount’. Or, as Lykken put it: ‘Statistical significance [in psychology] is perhaps the least important attribute of a good experiment; it is never a sufficient condition for claiming that a theory has been usefully corroborated, that a meaningful empirical fact has been established, or that an experimental report ought to be published.’ It seems to me that most theorizing condemned by Meehl and Lykken may be ad hoc3. Thus the methodology of research programmes might help us in devising laws for stemming this intellectual pollution which may destroy our cultural environment even earlier than industrial and traffic pollution destroys our physical environment.”

              Lakatos, I. (1978a). Falsification and the methodology of scientific research programmes. In J. Worrall & G. Currie (Eds.), The methodology of scientific research programmes: Imre Lakatos philosophical papers (Vol. 1, pp. 8–101). Cambridge, England: Cambridge University Press.
              http://strangebeautiful.com/other-texts/lakatos-meth-sci-research-phil-papers-1.pdf

              • Martha (Smith) says:

                Anoneuoid said,
                “just wait until the great adult measles epidemic happens and 10-100 million adults whose antibodies waned in the US alone are sick at the same time.”

                Could you give some detail on why you think adult measles antibodies are waning? (I can imagine various possibilities.)

              • Martha: most immunizations wane. I’m not sure how much measles wanes. I know I got an injection in the mid 1990’s as part of a requirement for attending the university I was at. They were concerned that people had probably reduced immunity.

                The thing about measles is that we have a vaccine available, and still a large number of kids and adults DO get it. If there were outbreaks, we could offer those vaccines right away. Many would hopefully take them. It could be rather devastating to the anti-vaccine crowd, but I can’t imagine a measles outbreak would be widespread and cause more damage than say a year where the flu vaccine targeted the wrong strains. Certainly not like coronavirus where literally everyone is immunologically fairly naive.

              • Anoneuoid says:

                Here is a good paper about it:

                It is clear that while high routine vaccine coverage achieves a temporary elimination of measles infection in the population, the proportion of vaccinees becoming susceptible to the vaccine-modified form of infection increases slowly. As shown in Fig. 3, 25 years after starting vaccination, 28% of the total population will be vaccinated, 18% will still have protective vaccine-derived immunity and 10% will be susceptible to vaccine-modified infection. Fifty years after the beginning of the vaccination programme, 55% of the total population have vaccine-induced immunity, and 31% will be susceptible to vaccine-modified infection. It is clear that as time passes, the pool of susceptible individuals will increase until a critical threshold is reached where sustained vaccine-modified measles transmission occurs.

                https://www.sciencedirect.com/science/article/pii/S0264410X03004493

                It’s a modelling paper, but it contains refs to the data you can follow. Maternal antibodies from vaccinated mothers also wane faster than those from mothers who had “natural” measles.

                A related issue is vaccinating at just under the eradication threshold, which is *also* expected to cause a honeymoon period followed by a very large epidemic:

                The second scenario represents the impact of a vaccination programme that reaches high levels of coverage (85% of all new-borns) which are, nevertheless, not high enough to lead to eradication of the agent. However, for the first 15 years after the introduction of vaccination, it appears as if eradication has been achieved, there are no infections. Then, suddenly, a new epidemic appears as if from nowhere. This is an illustration of a phenomenon known as the ‘honeymoon period’. This is the period of very low incidence that immediately follows the introduction of a non-eradicating mass vaccination policy. This happens because susceptible individuals accumulate much more slowly in a vaccinated community. Such patterns were predicted using mathematical models in the 1980s6 and have since been observed in communities in Asia, Africa and South America7. Honeymoon periods are only predicted to occur when the newly introduced vaccination programme has coverage close to the eradication threshold.

                https://academic.oup.com/bmb/article/62/1/187/281114
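                The honeymoon arithmetic can be sketched back-of-envelope (this is my illustration, not a calculation from the paper, and every parameter below is a rough assumption): ignore infection entirely after elimination, let births slowly refill the susceptible pool, and ask how long until the effective reproduction number climbs well above 1 again.

                ```python
                # Back-of-envelope honeymoon-period sketch. All numbers are rough
                # assumptions for illustration, not taken from the linked paper.
                R0 = 15          # assumed basic reproduction number for measles
                coverage = 0.85  # coverage below the ~1 - 1/R0 ≈ 0.93 eradication threshold
                mu = 1.0 / 70    # yearly birth/death turnover (70-year lifespan)

                S = 1.0 / R0       # pre-vaccination endemic equilibrium susceptible fraction
                target = 1.5 / R0  # crude proxy: epidemic plausible once R_eff = R0 * S > 1.5

                years = 0
                while S < target:
                    # each year, births enter susceptible with prob (1 - coverage),
                    # and deaths remove a proportional share of the susceptible pool
                    S += mu * ((1 - coverage) - S)
                    years += 1

                print(f"~{years} years of 'honeymoon' before R_eff exceeds 1.5")
                ```

                With these made-up numbers the susceptible fraction drifts from 1/R0 toward 1 − coverage = 0.15, and crossing the proxy threshold takes a few decades, which is the qualitative point: the epidemic appears “as if from nowhere” long after vaccination began.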

                Imo, the current measles vaccination program is a huge mistake. It’s like it’s designed with redundancy to cause a giant epidemic in adults.

              • Anoneuoid says:

                I can’t imagine a measles outbreak would be widespread and cause more damage than say a year where the flu vaccine targeted the wrong strains. Certainly not like coronavirus where literally everyone is immunologically fairly naive.

                Measles is much more infectious; there are cases where it’s thought a single child’s coughing/sneezing infected an entire school as everyone walked through the same space over the course of a day.

                *If* boosters are the answer, they need to be given before the problem starts.

            • Martha (Smith) says:

              Thanks, Daniel.

        • Martha (Smith) says:

          Daniel said, “In fact, as practiced, everything I’ve seen is that EBM is mainly a political technique for wielding power to silence critics.”

          I think that there are also cases where people have a quasi-religious belief that something “evidence-based” is like something god-given, whereas “evidence-based” can mean very different things to different people — i.e., different people may have different criteria for something to be “evidence based” in their eyes. It’s like saying “proven” without giving any details about how it was “proven” — it’s just the label that counts for many people, whereas in reality, the devil is in the details.

          • Yes, I agree. Often saying something like “the evidence based treatment is to do X” is used as a way of saying “I know what the right treatment is and YOU are wrong, and I have the science to prove it so shut up”

            It’s political not scientific.

            In almost every real situation there are variations in treatment results that come from variations in things like comorbidities, age, sex, racial/genetic background, mental health issues, or whatever. Furthermore, each person has different things that are important to them. For example, a married parent of small children may have much less tolerance for rare side effects that risk, say, kidney damage than a single person living alone. Whenever an “evidence based result” comes along that asserts that all people should do X, it implicitly asserts that costs and benefits have been balanced in the “correct” way and that anyone who thinks otherwise is “wrong”. But in fact costs and benefits are value judgements, and there is rarely a single correct way (if it were *always* the case that X was better than Y, then Y wouldn’t be an allowed treatment).

            Perhaps a person is more willing to put up with pain because they know that several of their family members have become addicted to painkillers and died, for example. Sure the “average” result from some RCT may say that people recover more quickly if they take such and such painkiller… but it’s *absolutely* wrong to think that this is just a constant for all people.

            All of these subtleties, particularly the part about the implied utility function, seem to be missing in EBM as I have been exposed to it.

            • Martha Smith says:

              “It’s political not scientific.”

              Good way to put it.

            • Rahul says:

              “It’s political not scientific.”

              I think that’s true for a lot of the Covid mess: Scaling up testing, or allowing labs to test or pooled testing etc. are stuck not due to scientific but political reasons.

              The problem is most of us get paralyzed by the politics rather than navigate it to achieve the right policy. Science means zilch, unless you can get it rolled out into action.

              • True enough. Politics is hard. It’s particularly SUPER hard in the US at the moment.

                BTW thanks for the nudge. I’ve had a few different people contact me about the pooled testing idea, and right now I’m trying to focus on the route that probably has the best chance of making a difference. I need to follow up with those guys!

              • Rahul says:

                @Daniel:

                Sure, I hope the pooled testing idea gets more traction! Already I see it getting rolled out in India. Even further I hope the multiplexed testing ideas work out too!

                I think it’s not just politics that is hard, it’s more broadly EXECUTION and translation to practice that’s the hard problem. And politics is a part of that.

                Also, I don’t think that it’s fundamentally a harder problem than measurement or ideation but it’s just that it has been a neglected art.

              • Rahul, can you point me to any articles on rollouts of pooled testing in India? That would be a nice thing to have a reference for. If it’s too early, then at least maybe email me once that hits the news media or something? I’m definitely interested in following up on that.

              • Anoneuoid says:

                Scaling up testing, or allowing labs to test or pooled testing etc. are stuck not due to scientific but political reasons.

                We know close to zero about any of these tests and the efficiency of the sampling procedure. It isn’t going to surprise me when we find out it was like 70% false negatives and 70% false positives.

              • Anoneuoid says:

                We performed a retrospective study that evaluated all nasopharyngeal and bronchoalveolar lavage samples collected between January 1, 2020, and February 26, 2020, from inpatients and outpatients who had negative results by routine respiratory virus testing (respiratory pathogen or respiratory viral panels [GenMark Diagnostics] or Xpert Xpress Flu/RSV [Cepheid]) and had not been tested for SARS-CoV-2. After February 26, 2020, clinical testing for SARS-CoV-2 on individual samples was begun, as recommended by institutional policy. Nine or 10 individual samples were pooled, and screening was performed using reverse transcriptase–polymerase chain reaction targeting the envelope (E) gene.3 Positive pools were deconvoluted and individual samples tested for both E and the RNA-dependent RNA polymerase (RdRp) gene for confirmation.

                https://jamanetwork.com/journals/jama/fullarticle/2764364

                It is strange to me that they only included samples that tested negative to everything else. Isn’t this going to miss anyone with a coinfection and lead to a higher false negative rate since those are more likely to include swabs without enough material on them, etc?
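              For what it’s worth, the efficiency case for the Dorfman-style pooling described in the quote is simple arithmetic: a pool costs one test if negative, and pool-size extra tests if it has to be deconvoluted. A quick sketch (the prevalence below is an assumption of mine, not a number from the study):

              ```python
              # Expected tests per specimen under Dorfman pooling with one round
              # of deconvolution. p is an assumed prevalence among tested samples.
              def tests_per_specimen(pool_size: int, p: float) -> float:
                  # 1/pool_size for the pooled test, plus one retest per specimen
                  # whenever the pool is positive (prob 1 - (1-p)^pool_size)
                  return 1.0 / pool_size + (1.0 - (1.0 - p) ** pool_size)

              eff = tests_per_specimen(10, 0.02)
              print(f"{eff:.3f} tests per specimen")  # ~0.283, a ~3.5x saving
              ```

              Note this only counts test consumption; it says nothing about the dilution/false-negative worry raised above, which gets worse as pools get larger.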

        • More Anonymous says:

          Daniel,

          The way all of these things should be done, INCLUDING the decisions in normal times about treatments, is to choose the action that has the greatest expected utility based on reasonable priors set by evaluating all the available evidence.

          I’d agree if that were achievable in a fairly objective way. But it is difficult enough to get reliable, honestly reported research when the primary endpoint is as simple as 5-year overall survival. Projections of expected utility invite so many more opportunities for nonobvious mistakes and concealed dishonesty that it seems infeasible.

          Regarding cloth face masks, yes, I agree and have for a long time — cloth facemasks have almost no potential downside, and if they achieve as little as a 5 percent reduction in R-effective, that would be amazingly worthwhile. Even a tiny reduction in R-effective translates to many prevented deaths and the possibility of earlier workplace reopenings. But most expected-utility-flavored analyses are not clear cut like that. An interesting question could be to try to find good criteria for the border between clear-cut and debatable expected utility assessments.
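          The reason even a 5 percent cut matters is that R-effective compounds geometrically across transmission generations. A two-line sketch (the starting R and generation count are arbitrary illustrative choices):

          ```python
          # How a 5% cut in R-effective compounds over transmission generations.
          R = 1.3          # assumed R-effective without masks (illustrative)
          cut = 0.95       # a 5% reduction
          generations = 10

          baseline = R ** generations
          with_masks = (R * cut) ** generations
          print(f"generation-{generations} cases cut by {1 - with_masks / baseline:.0%}")
          # 0.95**10 ≈ 0.599, i.e. roughly 40% fewer cases in that generation
          ```

          The ratio depends only on 0.95 raised to the number of generations, so the same 5 percent per-generation reduction snowballs regardless of the starting R.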

          Regarding the Ioannidis paper, I suggest re-reading it — He is saying that the ideas of evidence-based medicine are worthy, but the movement has been hijacked by industry in many ways. He’s not really saying that flaws in the ideas of evidence-based medicine are responsible for the hijacking, but is instead saying that industry hijacked evidence-based medicine because it will try to hijack any movement where large profits are involved.

          You write

          Even when it comes to some new fancy stent or drug, the right way to handle it is to do basic studies, decide on whether the apparent benefit outweighs the cost, and approve the use early if so, while then continuing to monitor the use in additional studies, and revoking the approval if enough evidence comes in to show that costs outweigh benefits.

          But this is the kind of approach that Ioannidis says industry has already hijacked — RCTs and this too.

          When I see views like yours, what I worry about is that they are the next thing to be hijacked and will be used to end evidence-based medicine for reasons of profit, not patient benefit.

          • Everything will be hijacked, but it’s far easier to hijack something if the expected utility component is hidden under the bedcovers instead of out in the open. If every study had to have an explicit patient utility and social utility function provided by a regulatory body to get to its conclusion, it would be harder to hijack in my opinion.

            Of course, everything gets hijacked, so there must be constant vigilance and push back against changing those patient and social utility functions to benefit drug companies etc. It’s much easier to regulatory capture bodies when the whole thing is behind closed doors. Putting the patient/social utility out in the open allows it to be an open conversation.

            • BTW I’d add that the price of the drug is obviously a component of social utility. The approval should be based on a regulatory price ceiling. The drug company can offer a maximum price they’ll charge, and the regulatory body puts that into the expected utility calculation. The approval is then based on making it illegal to charge more than the bid price. This is a reasonable strategy in my opinion in a world where patents hand out monopoly pricing licenses.

            • More Anonymous says:

              Here are two reasons that I usually don’t see suggestions similar to yours within medicine and biostats:

              * Detailed expected utility assessments would usually require effect estimates for many different subgroups (by age, by comorbidities, etc.). But in medicine, subgroup analyses of RCTs are notorious for bad statistics, require giant sample sizes, and are a central source of (often successful) attempts to convert failed trials into products that are marketed at population subgroups. On the other hand, new methods of estimating causal effect heterogeneity from RCT data have been a hot topic recently, so maybe there will be useful progress there.

              * In medicine, cost-benefit analyses are often marketing pieces masquerading as science because whether an intervention is judged cost-effective is exceptionally sensitive to hidden assumptions. In my experience, many people who do cost-benefit analyses in medicine are either profit-driven or are (well-intentioned) researchers who see the abstract value in the cost-benefit idea, but haven’t actually tried enough medical cost-benefit analyses to realize their deep problems.

              Still, maybe someone who really likes the idea of cost-benefit analyses could address the problems that currently exist. Good topics might be

              * How to delineate between cases where cost-benefit is clear cut and those where it is debatable? (as mentioned above)

              * How to deal with situations like the following, which are common: You are trying to compare chemotherapies A and B. However, for chemotherapy A there is a study saying that, 15 years after administration, the risk of severe heart disease is increased 2-fold in recipients. That’s a huge increase in risk because heart disease is so common — enough of an increase that, if the study’s result is true and B does not carry the same risk, B is clearly preferable. Yet it is only 1 observational study saying this (it is rare to study such long-term adverse effects), and the study has confounding issues. On the other hand, the study is very large, so it can’t be dismissed offhand. Yet another complication is that no one has even studied 15-year heart outcomes for chemotherapy B, so it could have the same problems (maybe 15 years haven’t even passed since the introduction of chemotherapy B, or maybe it’s simply that no one has studied this topic yet). So, what do you do for your cost-benefit analysis? There is no good answer I know of, and this kind of situation seems like a fundamental flaw in the idea of medical cost-benefit analyses.

              • More Anonymous: I’d first like to just thank you for pushing back and giving specific examples of real world issues. I love Gelman’s blog because of all the conversations like this!

                I agree with you that cost benefit analysis is not just trivial. In fact I think it’s a difficult problem that needs a lot of attention because it is ultimately THE ONLY logical way to make decisions under uncertainty. Recent calls for change in the science world have called for utility based decision making for good reason (I’m thinking of many articles by Sander Greenland and colleagues such as: “The objection we hear most against retiring statistical significance is that it is needed to make yes-or-no decisions. But for the choices often required in regulatory, policy and business environments, decisions based on the costs, benefits and likelihoods of all potential consequences always beat those made based solely on statistical significance” (https://www.nature.com/articles/d41586-019-00857-9))

                I also agree that lots of what *is actually done today* in terms of cost/benefit is just marketing baloney. In part this is because a lot of what’s done in all of medicine is marketing baloney, and in part because marketing people are more innovative. What is done in non-marketing medical fields is largely NHST based baloney and they haven’t learned the utility based baloney yet… ;-) So arguing that it’s mostly bad these days isn’t really arguing against the method, more arguing that the method needs wider adoption before people will start to do good things with it.

                “So, what do you do for your cost-benefit analysis? There is no good answer I know of, and this kind of situation seems like a fundamental flaw in the idea of medical cost-benefit analyses.”

                There *IS* a good answer. In fact, there is a provably optimal class of answers to the question “what do we do under uncertainty?” It’s the Bayesian decision theory methods (the optimality has some technical restrictions, none of which are really important in practice I think).

                Basically what you do in the kind of case you mention is to set up hierarchical models in which the outcomes under unknown conditions are estimated using similar conditions through partial pooling, and then we wind up with reasonable levels of uncertainty for all the cases.

                What you really find out though is that the typical study and the typical “outcomes” of that study are simply insufficient to really get good estimates. The situation “for chemotherapy A there is a study saying that, 15 years after administration, the risk of severe heart disease is increased 2-fold in recipients.”

                is already an example of failure to appreciate uncertainty which is common in medicine and other fields. The 2-fold number needs to become a posterior distribution, where you discover that it’s something like “between .7 and 2.3 fold” once you’ve done a good job of analysis. Pooling estimates for other drugs towards this number gives you the result “we don’t have enough information to distinguish between these but it’s suggestive that this might be an issue and someone should start putting together data and a study etc”

                There ARE very good ways to approach all of this. They are not typically used in medicine. This is a big problem.
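                To make the shape of such a calculation concrete, here is a toy sketch of the chemo decision where the posterior over the risk ratio is a small discrete stand-in for the “between .7 and 2.3 fold” range above. Every number (weights, baseline risk, survival gain, utilities) is invented for illustration, and the utilities are exactly the value judgements that should be set openly, per patient or by a regulator:

                ```python
                # Toy Bayesian decision sketch for chemo A vs B.
                # ALL numbers are hypothetical, chosen only to illustrate the mechanics.
                rr_support = [0.7, 1.0, 1.5, 2.3]      # posterior support for A's heart-risk ratio
                rr_probs   = [0.25, 0.35, 0.25, 0.15]  # hypothetical posterior weights

                baseline_heart_risk = 0.10  # assumed 15-yr heart-disease risk without A
                survival_gain_A     = 0.08  # assumed extra survival probability of A over B
                u_survive, u_heart  = 1.0, -0.5  # utilities: explicit value judgements

                # Expected utility of choosing A, averaging over posterior uncertainty;
                # B is the reference action with expected utility 0.
                eu_A = sum(
                    p * (survival_gain_A * u_survive
                         + baseline_heart_risk * (rr - 1.0) * u_heart)
                    for rr, p in zip(rr_support, rr_probs)
                )
                eu_B = 0.0
                choice = "A" if eu_A > eu_B else "B"
                print(f"E[U(A)] = {eu_A:.4f}, choose {choice}")
                ```

                The point is not these particular numbers but that the decision flips transparently when you change the utilities or the posterior weights — which is exactly the debuggable, out-in-the-open property being argued for here.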

              • More Anonymous says:

                Thanks Daniel! I also find these discussions helpful.

                The quote from the Amrhein, Greenland, and McShane Nature article is a bit of a straw man, because few people actually make decisions without considering costs, benefits, and likelihoods — even among people who like NHST a lot.

                It’s a question of whether the cost-effectiveness analysis occurs in a heuristic, largely qualitative way (which I support, absolutely) or whether it is fully quantitative (which I think is infeasible and prone to manipulation). I don’t know if Greenland advocates quantitative medical cost-effectiveness analyses.

                In the chemotherapy example I gave, I was trying to communicate a situation in which the cost-effectiveness result is fully sensitive to qualitative choices in the analysis (the large harm is reported by only 1 study, that study has confounding problems, and the other chemotherapy hasn’t even been studied similarly), not the quantifiable uncertainties (is it a 2-fold increase or some posterior distribution with mean near 2).

              • “In the chemotherapy example I gave, I was trying to communicate a situation in which the cost-effectiveness result is fully sensitive to qualitative choices in the analysis”

                This is essentially always true. In your example, how would you suggest the decision should be made?

              • More Anonymous says:

                This is essentially always true.

                In medicine yes, which is what I see as the problem with medical cost-effectiveness analyses.

                In your example, how would you suggest the decision should be made?

                Have a largely disinterested government body perform meta-analyses and a narrative review. The meta-analyses are limited to endpoints that multiple studies consider; the narrative review is careful to mention the issues that can’t be meta-analyzed (like the study on 15-year severe heart disease outcomes).

                Then, for approval or denial of approval, convene a panel of accomplished doctors, epidemiologists, and statisticians from other areas of medicine (other areas, to reduce conflicts of interest, which are not only financial but also involve research groupthink). Have them decide by discussion among themselves, conducted in public.

                In comparison with a full quantitative cost-benefit analysis, the benefits of this approach are: (a) The contribution of each factor to a full cost-benefit analysis is opaque, but everyone can read the narrative review and decide for themselves. (b) The transparency of the panel discussion and meta-analyses is not only present but demonstrable to others, meaning that when others refer to it they can judge effectively whether it was fair (not possible for cost-effectiveness analyses because of the expert level required to understand their strengths and limitations). (c) From my perspective, compromise achieved by discussion among good, skilled people (the panel, hopefully) is at present more likely to be right than a quantitative cost-effectiveness analysis (democracy works fairly well when good people are involved, but it’s easy to make mistakes in cost-effectiveness analyses and some things are really difficult to quantify right).

                The above is basically the USPSTF approach. In actuality, I disagree with multiple USPSTF decisions, but I don’t see the flaws as being due to the ideas behind USPSTF. Instead, I see them as attributable to USPSTF not being given enough power, independence, and money. USPSTF has lacked all three for years, and consequently it is easy to fall into the perspective that the fundamental ideas behind USPSTF are at fault, rather than issues of implementation. But actually, I think it is only a problem of implementation. Notably, regardless of its faults, USPSTF also has a track record of being much better than the judgement of professional organizations.

                Sadly, the power, independence, and money of USPSTF are politically unlikely to improve enough irrespective of whichever party wins in the US in 2020. Still, the onus is on any proposed alternative to the approach outlined above to demonstrate there is a good probability that it would do better. I think that in the real world cost-effectiveness analyses would do far worse, honestly, even though in an ideal world I agree they would do better.

              • I’m afraid I see the same issues you do, and yet come to the opposite conclusion.

                The problem with “get a bunch of smart people together and have them talk it out” which is more or less what you’re advocating, is that it’s completely opaque as to how the actual decision was made, what the decision implies, what the criteria were, etc. It’s the equivalent of the deep neural net black box AI. It’s horribly subject to subversion without anyone knowing it. Under the table deals. Payoffs in the back room. It’s a nightmare.

                What we want as a society is a clear picture of how the decision comes about, what the criteria are, and a system that can be debugged… As an alternative to a bunch of connections in a neural net, we want formal computer code that can be read, and debugged. A formal utility function should be arrived at by a bunch of people discussing the issues, and then writing down the formulas/code that they agree on. If they can’t agree on a particular form, then create a Bayesian prior over the utility… In the end though, you need a debuggable formal specification of what we value in the decision. And the version created by the so called “experts” is just the first step.

                A decision should be made based on some list of important outcomes, and a weighting scheme for how much each kind of outcome matters to people in a real world… it should include a bunch of sample calculations, which show how “good” different scenarios are. Test cases, so to speak. All of this is stuff we do in software development all the time, and decision algorithms are just software whether they are calculated in a computer or not.

                Sure, as a first step a bunch of smart people can come up with the code for some utility function… But that’s not the end, that’s just the beginning… Next it’s made public, and the public review begins. Places like this blog test out the code, see if it makes sense… Other people demand that something important to them is taken into account… the politics is out in the open, not just a couple of “experts” who happen to get the ear of the govt and make a business out of taking “legal bribes” under the table from Pharma…

                no, the world won’t move forward in medical decision making until decision making aligns with *real people’s utilities* at least reasonably well. And that won’t happen until millions of people have been able to look at the decision criteria and have their say in what they think is wrong.

              • Austin Fournier says:

                This sounds interesting. Could you tell me where to read up more on these new methods for estimating causal heterogeniety?

              • Austin, I know a lot of people don’t love the 8 schools model, but it really gets at the core of partial pooling. Assuming you have a more complicated causal model, and you have heterogeneity of unknown parameters, you can use the variation across the groups with data to estimate a reasonable range/model for the variation across groups without data. This allows you to impute values even when no data is available.

                See BDA3: http://www.stat.columbia.edu/~gelman/book/BDA3.pdf

                Pg 119, talks about the 8 schools example.

                In the general case you have something like

                Outcomes[i][j] where there are a bunch of outcomes for each treatment i, and you have predictors

                f(params[i], covariates[i])

                you specify that the params[i] all come from a common distribution of parameters across all the treatments… which itself has hyper-parameters that describe this variation.

                Then you fit the model and wind up not only with estimates of the params[i] for the cases where you have data, but for i values where there is no data, the entire posterior over the hyperparameters gives you the range of possibilities for the data-free cases. Sure it’ll have a lot of uncertainty in the data-free cases, but that’s on purpose, since you really don’t know that much about the data-free cases, except that they all have some commonality across the hyperparameter distributions.
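As a minimal numerical sketch of the shrinkage this produces, here are the well-known 8-schools numbers from BDA3, with point values plugged in for the hyperparameters for simplicity (a full Bayesian fit would average over their joint posterior instead):

```python
import numpy as np

# The classic 8-schools numbers from BDA3: estimated coaching effects and
# their standard errors for eight schools.
y = np.array([28., 8., -3., 7., -1., 1., 18., 12.])
sigma = np.array([15., 10., 16., 11., 9., 11., 10., 9.])

def partial_pool(y, sigma, mu, tau):
    """Posterior mean of each group effect given hyperparameters (mu, tau):
    a precision-weighted compromise between the group's own noisy estimate
    and the shared mean mu."""
    w = (1 / sigma**2) / (1 / sigma**2 + 1 / tau**2)
    return w * y + (1 - w) * mu

# For illustration only: plug in point values for the hyperparameters.
mu_hat = float(np.mean(y))
theta = partial_pool(y, sigma, mu=mu_hat, tau=5.0)
print(theta)  # every estimate is pulled toward mu_hat = 8.75

# A group with NO data is imputed straight from the hyperparameters:
# its posterior is just the population distribution, roughly mu_hat +/- tau.
```

Noisier groups get pulled harder toward the shared mean, which is the partial-pooling behavior described above.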

              • More Anonymous says:

Daniel, It sounds like our perspectives are too far apart to reach agreement here. I would prefer that statistical analyses are used to handle the features of a decision-making problem that can be known with reasonable confidence (for example, that can be meta-analyzed because enough studies consider them), and that the remainder of the decision-making process is turned over to discussion by a panel of experts. On the other hand, it sounds like you would prefer that a large Bayesian model is created that integrates all results from relevant studies and returns a single distribution that determines the final decision. You worry about the susceptibility of panels of flawed experts to mistakes and to malfeasance, while I worry about the susceptibility to modeling hubris and obfuscation-of-malfeasance through complexity.

                I tried to make this summary even-handed, maybe successfully maybe not.

              • More Anonymous says:

                Austin, Here’s a recent review.

                Rekkas et al. Predictive approaches to heterogeneous treatment effects: a systematic review. 2019; doi.org/10.1101/19010827.

                Here’s an older review I’m more familiar with, so can recommend with greater confidence. But it might not be so up to date.

                Kent et al. Personalized evidence based medicine: predictive approaches to heterogeneous treatment effects. BMJ. 2018; 363: k4245.

                Stephen Senn is always worth reading on personalized medicine, treatment effect heterogeneity, and most topics really.

                Frank Harrell has good posts on this too: fharrell.com/#posts

      • Martha (Smith) says:

        More Anon, Daniel, and others:

        What do you think about this effort to get data on COVID-19 data directly from the public? https://howwefeel.org

        • More Anonymous says:

          I think selection biases will be a big problem, and I’m not sure how they would be overcome. But there are a lot of impressive people on this project, and maybe they know something I don’t.

In general, I support the idea of asking the public about their medical theories more than most researchers do. I tend to think that 99.9% of what you get will be nonsense, but the remaining 0.1% can be gold. There are a lot of motivated, well-intentioned, thoughtful people out there who aren’t in the medical field.

    • chrisare says:

I don’t understand what is wrong with saying there is no evidence for something, if the statement “there is no evidence” is true. In the COVID-19 examples you’ve given, the “there is no evidence” statements are false. The veracity of the statements is then the problem: there’s nothing inherently wrong with claiming there’s no evidence for something.

      • Saying “we have no evidence whether or not X is happening, more information is needed, but until then it’s safest to act as if ….” would be fine. In fact I’d be ecstatic if that’s what people did.

Saying “we have no evidence for X” while strongly implying that you should therefore act as if NOT X is bad logic and bad decision making; it’s at the heart of NHST-type thinking, where we accept the “null” until it’s been shot down by an overwhelming amount of data. It’s also shockingly common.

        It’s also usually wrong just as a matter of fact. We always have *some* level of evidence about what’s going on. For example in the COVID situation we have the SARS and MERS outbreaks which should be similar, if not exactly the same, and we have a history of studying other types of coronaviruses, which also tell us something about what to expect. Finally, we often have a little direct evidence, it’s just not overwhelming enough to let us “reject the null hypothesis with p less than 0.05” so saying that we have no evidence is often a lie… somewhere above I linked to the CDC commentary on animals/pets where they contradict themselves LITERALLY on the very next line of text after saying they have no evidence…

        Consider that logically speaking it’s possible to say:

        “we have no evidence that masks help”

        or

        “we have no evidence that masks don’t help”

        which one should be taken as the “null hypothesis”… the answer is NEITHER… there’s no such thing as a “correct” null hypothesis…

        So, accurately stating the state of our available evidence, and then making a decision about what to do based on which is the safest bet… GREAT…

        Misstating the state of evidence, strongly implying or even stating that we should act as if a “null” is true until sufficient evidence comes in to override it… That’s terrible.
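The “safest bet” part is just an expected-loss calculation; a toy version with invented numbers:

```python
# Toy decision sketch: state the uncertainty honestly, then pick the action
# with lower expected loss. All probabilities and losses are invented.
p_masks_help = 0.5          # honest "we don't know yet"

loss = {
    # (action, masks_actually_help) -> loss
    ("wear", True): 1,      # minor inconvenience, benefit realized
    ("wear", False): 1,     # minor inconvenience, no benefit
    ("skip", True): 50,     # missed a cheap way to cut transmission
    ("skip", False): 0,
}

def expected_loss(action, p):
    return p * loss[(action, True)] + (1 - p) * loss[(action, False)]

for action in ("wear", "skip"):
    print(action, expected_loss(action, p_masks_help))
```

Notice that no “null” is privileged anywhere: the decision follows from the asymmetry of the losses, and “wear” still wins here even if the probability that masks help is well below 50%.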

  2. Anonymous says:

    “..it’s no surprise that some nontrivial percentage of influential American psychology professors are going to have the sort of attitude toward scientific theory and evidence that would lead them to have strong belief in weak theories supported by no good evidence”

Yeah, and wow, if you think it’s bad among professors, try scrolling the ranks of licensed professional psychologists and counselors. Every crank treatment that’s ever been dreamed up is readily available from licensed practitioners. It’s strange that only “gay conversion therapy” has come under withering attack in recent years. Not that I believe it works, but it’s just one of hundreds of crank procedures that licensed quacks are peddling. Why restrict that, but continue to allow all the other cracker-jack treatments?

    So hey, I’m no one to say these things don’t work. But why are they given prestige and validity by a licensing requirement? You don’t need a license to practice astrology, why should you need one for Freudian analysis?

    Thank goodness MDs aren’t allowed to practice with the standard of efficacy used by psychologists.

    • Kyle C says:

      Yes. Beware when, around the third session, a therapist tells you that they now consider you “a friend” and can share with you the offbeat remedy that they only feel comfortable talking to “friends” about. That’s when the crystals and woo come out.

      • Terry says:

        Where did you get this information? Personal experience? Friend experience? Professional experience?

        I’ve never heard this. Genuinely curious.

        • Kyle C says:

          Personal anecdote (happened to me twice, arguably three times) given general assent by therapy veterans among my friends. I don’t know, maybe I’m just a special target for this.

      • Bill Spight says:

In their training psychotherapists are strongly cautioned against dual relationships. Being friends with a client is questionable, if not a breach of their code of ethics. For your psychotherapist to declare “we’re friends now” is indeed a red flag.

        • Kyle C says:

          This is one reason (among many) I don’t trust studies of the efficacy of particular psychotherapies. Most people will only be able to book mediocre therapy, at best. I’m affluent and live in a major metro area and this was the best I could find.

        • Martha (Smith) says:

          “In their training psychotherapists are strongly cautioned against dual relationships.”

          Since when does being “strongly cautioned against” something substantially reduce the practice of the behavior? People are strongly cautioned against driving while intoxicated, but lots of people do it anyhow, despite possible strong consequences if they’re caught doing it.

    • Jake says:

      What other of the ‘hundreds of crank procedures’ are as violent and abusive as gay conversion therapy?

      • jim says:

        “What other of the ‘hundreds of crank procedures’ are as violent and abusive as gay conversion therapy?”

        • jim says:

The more challenging one’s mental health issues, the more damaging it is to be misled by providers. People with mental health issues frequently commit suicide and murder, so while I can’t know the outcome of every gay conversion therapy, I can imagine that there are many situations where crank therapy is just as bad as gay conversion therapy.

          • Indeed, I heard a story where a minor was allowed to have an injectable drug that stabilized his severe mental illness but as soon as he turned 18 they canceled that prescription for insurance reasons, despite multiple warnings from the primary mental health provider. Within weeks he had murdered someone and was serving life in prison.

            So, just business as usual in mental health care is pretty bad.

          • Jake says:

            > I can imagine that there are many situations

            Name three.

Gay Conversion Therapy is not just being ‘misled’, it’s also physically violent: https://www.nytimes.com/2018/01/24/opinion/gay-conversion-therapy-torture.html

            “””But it didn’t stop with these hurtful talk-therapy sessions. The therapist ordered me bound to a table to have ice, heat and electricity applied to my body. I was forced to watch clips on a television of gay men holding hands, hugging and having sex. I was supposed to associate those images with the pain I was feeling to once and for all turn into a straight boy. In the end it didn’t work. I would say that it did, just to make the pain go away”””

            https://www.huffpost.com/entry/realities-of-conversion-therapy_n_582b6cf2e4b01d8a014aea66
            “””The first step ― which usually lasted six months ― [is] where they “deconstruct us as a person.” Their tactics still haunt me. Aversion therapy, shock therapy, harassment and occasional physical abuse. Their goal was to get us to hate ourselves for being LGBTQ (most of us were gay, but the entire spectrum was represented), and they knew what they were doing…. The second step of the program, they “rebuilt us in their image.” They removed us of everything that made us a unique person, and instead made us a walking, talking, robot for Jesus. They retaught us everything we knew. How to eat, talk, walk, dress, believe, even breathe. We were no longer people at the end of the program.””” / “TC said that the conversion therapy sessions would take place every weekday, with shock therapy treatments lasting approximately an hour, and aversion therapy lasting three.”

            So come on. Let’s hear about some of those ‘hundreds of crank procedures’, and also let’s hear about how they’re happening by the tens of thousands.

    • Anonymous says:

      To some extent I think it’s worse with MDs because lack of evidence is shrouded in the imprimatur of natural sciences. That is, there’s a prevalent idea, often implicit, that “because Theory A invokes a physical system, and seems plausible, it’s reasonable to assume it’s true.” It’s a recipe for disaster. See: hydroxychloroquine, lower back pain surgery, etc.

      One of the reasons for the current status of therapy among practitioners is because the evidence-based practice paradigm is rife with problems itself. Replicability failures, conflating protocol with underlying mechanism, etc. For years “science” based programs have wrapped themselves in putatively rigorous research that ends up being much ado about nothing. Many practitioners recognize this; affording them some latitude as professionals is warranted.

      The problems with EBM extend everywhere into healthcare though. The larger problem is a paradigm of “any claim that appears in a peer-reviewed journal is fair game as a practice basis.” I know this because I’ve seen it firsthand.

      It’s not a crisis of psychology, it’s a crisis of academic-based healthcare and how much of the medical professional world (and academics itself) interprets scientific literature.

  3. steven t johnson says:

    “Just to be clear: No, I don’t know that astrology is a waste of time…”

Italicizing the word know may be the equivalent of scare quotes, but to me, it still suggests some peculiar standards for knowledge. I don’t know that “Andrew Gelman” isn’t a figment of my theory of mind. Nor do I know that the computer I’m reading it on isn’t a figment of my imagination (or possibly a divine or demonic illusion—curious how one can’t really decide which.) Nor for that matter do I know that “I” am an illusion, complete with wholly imagined memories. I’m afraid I’m inclined to think that any philosophy of science that forbids someone to say “There is no magic” is a failed philosophy of science. There seems to be a covert insistence that the word “know” can only mean something like “a necessary and universal logical deduction from incontestable first principles which cannot be refuted by verbal claims of an unsupported, possibly even unspecified alternative proposition.”

    I think if we minimally define “knowledge” as measurement* broadly defined (factual description with valid comparisons, contrasts and generalizations,) we immediately imply that measurement is not an individual mental process at all, but we must acknowledge the inevitability of errors, hence possible correction…but we do not accept “corrections” by hypotheticals. As in this case, the influence of stars when all the physics we know refutes the possibility. Broader notions of knowledge that start from here also rule out reincarnation, karma, the nonexistence of the self, solipsism, philosophical zombies, the soul, ESP and yes, the hard problem of consciousness or computer simulations of a universe. The collective aspect of science tends I think to solve Gettier problems.

    *A narrow definition of measurement can lead to limiting “science” to laboratory experiments. This doesn’t cause too many problems for the natural sciences but it does for the historical sciences, which overlaps both natural and social sciences. Worse, when coupled with a notion of correction of errors by hypotheticals instead of evidence, well, that gives us not just Michel Gauquelin’s statistical study of the Mars effect, parapsychology and so on, but the Jonathan Haidts.

    • Anoneuoid says:

      I’m not going to defend the specific details people come up with but I don’t really have a problem with any of these things as a general concept:

Astrology: Different positions of the planets have gravitational effects on the activity of the sun, which then affects cosmic rays, etc. that hit the earth’s atmosphere, which in turn affects the weather/climate/etc.

      Psychic: We are surrounded by invisible means of communication (wifi, cell phones, etc), why couldn’t there be a biological means of doing the same? In fact the lack of any known organism communicating that way makes me suspicious that something is being overlooked.

      Reincarnation/spirits: Maybe we live in a simulation that’s itself in a simulation, etc.

      I wouldn’t say I “believe in” all that but in general I tentatively accept ideas rather than believe them.

      I mean how many here believe the heart is a pump?

      https://www.ncbi.nlm.nih.gov/pubmed/23680376
      https://www.rsarchive.org/RelArtic/Marinelli/
      https://www.rudolfsteiner.org/fileadmin/user_upload/news/Furst-ptj4201033.pdf

      • steven t johnson says:

        “Astrology: Different positions of the planets have gravitational effects on the activity of the sun, which then affects cosmic rays, etc that hit earths atmosphere, which in turn affects the weather/climate/etc.”

        It is not the different positions of the planets against the backdrop of the constellations—which are imagined patterns, not actual groups of stars barring a handful of exceptions like the Pleiades—but the distribution of masses and their changing positions relative to *each other* which might have effects on the activity of the sun. The last I looked there was no discernible correlation between the changes in positions relative to each other to sunspots or cosmic ray activity. Even more, any such suggestion should largely reduce to discussing the effects of Jupiter, with others largely negligible. And yet more, the effects of cosmic rays have nothing to do with human personality, the fates of nations or the birth of Jesus Christ, all things attributed to astrology.

        I’m sorry but I think this is an excellent example of refutation by hypotheticals. Given that Andrew Gelman doesn’t believe in knowledge (and loves Popper,) I don’t know whether he accepts that laws of nature count…but I do. Therefore finding hypotheses about how “astrology” might somehow be connected to the incidence of cosmic rays may result in a refutable hypothesis. But in my view it isn’t science. Plausible hypotheses, coherent with known laws of nature and conforming to experience, are abundant enough to keep scientific research continuing. The fact that astrology has not sufficed to provide successful predictions (and some versions do,) or even provide discernible knowledge about personalities means no astrological experiments are science. Unless you prescriptively define “science” as a particular method, a la Popper. But this is wrong. In the worst cases this ends with people claiming “science” was invented in England sometime between Bacon and Newton.

        (To be as fair as possible, Popper would possibly allow that astrology’s inability to actually specify what any particular horoscope actually means makes it unscientific. Of course, if someone makes the same criticism of string theory, like Peter Woit, somehow agreement disappears.)

        • Anoneuoid says:

          The last I looked there was no discernible correlation between the changes in positions relative to each other to sunspots or cosmic ray activity.

Source? Also, afaict Andrew seems to follow Lakatos, not Popper.

          • Paul Hayes says:

            See e.g. here.

            More generally, I agree with Steven and think that what he says applies to other things that you seem to be excessively open-minded about too.

            • Anoneuoid says:

              Hmm a single isotope concentration from a single ice core dated to 320-340k years ago. They found the same thing as the paper they were debunking but used statistical significance to say it didn’t match up:

              For our Fourier spectrum, we find significant (>99%) peaks at four of the five periodicities (88, 104, 150, and 208 years) cited by Abreu et al. (2012) as being common to those found in their planetary torque model. For the cycle at 506 years, however, we find it is consistent with red noise. We also find peaks at 130 years (>99%) and 350 years (95%). Unfortunately, Abreu et al. do not show significance levels in their Fig. 5, so it is difficult for us to make a quantitative comparison.
              […]
In Fig. 3 we show the same procedures applied to our 10Be flux for the period 325–336 kyr. For the Fourier spectrum, we find only one highly significant (>99%) peak having the same periodicity (104 years) cited by Abreu et al. (2012). There is also a modestly significant peak (∼95%) at 150 years. The other frequencies cited by Abreu et al. (2012), at 88, 208, and 506 years, all have a significance of <95%, and appear consistent with red noise.

              To me it looks like they saw the same thing but their data was noisier since it was from 300k years ago instead of 10k and they used a single ice core instead of multiple ice cores and tree rings like used for the original study:
              https://ui.adsabs.harvard.edu/abs/2013EGUGA..1510070A/abstract
              https://www.pnas.org/content/109/16/5967

              • Paul Hayes says:

                Far from debunking that Abreu et al paper, they were looking for (and failing to find) an empirical consistency that one would expect to be able to find if the planetary influence hypothesis were correct. The Abreu paper had already been debunked – at least to some people’s satisfaction. Such is often the fate of claims of “dramatic advance in our basic understanding”.

                Then you say, “to me it looks like they…”, as if it didn’t look to them like they… too!

                And of course – as Steven pointed out – so what anyway? Confirmation of Abreu et al would’ve yielded a literally negligible increase in support for the claims of astrologers.

                “Refutation by hypotheticals” indeed. I once had a very similar discussion with Brian Josephson in the comments section of some medical journal, concerning some paper about homeopathy that had been published there. Although he would only accept that the “water memory” hypothesis per se was currently empirically unsupported, and that it was at least reasonable to be skeptical of his theory based arguments in its defence, he did concede the salient point: that either way, “water memory” would offer no support to the supposedly dependent “homeopathy hypothesis”.

              • Anoneuoid says:

                Far from debunking that Abreu et al paper, they were looking for (and failing to find) an empirical consistency that one would expect to be able to find if the planetary influence hypothesis were correct.

They did find it, look at the data and read it again. There were peaks near the predicted frequencies. They drew the wrong conclusion from their data because they think that being not statistically significantly different from their model of red noise proves something doesn’t exist.

                In science you compare the observations to the prediction, not an arbitrary null hypothesis.

                And it is actually surprising we are able to see that in one random ice core sample that has been influenced by all sorts of local peculiarities. Actually that makes me wonder about how exactly the 10-Be levels and dates for that core were arrived at, maybe they were somehow informed by the levels from elsewhere.

              • Anoneuoid says:

                The more I think about it, that “No evidence for planetary influence on solar activity” paper is one of the most egregious uses of NHST I have ever seen. And they even put that idiotic “no evidence” phrase right in the title.

                I’ll probably be using it as an example going forward.

              • Anoneuoid says:

                Also, I see this has already been published:

According Cauquoin et al. (2014) own figures, their 10Be concentration and flux records are quite noisy and likely effected by a low frequency component that they detrend with a filtering: see Fig. 1A. Only four secular frequencies were found at about 103 year (99 % confidence), 115 year (≈95 % confidence), 130 year (99 % confidence) and 150 year (≈95 % confidence). Note that Cauquoin et al. (2014) highlighted with dash black lines only the spectral peaks at the 99 % confidence level ignoring the other two spectral peaks that can be discerned about at the 95 % confidence level curve.

                […]

However, it is evident that Cauquoin et al. (2014) could not make any definitive conclusion about periodicities shorter than 100 years and larger than 150 years because according their own confidence level estimates their records do not have sufficient statistical power in those time scales, as evident in their own figures reproduced herein in Figs. 1B and 1C.

                […]

                In conclusion, both Cauquoin et al. (2014) and Steinhilber et al. (2012) 10Be and 14C solar proxy models present 4 common spectral peaks at about 103, 115, 130 and 150 years (confidence level ∼95 % or larger). Two of these frequencies appear to be predicted by Abreu et al. (2012) planetary model, although with some doubt (Poluianov and Usoskin 2014). However, all four frequencies are predicted by the three-frequency non-linear solar-planetary model proposed by Scafetta (2012c) that combines the major Jupiter and Saturn’s planetary decadal harmonics with the Schwabe 11-year solar cycle and further requires a nonlinear processing, whose existence is very reasonable given the fact that both solar and geophysical systems are nonlinear.

                https://link.springer.com/article/10.1007/s10509-014-2111-8

              • Paul Hayes says:

                They did find it, look at the data and read it again.

                They didn’t. I suggest that you read it [again]: at best you seem to have very badly misunderstood it.

First they ran their similar analysis on the same data as Abreu et al purely as a consistency check*. Hoping and expecting to find the same results! Why? So that the next thing they did** would be directly comparable. And they did indeed find the same results. But it’s that next thing they did that’s the meat of the paper: they ran their analysis on data from an older period to see if they’d find the Abreu et al periodicity matches there too. They didn’t.

                * “To confirm that our analysis procedure is consistent with that used by Abreu et al. (2012) […]”

                ** “the same procedures applied to our 10Be flux for the period 325–336 kyr. […]”

              • Anoneuoid says:

I quoted all that. First they reran it on the Holocene (10k to present) data based on 10-Be from multiple ice cores and 14-C from tree rings. Then they ran it on the 10-Be data from one segment of an ice core dated to 330k years ago. They saw the same peaks:

In Fig. 3 we show the same procedures applied to our 10Be flux for the period 325–336 kyr. For the Fourier spectrum, we find only one highly significant (>99%) peak having the same periodicity (104 years) cited by Abreu et al. (2012). There is also a modestly significant peak (∼95%) at 150 years. The other frequencies cited by Abreu et al. (2012), at 88, 208, and 506 years, all have a significance of <95%, and appear consistent with red noise.

                You can look at the chart and clearly see the predicted peaks. The statistical significance calculated based on the model of red noise is entirely irrelevant.

They should calculate it based on the Abreu 2012 model that predicts certain peaks should be there (but of course not that those must be the only peaks). I guess there were other models in the literature like Scafetta 2012 that made slightly different predictions for the peaks. Those predictions should be compared to the data too.
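For what it’s worth, “compare the observations to the prediction” is easy to illustrate on synthetic data (the periods and noise level below are invented stand-ins, not the real 10-Be series):

```python
import numpy as np

# Synthetic annual series containing two known periodicities plus noise.
rng = np.random.default_rng(0)
n = 2048                                    # "years" of annual data
t = np.arange(n)
predicted_periods = [104.0, 150.0]          # the model's predicted cycles
signal = sum(np.sin(2 * np.pi * t / p) for p in predicted_periods)
series = signal + 0.5 * rng.standard_normal(n)

# Periodogram of the demeaned series.
power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)           # cycles per year

# Check for a strong local peak near each PREDICTED frequency, rather than
# testing every peak against a generic red-noise null.
for p in predicted_periods:
    idx = np.argmin(np.abs(freqs - 1.0 / p))
    local_peak = power[max(idx - 3, 0): idx + 4].max()
    print(p, local_peak > 20 * np.median(power))
```

The test asked of the data is “is there power where the model said there would be?”, which is a different question from “does each peak clear an arbitrary red-noise significance threshold?”.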

              • Paul Hayes says:

                Also, I see this has already been published:

                And I’m sorry but I’m not going to deign to read it – or anything else you manage to google off the bottom of the barrel.

              • Terry says:

                Anoneuoid:

                I ran across some French and US data that seems to show CV infection rates are much lower among smokers:

                https://twitter.com/KlausKblog/status/1249048902495592450

                French data is at the top, scroll down a little to see US data.

              • Anoneuoid says:

                And I’m sorry but I’m not going to deign to read it – or anything else you manage to google off the bottom of the barrel.

I checked a couple of the papers listed as citing the one you shared; that was the only one that did more than a passing mention.

                But anyway, it doesn’t seem like you are reading my posts or even the paper you posted either!

              • Paul Hayes says:

                They should calculate it based on the Abreu 2012 model that predicts certain peaks should be there (but of course not that those must be the only peaks). I guess there were other models in the literature like Scafeta 2012 that made slightly different predictions for the peaks. Those predictions should be compared to the data too.

                Sure. And they shouldn’t forget to accompany those calculations with the revision of classical mechanics that would yield those peaks as genuine phenomena (rather than aliasing artifacts). It would be foolish to claim only one major advance in our understanding when two are up for grabs.

              • Anoneuoid says:

                Sure. And they shouldn’t forget to accompany those calculations with the revision of classical mechanics that would yield those peaks as genuine phenomena (rather than aliasing artifacts). It would be foolish to claim only one major advance in our understanding when two are up for grabs.

                Why are these artifacts exactly where the model (which was devised beforehand) predicted?

              • Paul Hayes says:

                What do you mean? The artifacts are entailed by the [classical mechanical solar system] model plus that particular wrong sampling frequency. Do you mean why do they happen to sort of match peaks found in some ice core etc. data? Who knows, who cares?

              • Anoneuoid says:

                The artifacts are entailed by the [classical mechanical solar system] model plus that particular wrong sampling frequency.

                This is difficult to make sense of.

                1) Are you saying artifacts in analysis of ice core data logically follow from a model of the solar system?
                2) Are you saying that analyzing any random data in that fashion results in the same peaks as predicted by the solar system model?

              • Paul Hayes says:

                This is difficult to make sense of. […]

                What on earth are you talking about? It couldn’t be easier. The set of predicted peaks in planetary torques comes from the planetary motion model. The ice core data, with its set of [sort of matching] peaks, is a record of solar activity. The former, supposedly, provides a causal explanation of the latter. Wrongly supposedly because those predicted torque peaks are illusions (aliasing artifacts) – as that Poluianov and Usoskin paper points out.
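The aliasing mechanism is easy to demonstrate with a toy series (all numbers invented): a 1.1-year oscillation sampled once a year is indistinguishable from an 11-year cycle, because 10/11 cycles per year folds to 1/11 cycles per year.

```python
import numpy as np

# A fast oscillation (period 1.1 years) sampled annually: at integer t,
# sin(2*pi*t/1.1) = sin(2*pi*t*10/11) = -sin(2*pi*t/11), an 11-year wave.
n = 550                                  # years of annual samples
t = np.arange(n)
fast = np.sin(2 * np.pi * t / 1.1)       # true period: 1.1 years

power = np.abs(np.fft.rfft(fast)) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)        # cycles per year
apparent_period = 1.0 / freqs[np.argmax(power[1:]) + 1]
print(apparent_period)                   # ~11 years: a pure sampling artifact
```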

                Anyway, I’ve had my fill of this nonsense now. Happy lockdown!

              • Anoneuoid says:

                What on earth are you talking about? It couldn’t be easier. The set of predicted peaks in planetary torques comes from the planetary motion model. The ice core data, with its set of [sort of matching] peaks, is a record of solar activity. The former, supposedly, provides a causal explanation of the latter. Wrongly supposedly because those predicted torque peaks are illusions (aliasing artifacts) – as that Poluianov and Usoskin paper points out.

                Anyway, I’ve had my fill of this nonsense now. Happy lockdown!

                Ok, well I’ll stick with paying attention to how well models predict the results of new data. You keep on doing whatever it is you do. I could imagine a case where there was something odd going on where the method of processing the data automatically generated artifacts that matched up with the model, but neither your posts nor the paper you provided demonstrate this. Now you are citing a new paper…

              • Anoneuoid says:

                Yeah, this is what I thought. The original Abreu et al. (2012) paper compares years to years:

                All records are interpolated to one year using a cubic spline (10Be, 14C, φ) or averaged over one year (torque)

                https://ui.adsabs.harvard.edu/abs/2012A%26A...548A..88A/abstract

                The Poluianov and Usoskin paper strangely reports that averaging over different time intervals changes the spectrum (ok, obviously), but is then silent on the obvious question of whether it changes the ice core spectrum in the same way…

                I mean I never heard of this Abreu 2012 paper before this thread, but so far that makes two oddly unconvincing debunkings.
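                The aliasing mechanism at issue here can be illustrated with a toy simulation (a generic sketch of aliasing, not a reproduction of either paper's analysis): a 1.1-year oscillation sampled once per year folds down into the spectrum as a spurious ~11-year cycle, since the alias frequency is |1/1.1 − 1| ≈ 0.09 cycles per year.

```python
import numpy as np

# True signal: period 1.1 years -- faster than once-per-year sampling
# can resolve (the Nyquist limit is a 2-year period).
true_period = 1.1
t = np.arange(0, 512)                     # one sample per year, 512 years
x = np.sin(2 * np.pi * t / true_period)

# Spectrum of the yearly-sampled series
freqs = np.fft.rfftfreq(len(t), d=1.0)    # cycles per year
power = np.abs(np.fft.rfft(x)) ** 2
f_peak = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin

apparent_period = 1.0 / f_peak            # roughly 11 years, not 1.1
```

                The spectral peak lands near 0.09 cycles/year, i.e. an ~11-year apparent period, even though nothing in the underlying signal varies on that timescale.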

        • Terry says:

          Astrology is a social construct, and astrology isn’t real.

          But social constructs that aren’t real have enormous effects on all of our lives. Therefore astrology is incredibly important. Plus, my lived experience proves that astrology is true.

          Checkmate.

          • steven t johnson says:

            Nesting confuses me, so if this isn’t a response to my comment, my apologies. I suspect irony, so I apologize twice for not seeing the obvious.

            Astrology is a subset of religion, replete with the names of gods, which is also a social construct and is important in the same way… but I have no idea how statistical testing of astrological predictions/personality analyses is any more science than statistical testing of the efficacy of prayer. Those studies are entirely misconceived as scientific study of religion. There is no room in our knowledge of the universe thus far for imagined objects in the sky (constellations) to exert an influence on the effects of other solar bodies on the people of the earth. And there is no known way, even now, for those bodies themselves to exert any but the most negligible effects. Planets had an enormous importance in the formation of the solar system. But none of that is what astrology is about.

            On the second point, your lived experience is a Gettier problem: you may have been deceived by coincidence. This is especially true since the statistics of coincidences in daily life* are, so far as I know, practically a virgin field. In science as I understand it—though I confess this may not be the position of philosophers of science—personal experience becomes scientific evidence when others share the experience, and here others do not. Measurement, which broadly considered includes statistical analysis, is the best way, which is why an essential part of science is figuring out how to measure. Measuring means standards of measurement, which are not an individual thing.

            *Consider how many people like to talk about the suspicious deaths in connection with, say, the Kennedy assassination…but no one knows what the base rate of mysterious deaths is, or even what “mysterious” means in this context. Everyone who doesn’t die in a hospital of a diagnosed illness? Prosecutors’ fallacy.
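            The footnote's point about base rates can be made concrete with a back-of-the-envelope calculation (all numbers invented purely for illustration): even a modest pool of people loosely connected to an event yields many deaths by chance alone.

```python
# Hypothetical numbers, purely illustrative: how many deaths should we
# *expect* among people loosely linked to a famous event?
n_people = 1000           # witnesses, officials, acquaintances, ...
annual_death_rate = 0.01  # rough all-cause rate for a mixed-age cohort
years = 10

expected_deaths = n_people * annual_death_rate * years
# On the order of 100 deaths expected by chance alone; with no definition
# of "mysterious" and no base rate, any count of them is uninterpretable.
```

            With roughly a hundred expected deaths per decade in such a pool, a list of a few dozen "suspicious" ones is exactly what chance predicts.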

            • Terry says:

              For the record, yes my post was meant as a reply to yours. Yes it is ironic. But no, I didn’t mean any particular criticism of what you said. I was just taking the opportunity to be generally obnoxious.

              • steven t johnson says:

                You tricked me into thinking…but then I wrote a response, so I suppose we’ve tied in obnoxiousness. (Obnoxity should be a word, shouldn’t it?)

  4. Anonymous says:

    “I had the vague sense of mainline Protestants as being the sensible people, but they have the same rate of believe in astrology as the general population.”

    By that measure Evangelicals come in at a very respectable second place. But the atheist numbers surprise me the most. To me atheism is lack of belief in supernatural things, so why do 22% of atheists still believe in supernatural things? And the “nothing in particulars” are the worst offenders when it comes to supernatural beliefs.

    To me, the evidence here suggests that if you have either strong monotheistic beliefs (evangelicals) or strong non-belief (atheists), then you’re much less likely to believe in other supernatural stuff.

    • Peter Erwin says:

      Evangelicals have a tendency to be suspicious of anything “spiritual” or “supernatural” that isn’t directly part of (their interpretation of) Christianity, since it might be “pagan” and thus possibly Satanic in origin or influence. So the somewhat lower rate among Evangelicals isn’t at all surprising (and doesn’t necessarily reflect their being especially “sensible”.)

  5. Sam says:

    Isn’t “let a thousand flowers bloom” a bit inappropriate, given what Mao did to the flowers after he’d let them bloom?

  6. Dominik Luks says:

    I think you have this exactly right on the social pseudo scientists and exactly wrong on the astrology. There are at least two problems:

    1. The key quote from the article for understanding this phenomenon is: “In its penetration into our shared lexicon, astrology is a little like psychoanalysis once was.” It is important not just to look at what people answer on a binary question (or a Likert scale – the Pew report doesn’t say which) but to look very carefully at how they use this data and the inferences they make. You need an ethnographic approach. For a long time I’ve been informally keeping an eye on how astrology (mostly star signs) and spirituality are used by the people I observe. All it is used for is to make sense of relationships that are observed. Very often this is simply a way to structure conversations and label attitudes. It is just as common to hear people say ‘she’s a typical Virgo’ as ‘she’s different from all the other Virgos I know’ or even ‘in this way, she’s not a Virgo’.

    In the same way, it is interesting to listen to how people who go get a reading interpret the results. Again, this is very often just a way to structure thinking about events that took place rather than straightforward decision-making. But even where it is used to make decisions, it is mostly in cases of uncertainty where a coin flip would be just as useful. (See points made about ‘auguries’.)

    2. The quote “the idea at the heart of astrology is that the pattern of a person’s life—or character, or nature—corresponds to the planetary pattern at the moment of his birth” is exactly the wrong way to look at this. The article gets this right: “Despite common misconceptions, an astrologer is not a fortune-teller. In a chart reading, she doesn’t predict the future; she describes the client to herself.”

    What this means is that there are two ways to interpret the Pew survey. Either people believe something about determinism and will structure their lives around it, or people will say ‘there’s possibly something to this’ when asked on a survey. The survey report does not distinguish the two. But I’d wager the latter is the majority.

    This is also why we should be much harsher on the Wansinks and Dwecks of this world than on people who use astrology. That’s because Dweck actually believes she’s describing true facts, while most people who talk about star signs approach them much more skeptically (similar to how magic is actually treated in societies that practice witchcraft).

    I made this argument some years ago in this blog post on how we tolerate actually harmful garbage from people with prestige and sneer at innocuous beliefs by those without: http://metaphorhacker.net/2013/09/storms-in-all-teacups-the-power-and-inequality-in-the-battle-for-science-universality/. It is also important to note that this is often gendered (note the huge gap between men and women in ‘belief’ in star signs).

  7. Bob76 says:

    This reminds me of an incident that occurred many years ago. My uncle had hired a dowser to find a good location for a well on his property. My aunt criticized him for this choice. He replied, “Well, he has a real good track record. I think he uses surface geology and vegetation as clues. The ‘dowsing’ part is just so that he doesn’t teach others his knowledge.” Saying ‘It’s magic’ could be a good way to protect intellectual property.

    Intelligent people believe in all kinds of non-scientific “facts” and base important decisions on those facts. I had a very intelligent friend who turned to alternative medicine when he was feeling unwell. When that failed, he tried western medicine; they diagnosed an advanced cancer—but it was too late then.

    Bob76

    • Anoneuoid says:

      Why do you accept the “too late” explanation for the failure of “western medicine” but not “alternative medicine”? I use the scare quotes because I think both categories include a mixture of good and bad.

      • Bob76 says:

        I accept the “too late” theory because he had prostate cancer. The 15-year survival rate (with “western medicine”) for prostate cancer that is detected early is in the 90+% range. The 5-year survival rate for patients with metastasized prostate cancer is around 30%. Delaying treatment of symptomatic prostate cancer in a 50-year-old male is thought to be a real bad idea. I believe he did not visit a practitioner of “western medicine” until he was quite ill.

        I don’t think incense and acupressure do much for prostate cancer. On the other hand, surgery, radiation, and hormone-suppressing-drugs do wonders.

        As I say, he was a smart guy—but he had become part of a New Age community and went along with the group’s beliefs.

        Of course, I agree that both categories of medicine include good and bad. Meditation good; lobotomy bad. But I believe that, on average, for a wide variety of common disorders, western medicine is clearly vastly superior.

        Bob76
        PS. I loved him like a brother and I miss him so very much two decades later—that probably colors my analysis.

        Bob76

        • Anoneuoid says:

          The 15-year survival rate (with “western medicine”) for prostate cancer that is detected early is in the 90+% range.

          I think you meant 5-year, but the crucial info we are missing is the x-year survival in people who don’t get treated. And of course the years survived since diagnosis depend on how much screening is done, etc.

          I don’t think incense and acupressure do much for prostate cancer.

          I’ve never looked into it, and I’m not really interested in doing so.

          Of course, I agree that both categories of medicine include good and bad. Meditation good; lobotomy bad. But I believe that, on average, for a wide variety of common disorders, western medicine is clearly vastly superior.

          Then again I met someone who was told to get a colonoscopy as part of a routine checkup and then later got told that damage during the colonoscopy gave them colon cancer.

  8. jim says:

    What we should take away from all of this is that, when a prediction or treatment is rigorously tested and succeeds time and time again, we keep using it, usually to good ends. Things that aren’t rigorously tested or continually fail rigorous testing (astrology, miracle prediction) usually don’t work out too well.

  9. Terry says:

    I wonder if anyone here believes in astrology? But would anyone here be brave enough to say so if they did?

  10. oncodoc says:

    People who are Aquarius should take long nature walks today according to one horoscope. Dr. Gelman, this means you!

  11. Jonathan (another one) says:

    In The Secret of Our Success, Joseph Henrich argues that belief in random signs is an adaptive trait that allows us to randomize at an individual level when strategic randomization needs to be done. Paying attention to the random predictions of astrology is just a benign side effect.

    • Andrew says:

      Jonathan:

      Yes, that’s related to my conjecture that maybe Wansink etc. are doing some good, and that p-values and the other tools of cargo-cult science are useful tools for Wansink, Fiske, etc., as these tools give these researchers the license to follow their intuitions.

    • steven t johnson says:

      Given that I see zero reason to think astrologers are any more successful at randomizing than any other person versus much reason to think astrologers aim to please; that I see no reason to think that people are so uniformly successful at resorting to strategic randomization when they need to but not at other times; that I see no reason to believe the differential reproduction of men and women who resort to random signs at just the right time is so high that the positive selection would turn this into human nature; that such a proclivity to be random is even easily distinguishable from the proclivity of children to believe what their parents and peers tell them, which does strike me as at least an alternative hypothesis that should be eliminated… Well, this all strikes me as nonsense, made-up ideas, mere hypotheticals. The first question, “Is there really a problem to solve?” is important, but the next question, “Is there an alternative hypothesis?” is vital.

      I find it much more likely that people who think astrology is useful or works are confusing their comfort in making a decision they wanted to make already but were uncertain about with actual success. To even start to argue astrology “works,” you first need to demonstrate that the beneficial effects of actually making any decision, even one validated by a Noble Lie of astrology, are greater than the effects of dithering. This is why just coming up with a refutable hypothesis isn’t enough to make it science. Astrology is nonsense because the constellations are merely imagined. They aren’t even things.

  12. Z says:

    One time when I was attending JSM, I told someone else staying at my airbnb that I was a statistician and he excitedly said “oh, I’m an astrologer!” as if we were in related fields.

  13. paul alper says:

    Andrew wrote:

    ” I had the vague sense of mainline Protestants as being the sensible people, but they have the same rate of believe (sic) in astrology as the general population.”

    In heaven’s name, why would one believe that mainline Protestants hold more sensible views than other monotheists? And, is it an accident that Andrew chose this topic to appear the day between Good Friday and Easter Sunday?

    • Andrew says:

      Paul:

      I can answer your second question. It is an accident. I wrote this post many months ago and just stuck it in the next available slot. I had no idea what day that was on any religious calendar.

      And, yeah, sure, religions can be pretty silly. Supernatural religious beliefs, or modern-day equivalents such as the Loch Ness monster, seem different to me than astrology, embodied cognition, ESP, etc., which are claimed to be supported by science rather than just impassioned personal testimony.

      • Anoneuoid says:

        The Loch Ness monster exists, but isn’t what you think. It’s a woman in town who picks up younger tourist guys at the bar and brings them home. Then afterwards she wants them to leave, so she wakes up her son, who gets really pissed off and chases them out of the house with a gun or sword or something (or maybe he was just really big, I don’t remember). When I was there, there were newspaper articles pinned to bulletin boards about it to warn travelers.

        • Martha (Smith) says:

          Thanks for another good laugh for a rainy day.

          • Anoneuoid says:

            Glad someone appreciated it. The other thing I remember about Loch Ness was the midges, there were giant swarms of those.

            • Martha (Smith) says:

              I don’t recall many midges at Loch Ness, but do remember lots at Sligachan on the Isle of Skye.

              • Anoneuoid says:

                I’d never experienced them before and we walked like 5 miles to get to the loch through them.

              • jim says:

                Anoneuoid says: “I remember…the midges at Loch Ness”

                You’ll have to check out the blackflies in Ontario or the skeeters in Nunavut some time for comparative purposes.

                Once I broke my chain while mountain biking in the Gatineau Mts on the Quebec-Ontario border. I tried to fix it with the chain tool, but the blackflies were so bad that after a few minutes I just picked up my bike and ran the last couple of miles to the end of the trail and jumped in my car.

              • Martha (Smith) says:

                jim said,
                “You’ll have to check out the blackflies in Ontario …”

                I’d forgotten about them! I remember when I was in my teens spending a week with a friend’s family at their cabin on Lake Matinenda, near Blind River, Ontario. The blackflies were thick. I remember that they tended to bite right along the hairline or hat rim.

        • Now that seems like you had a dream about some sultry woman on the prowl. LOL Just kidding.

  14. Renzo Alves says:

    The word “evidence” means different things to different people, and has changed over time. Recommended reading: The Emergence of Probability (2nd ed.) by Ian Hacking, Cambridge UP, 2006.

  15. David P says:

    Interesting. There’s no evidence that Jews or Muslims believe in astrology.

    • David J. Littleboy says:

      Don’t forget the Buddhists! (I have no idea whether or not Buddhists do astrology.)

      FWIW, the Japanese largely don’t do astrology, but they love doing astrological-type things with blood type. Friggin inane. Drives me nuts. But they really seem to think that there’s a correlation between blood type and personality. (The Japanese also largely don’t do religion, as we understand “religion,” either. Having spent half my life living in a country with no religion and universal (and affordable) health care, I sometimes forget what a tough life my compatriots have.)

    • Peter Erwin says:

      There’s no evidence that Jews or Muslims believe in astrology.

      Of course there is. (I love the attempt to suggest connections between the 12 tribes of Israel and the 12 astrological signs, because why not?)

      Belief in and practice of astrology has a very long history in Islamic societies, along with quasi-scientific debates about its practicality and religious condemnations of it. The medieval European understanding of astrology was based in part on translations of Arabic works about astrology. Modern Islam generally still condemns astrology, but that hardly prevents there being, for example, astrology columns in Turkish newspapers.

      Astrology is also quite popular in Buddhist-majority Sri Lanka.

      There are, of course, distinct Hindu and traditional Chinese versions of astrology as well; some Indian universities offer degrees in astrology.

  16. Dale Lehman says:

    This has been bothering me since this was posted: it seems analogous to issues regarding NHST. Were people asked “do you believe in astrology, yes or no”? Was it yes, no, I don’t know? An ordinal scale of degree of belief (which is then recategorized into belief/no belief)? Unusually for Pew Research, I have not been able to find any documentation of what people were actually asked. The same survey apparently asked about belief in God, and the survey questions for that were provided (there were several, asking about specific beliefs about God, how they relate to the way people live their lives, and how people interpret God’s presence in their lives). Strangely, though, I can find nothing about the particular questions shown in the table. I imagine people were given some option of “not sure,” but then the reported percentages should say whether they are out of total responses or out of those expressing an opinion one way or the other. And here is the relation I see to NHST: is the survey measuring dichotomous thinking, the bane of NHST as I see it?

  17. jim says:

    A really important point that hasn’t been brought up here:

    In statistics, what constitutes “evidence” for a given claim? I ask because it seems like erroneous claims are made all the time on the basis of statistical data. Is such an erroneous claim made “without evidence”?

    In our morning paper, I discover that “Latinos” die from COVID more frequently in our county than their proportion of the population. What is this “evidence” for? Is it “evidence” that Latinos experience racial discrimination? Is it “evidence” for racially-based inequality?

    It also implies that whites are less likely to die from COVID than their proportion of the population would suggest. Does this mean that whites are discriminating against other groups? Does it mean they retain some social privilege that others don’t have access to?

    No! The mere recognition of non-race-proportional occurrence of some phenomenon in a population isn’t evidence for anything, because the comparison groups aren’t equivalent! For example, in our county, Indian immigrants tend to be well educated, while “Latino” immigrants tend to be poorly educated. Hence, Indian immigrants are programmers while Latino immigrants are lawn mowers. Evidence for discrimination?

    So I guess my position is that, when a columnist says that “The impact of COVID 19 on Latinos reveals discrimination in our country”, I conclude that this is a claim without evidence.
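    The non-equivalent-groups point can be shown with a toy calculation (all risks and counts hypothetical, invented for illustration): identical risk within each occupation still produces very different group-level death rates when the groups have different occupational mixes.

```python
# Hypothetical numbers: death risk depends only on occupation
# (front-line vs. remote work), not on group membership itself.
risk = {"front_line": 0.02, "remote": 0.005}

groups = {
    "A": {"front_line": 800, "remote": 200},  # mostly front-line jobs
    "B": {"front_line": 200, "remote": 800},  # mostly remote jobs
}

# Overall death rate per group = weighted average of occupational risks
rates = {}
for name, counts in groups.items():
    deaths = sum(counts[occ] * risk[occ] for occ in counts)
    rates[name] = deaths / sum(counts.values())

# Group A's overall rate (1.7%) is more than double group B's (0.8%),
# with no within-occupation difference at all.
```

    The disparity here is real, but it is evidence about the occupational mix, not about any difference in how individuals in the two groups are treated within an occupation.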

    • Chris Blanchard says:

      That relates to a problem I have with a lot of atheist critics of religion. I am an atheist myself, but I disagree fiercely with the likes of Richard Dawkins when they say there is no evidence for religion. There is evidence (and this relates to the discussion about evidence-based medicine): all those visions, dreams, direct revelations, miracles, and prayers answered are definitely evidence, so the argument has to be about its quality, and I don’t see that any of it is good enough. There are two starting points. The easy first is that where contradictory claims are supported by similar kinds of evidence (Christians talking to saints and Hindus talking to gods, for example), then one or both of them must be wrong – that requires a big philosophical premise, that the universe is consistent, but I will go with it. The second, and more difficult, is that bits of knowledge aren’t isolated, so the context for religious claims includes stuff like the conservation of energy, heliocentrism, and the rest of modern, certain knowledge, and both big sets of knowledge can’t work together. That is more difficult because some honest and capable people do blend the two areas (I don’t mean flat earthers and Intelligent Designers – they are just silly), so I am left with respectful disagreement.

      • Andrew says:

        Chris:

        Yes, I’m reminded of professional skeptic/author/buffoon Michael Shermer, who wrote of an event that “shook [his] skepticism to its core”—it was a story about an old radio that didn’t work, then briefly started to work again, then stopped working. We’re supposed to be nice to Shermer because he’s well connected—his book has blurbs from media-friendly academics like Steven Pinker and Amy Chua—but, jeez, if all it takes to shake your skepticism is a loose connection in an old radio, it makes me wonder how much of a skeptic you really are.

        In contrast, I give Dawkins credit for the depth of his convictions. You might fault his reasoning, but at least he doesn’t apply it selectively.

      • Kyle C says:

        I’m an atheist too, but the argument from “no evidence for God” is essentially like the argument there is “no evidence other people have minds like mine,” i.e., you’re all golems or robots. We reject the latter argument because … we just do. Otherwise our experience of the world would make no sense. I am forced to admit that other people have the same kind of experience of a deity. They observe the world, and there God is. God just is. Ultimately there’s no arguing. (Historical accounts of Acts of a Savior may be on a different footing.)

        • steven t johnson says:

          Science is inherently collective. Personal intuitions may prompt individual investigations, and inspire resistance to unwelcome conclusions. And testimony about these intuitions may be subjects of study. But in the sense meant, no, saying the world is run by an omnipotent, omniscient and omnipresent God who is dedicated to saving our souls from the consequences of the first two humans disobeying him really is not like thinking other people have minds. It’s hard to reconcile this supposed reality with the world we actually experience. But a universe without such a God does fit our experience. By contrast, it’s other people *not* having minds that makes the philosophical zombies hard to explain, as in how, and even more, why. Like simulations of the universe the zombies are hypotheticals, not evidence.

          The thing is, individuals are not generally called upon to give a coherent model of the universe. They are not even expected to spell out all the fundamental premises of their model of the universe. It would be unreasonable to require such mental labors of Hercules from everyone. We should generally instead rely on authority. The problem is, no matter how comfortable people are with imagining religion to have authority (which I think is what their “experience” in making sense of the world comes down to), there is no authority to religion. A prime reason for this is that it is not a collective activity but a matter of private feelings.

      • steven t johnson says:

        jim’s comment answers the problem it raises, about how one rates the conclusiveness or probative value—the evidence-ness, so to speak—of statistical correlations. Basically, you compare alternative hypotheses, each of which becomes a null hypothesis. Deciding that the only null hypothesis is chance is simply wrong. It’s true that chance as the null hypothesis may serve to falsify a hypothesis, but falsificationism overall simply isn’t the essence of science. That’s why experiments or statistical studies testing an astrological hypothesis aren’t science, despite being falsificationist. I don’t see how this is truly related to the issue raised in your response.

        Also, I have to say that I sort of disagree with how you formulate your first (easy) objection. I don’t think it is a big premise that the universe is consistent. There is no consistent scientific model of the universe, not even in fundamental physics, which currently uses two mathematically inconsistent theories. There’s a big “but” though. The notion of a consistent universe is a generalization drawn from our experience, bolstered by centuries of failure to give any coherent sketch of how an inconsistent universe could produce such experience. Miracles are such an alleged inconsistency, entirely incompatible with daily life. Which is, by the way, why there is never any solid evidence of a rupture of the universe’s consistency. Except of course by the answered prayers scam, where all prayers are answered, with “NO” being the commonest answer. The experience of a consistent universe is probably the best refutation of radical epistemological skepticism—or simply, skepticism—especially varieties like solipsism or “maya.”

        And your “more difficult” point that religious claims and what we do know of the universe are inconsistent is refuted by the inconsistency of the scientific models of the universe. There is no methodological barrier to seeing how the religious model and the scientific model “fit” together. There isn’t any a priori reason to exclude one inconsistent model rather than the other. Relying on a priori principles we can prefer religion over science if we want, or we can include both, however inconsistent they are with each other. The thing is, one, “science,” is composed of facts about the universe, collectively confirmed by a wide variety of methods that in principle anyone can do, and the explanations of how these things work or change (aka theories), relying on long-tested generalizations drawn from numerous cultures and numerous periods of time. Notably, the notion that the universe is ultimately consistent and we can therefore infer “Causes produce effects,” what we loosely call “laws of nature.” Part of the problem is accepting that “religion” isn’t even a thing. Sociologically, religion is to medicine as astrology is to astronomy. But “religion” may be a state sponsored ritual, or a system for redistributing meat sacrifices, or a land-owning organization centered on a temple, or a more or less non-breeding group of people in a group home (who may collect rents or clear forests or keep an inn for travelers), or a Sunday social club or a national patriotic observance or a legal system regulating marriage and other sexual behavior.

        Or it may serve as a symbol of morality, conveniently left undefined. God as a personification of good is then by definition unobjectionable. Denying God then is as foolish as denouncing literature for being good. Or maybe like denouncing backgammon for having rules.

        The difficulty with Dawkins seems to be with the notion that true means a correct factual statement about what really is, how things really work, the way changes occur. Dawkins limits evidence to whatever serves to decide such questions. In that sense, there isn’t any evidence for “religion.” After thousands of years, this is not a bold claim. But it’s still difficult to accept.

  18. John Mashey says:

    Those who write serious articles about astrology, reincarnation, ESP, etc have a fine outlet in the Journal of Scientific Exploration:
    https://web.archive.org/web/20131120092014/http://www.scientificexploration.org/journal/articles.html

    Astrology & parapsychology coexist comfortably in that venue.

    It has 5 articles matching “astrolog”

    My favorite article is:
    An Empirical Study of Some Astrological Factors in
    Relation to Dog Behaviour Differences by Statistical
    Analysis and Compared with Human Characteristics
    http://web.archive.org/web/20150424120723/http://www.scientificexploration.org/journal/jse_21_2_braesch.pdf

    As per https://en.wikipedia.org/wiki/Society_for_Scientific_Exploration (SSE)
    it’s currently edited by a parapsychologist/philosopher.

    It had 18 articles by Robert Jahn, of Princeton Engineering Anomalies Research (PEAR) Laboratory
    https://en.wikipedia.org/wiki/Princeton_Engineering_Anomalies_Research_Lab
    Jahn was VP of SSE until his death in 2017.
