
Which experts should we trust?

In a comment on our post, “Expert writes op-ed in NYT recommending that we trust the experts,” commenter DCE writes:

Perhaps this post can have a follow-up on “How do I choose which experts to believe?” While broadly, Pigliucci’s “Nonsense on Stilts” offers some good discussion, there is the real issue of ulterior motives in crafting messages. . . . How to pick your experts is a thorny meta-research issue.

Relatedly, Paul Alper pointed us to this New York Times article entitled, “Worried About That New Medical Study? Read This First. There’s more than meets the eye — here are some tips to help avoid confusion,” in which a physician and medical journalist, with training in biostatistics and epidemiology, offers the following false statement:

When a study or a journalistic publication says that a study’s finding was “statistically significant,” it means that the results were unlikely to have happened by chance.

And also gives some advice which seems questionable to me:

When it comes to study design, not all are created equal. In medicine, randomized clinical trials and systematic reviews are kings. . . .


Not all journals are created equal. . . . A good way to spot a high quality journal is to look for one with a high impact factor . . .

That NYT article also offers some more unambiguously good advice, though, such as, “take each study for what it is: information. Over time, it will become clearer whether one conclusion was important enough to change clinical recommendations. . . . One study isn’t likely to shift an entire course of medical practice.”

The larger question

The larger question is, if we can’t trust the experts, who can we trust? Or, if we can’t trust anyone, what can replace “trust” in our reasoning? No easy answer here. We’ve already discussed problems with putting trust in your friends. I think the only solution is to think of post-publication review as a way of life, including reputational incentives in some way.

Should you trust what I write? I don’t know. I try to make my reasoning clear (the “trail of breadcrumbs” linking data, substantive theories, statistical models, and conclusions) to make it easier for you to judge for yourself.


  1. Dean says:

    This might be more relevant now than ever.

    A heuristic that is reasonably good is to only *consider* trusting those that have both a lot to lose if their statements are false and a lot to gain if their statements are true. But this alone won’t do.

    And on the question of whether to trust what you (Andrew) write: One of the reasons why this blog is so great is because of how self-critical and questioning you are. If someone openly corrects/revises/questions their historical statements, then you can be quite confident that the statements that they *haven’t* revised are probably still reliable (or as reliable as the person is capable of being)! ‘Experts’ who rarely admit errors or welcome scrutiny of their views are very difficult to trust.

  2. I think this is such a conundrum for consumers [like myself] of medicine, statistics, and education more generally. I was alerted to the sociology of expertise through exposure to Frank von Hippel's work on nuclear safety and non-proliferation. And I eagerly embraced the caveats he shares in his excellent book Citizen Scientist. I leaf through that book every few days to be reminded that conflicts of interest and perverse incentives can drive science [both poor and good science]. Most of us want to have faith in science, as our well-being depends on efficacious and safe medical treatments, for example.

    I actually think that the question of 'what is science' is even more germane today. That is being borne out in the controversies over clinical trials and observational methods that have been ongoing since the nineties, and more recently within the statistics community.

    The public should be able to evaluate expert judgment as well. Here and there, some independent experts have had some success in cultivating understandable knowledge for the public. But we are overloaded with so much information as it is.

    On Twitter, I try my best to Retweet the viewpoints that are cogent.

  3. Jag Bhalla says:

    A couple of short posts on this issue that may help a little in pondering this "no easy answer" big issue:

    If No Brain Is Free Of Bias, What Can We Trust?
    Comparing economics and other fields and their “bias-balancing processes”

    How To Avoid Your Own Brain’s Biases
    “basic cognitive geometry applies: Unless what you’re pondering is small or well understood, multiple vantage points are advantageous”

    • My observation has been that knowledge of cognitive biases and logic works well in some contexts. But the incentives are such that this knowledge is hijacked in ways that work against the best scientific inquiries. Despite efforts to debunk the worst assumptions, flawed scholarship is still cited in support of dubious conclusions. It takes persistence and courage to remedy this environment. But I think we might be at a tipping point where approaching science as an entropy-averting domain is seen as a necessity. Here I think the laundering of uncertainty in legal cases has been a development that has greatly hindered the prospects for averting entropic decision-making.

  4. Dale Lehman says:

    Who can we trust? For those that have read much psychology, we know we can’t trust our own perceptions (cognitive biases ruin these). For those that have read much of this blog, we know we can’t trust pedigrees, citations, status (e.g., whether you teach at Harvard). For those that have any education or any experience living, we know we can’t trust politicians (and, in the end, who isn’t a politician). For those that have read much of the work on media (and social media, in particular), we know we can’t trust our circle of social contacts. I’d call that a crisis.

    On the positive side, perhaps it is an improvement that we don’t trust these things/people that we used to naively trust in the past (e.g., the good old 1950s). So, this may just be a new awareness of a problem that has always been with us. But the difference is that things are much more complicated now, due to the things that we now “know,” and the scale and pace of the impacts of decisions seems orders of magnitude greater than they used to be. I can’t help it: I see the world as a glass one-third full.

    • Martha (Smith) says:

      “For those that have read much psychology, we know we can’t trust our own perceptions (cognitive biases ruin these).”

      Well, can we trust the perceptions of the people who did the research and wrote the papers concluding that “we know we can’t trust our own perceptions (cognitive biases ruin these)”? Maybe we sometimes can and sometimes can’t trust our own perceptions. (I hate to say it: More research is needed! But it needs to be good research. And who decides what is good research? …)

  5. Anoneuoid says:

    Nullius in verba…

    Science is about universally distrusting everyone, including yourself.

    • Anoneuoid says:

      If you are going to use the argument-from-authority heuristic, do not trust people who have a track record of getting everything wrong. E.g., the WHO, or anyone who told you not to wear a mask a few months ago and now says the opposite.

      Trust people with a track record of correct predictions and impressive feats you can verify on your own. People trusted Archimedes after he built a device to pull a ship out of the water on his own; they trusted astronomers after Halley’s comet returned at very near the predicted date; etc.

    • Joshua says:

      Anoneuoid –

      > Science is about universally distrusting everyone, including yourself.

      While I get where you’re coming from, I’m not sure that thinking you’re going to distrust everyone (including yourself) is any more practically beneficial (or realistic) than trying to decide who to trust.

      It’s really hard to distrust everyone, most importantly yourself. And if you could do it, I’d say that way madness lies. Better, IMO, is to recognize the biases we all have in our trust-building processes, and seek to inform them rather than to eliminate them. The basic scientific method is an important tool in that regard. So is good faith engagement with others who are interested in good faith exchange.

      > Do not trust people who have a track record of getting everything wrong. Eg, the WHO or anyone who told you not to wear a mask a few months ago and now says the opposite.

      That’s the kind of erroneous thinking that is the mirror reflection of the problem with trying to determine who to “trust” as opposed to trying to decide how to evaluate probabilities.

      First, after saying that we should distrust everyone, including ourselves, you begin to whittle that back to distinguish not universally distrusting everyone but distrusting differentially. Not to say that a track record shouldn’t be “information” for evaluating probabilities – but neither should it be some kind of universal rule to be applied – as you did. And, of course, that’s on top of your (IMO) rather ridiculous statement that the WHO “get[s] everything wrong.” For the case in point, if the WHO advised against wearing masks with the information they had at the time, that doesn’t necessarily mean that we should just disregard their later advice to wear masks after more information became available. Your heuristic could be extremely problematic. Past behavior (or advice) can and should inform probabilities that current advice is correct, but it shouldn’t be simply “distrusted” because advice changed or because previous advice turned out to be erroneous.

      • Dale Lehman says:

        +1000 to your last paragraph. There is so much wrong with the statement that the WHO “get(s) everything wrong,” and even more with the example about advice concerning wearing masks. The mask issue was partially out of concern with ensuring that masks would be available to those most in need (front line workers) and not hoarded by the general public with less need. Beyond this, I tend to have more trust in people that have changed their mind as more information becomes available. Gauging trustworthiness by the trait of never changing your mind seems one of the poorer ways to decide who to trust.

        • Very good points Joshua and Dale.

        • At the moment, the healthcare industry is hoarding masks and because we’re “not allowed” to buy N95s in the general public, the demand appears to be met, and so we’re not making more… this is utmost stupidity.

          We have the capacity in the US to manufacture something like tens of millions of N95 masks every day. We should have been running those machines at full capacity from about mid Feb onward, which would leave us with something like a few hundred million masks available at the moment. Instead, people are out there sewing cloth masks like mad, when N95s in normal times cost something like $1.50 each, and for the general public they’re easily reusable after leaving them in a bag for 3 days. Also, it’s been shown that you can easily sanitize and reuse them up to 50 times with full effectiveness by heat-treating in a dry oven for 20 mins at 180F or so. So hospitals and first responders could easily be reusing these multiple times. Stupid, stupid, stupid, all around. Bad economic policy (ban on sales to the general public), bad health policy, bad everything.

          • Joshua says:

            Daniel –

            > bad health policy, bad everything

            Agreed. The lack of testing and lack of contact tracing are even more dumbfounding to me. Every day I shake my head and have trouble wrapping my mind around the current state of affairs. How did it get this bad?

            I have yet to see a good explanation why there isn’t a national policy of rolling out massive amounts of rapid tests which, while less accurate, could be administered repeatedly. The utility of tests that take 5 days or more is significantly reduced. Also I don’t get why we don’t have a widespread policy of pooled testing.

            It’s unreasonable to expect perfection, or to think that what makes the most sense to me should be public policy, and maybe there are good explanations for the policies we’re following; but in lieu of seeing those explanations this is all very, very frustrating.

            Nothing we don’t already know, but a reasonable overview here:


          • jim says:

            Problem: “so we’re not making more”
            Solution: allow the price to float.

            amazon and ebay and other cos and even states are taking it upon themselves to screw the public by controlling prices. Had the prices been allowed to rise initially, there might be a flood on the market by now.

        • yyw says:

          I find it disingenuous to cite mask supply concerns as the reason for not recommending masks. That was not what the public was told back in Feb/Mar. If mask supply is the concern, communicate it, along with all the caveats about uncertainty about effectiveness and alternatives like homemade cloth masks, etc. Changing one’s mind is fine; what is not fine is to go from “masks don’t work” to “masks work.”

          • yyw says:

            If public health officials had said “we don’t know” and “we are not sure” more, the trust in them would be higher. For mask wearing, one could argue that even back in Jan/Feb, a reasonable belief would be that it likely does more good than harm. At the very least, say something like “we are not sure if masks work” instead of “masks don’t work,” which to the lay public means “we are sure masks don’t work.”

        • Anoneuoid says:

          > Beyond this, I tend to have more trust in people that have changed their mind as more information becomes available.

          New information didn’t change anyone’s mind. The recommendation was originally a lie, according to the people who made it.

          “We were concerned the public health community, and many people were saying this, were concerned that it was at a time when personal protective equipment, including the N95 masks and the surgical masks, were in very short supply,” Fauci told The Street. “We wanted to make sure the people, namely, the healthcare workers, who were brave enough to put themselves in harm’s way to take care of people who you know were infected with coronavirus, and the danger of them getting infected. We did not want them to be without the equipment they needed.”

      • Anoneuoid says:

        > It’s really hard to distrust everyone, most importantly yourself. And if you could do it, I’d say that way madness lies.

        It just means you want as many other people as possible to look for logical errors in your argument and try to duplicate your methods. I don’t see what is so hard about it. Distrusting everyone, including yourself, is almost the definition of pure science.

        > First, after saying that we should distrust everyone, including ourselves, you begin to whittle that back to distinguish not universally distrusting everyone but distrusting differentially.

        Due to lack of resources, many times we have to fall back on fallacies/heuristics like argument from authority. It is not possible to be an expert in everything. So even though we know it is based on a fallacy, still we often need to trust someone.

        > For the case in point, if the WHO advised against wearing masks with the information they had at the time that doesn’t necessarily mean that we should just disregard their later advice to wear masks after more information became available.

        I don’t even remember the WHO recommending anything about masks. I remember a lot from the US government health experts about masks not doing anything when the general public wears them though, which has now been 180’d. Then later they came out saying they were lying for the greater good.

        The WHO has been wrong on:

        1) Human-to-human transmission
        – At the time Taiwan was warning it was transmitted this way but the WHO ignored them for political reasons

        2) Nosocomial transmission
        – Obviously possible from human-to-human transmission

        3) The high percent of mild/asymptomatic cases
        – Already known from the Diamond Princess, and easily inferred from the big 70k-patient study from China showing very low prevalence in young people back in mid/late Feb.

        4) The low percent of smokers
        – Was pretty obvious from evidence back in late Feb, clinched when Cameron Kyle-Sidell revealed the odd similarity to high-altitude sickness in late March.

        5) The use of “early intubation”
        – The WHO recommended “do not delay intubation” based on no evidence.

        6) Whether antibodies confer immunity
        – They claimed “no evidence antibodies confer immunity” when there are probably tens of millions of papers about antibodies conferring immunity. I never thought this is something that needed to be mentioned.

        Still to come:
        7) IV vitamin C for the severe patients.
        – Has been obvious since at least early Feb when we saw it was ARDS:

        Still, very little has been published on vitamin c:

        8) HBOT
        – Has been obvious since the link to high altitude sickness was pointed out:

        Only positive news is coming out about this:

        9) A kind of oxygen toxicity, reperfusion injury, reverse altitude sickness when patients who have been hypoxemic for days are suddenly blasted with supplemental oxygen (vs gradually increasing it). This is like ventilators 2.0
        – Was pointed out as early as mid April:

        10) Excess social distancing, sanitation, etc., stop the spread of many infectious pathogens, not just SARS2
        – If this goes on long enough there is going to be a huge problem when people are exposed to normal life again. Even for covid, the antibody and t-cell mediated cross immunity from common cold coronaviruses held by ~50% of the population may be lost.

        > Public health measures intended to prevent the spread of SARS-CoV-2 will also prevent the spread of and, consequently, maintenance of herd immunity to HCoVs, particularly in children. It is, therefore, imperative that any effect, positive or negative, of pre-existing HCoV-elicited immunity on the natural course of SARS-CoV-2 infection is fully delineated.

        I’m sure I missed a few. I don’t remember the WHO saying anything about waning antibodies, but maybe they did.

      • Anoneuoid says:

        I responded to this but it’s still stuck in the spam bin.

  6. Joshua says:

    Andrew –

    > what can replace “trust” in our reasoning?

    I think that the question of “Who can we trust” isn’t well-posed – as such, I think the question of “What can replace ‘trust’ in our reasoning” is more on point.

    IMO, we shouldn’t be looking to “trust” – to the extent that it implies some kind of suspension of disbelief.

    We should be looking to weigh probabilities, as best we can, with an understanding of (1) the tendency we all have towards “motivated reasoning,” (2) what our likely biases might be (and how engaging with others can help us to see them) and, (3) probably the most difficult of these three considerations, how to evaluate risk – particularly in high damage function/low probability circumstances in the face of uncertainty – and particularly if the risks play out over long time horizons.

  7. The question of how to determine trustworthiness is an old one. J.S. Mill (“On liberty”) said the following:
    “In the case of any person whose judgment is really deserving of confidence, how has it become so? Because he has kept his mind open to criticism of his opinions and conduct. Because it has been his practice to listen to all that could be said against him; to profit by as much of it as was just, and expound to himself … the fallacy of what was fallacious.” This is a starting point for current research on what I call actively open-minded thinking as a standard. The standard applies not just to individuals but also to institutions, like science, or different styles of journalism.

    An example of some research inspired by this line of thinking is this poster:
    (This is only part of what needs to be done. And it contains some minor statistical errors that will be corrected eventually.)
    There is also a lot of work on the relation between AOT and acceptance of fake news, conspiracy theories, etc., e.g.:

  8. Peter Dorman says:

    This is a problem I have to deal with not only in matters of science/policy/etc. but also everyday decision-making. For instance, yesterday I had to do some quick research into the environmental health aspects of what kind of flooring to put into a bedroom. I have snippets of toxicological knowledge in my brain from working on the economics of public health, but that doesn’t go very far. So what do the “experts” say?

    Looking back, I think I followed three guidelines:

    1. Try to filter out the “non-serious” sources. A source is serious if they have little obvious incentive to sell you on anything, they are sufficiently credentialed in the topic area, and they express themselves in a way that shows they are open and responsive to counterargument. That’s a first cut.

    2. If there’s a difference of opinion, take note of what seem to be the key issues, and filter again on the basis of whether the sources are engaging with them.

    3. In the end, treat the views of these sources as data points. It’s not a binary believe-reject situation. Attach a provisional subjective probability to a viewpoint, which can be combined with other sorts of information or altered as new viewpoints arise.

    From a what-is-science point of view, I suppose foregrounding areas of dispute and the use of argumentation that takes doubt seriously is the most relevant.

  9. Zhou Fang says:

    Now I’m thinking back and wondering if I have myself used the “unlikely to have happened by chance” phrasing before. :(

    • Ben S. says:

      What exactly is wrong with that phrasing? That it leaves out the qualifier “assuming the null hypothesis”?

      • Andrew says:


        The statement, “When a study or a journalistic publication says that a study’s finding was ‘statistically significant,’ it means that the results were unlikely to have happened by chance, assuming the null hypothesis,” is also false. Assuming the null hypothesis, the results did happen by chance, by definition. The correct definition is something like, “it means that it was unlikely to see results as or more extreme than the data, if the data were really produced by a hypothesized chance process.” Statistical significance is never telling you how likely it is that the results happened by chance. Indeed, it’s not really clear what that statement would mean.
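A small simulation (my own illustration, not from the post; the coin-flip scenario and all numbers in it are hypothetical) makes the correct definition concrete: the p-value is computed by assuming the hypothesized chance process and asking how often it produces results as or more extreme than the observed data.

```python
import random

random.seed(1)

# Hypothetical example: we observed 60 heads in 100 coin flips.
observed_heads = 60
n_flips = 100
n_sims = 100_000

# Null model: a fair coin (the hypothesized chance process).
# The p-value is the probability, UNDER THIS MODEL, of results as or more
# extreme than what we observed -- not the probability that our observed
# results "happened by chance."
extreme = 0
for _ in range(n_sims):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    # two-sided: as far or farther from the expected 50 as the observed 60
    if abs(heads - 50) >= abs(observed_heads - 50):
        extreme += 1

p_value = extreme / n_sims
print(f"simulated two-sided p-value: {p_value:.3f}")
```

The simulated value should land near the exact binomial tail probability (about 0.057). Note that nothing here says anything about how likely it is that the fair-coin model is true; the calculation is conditional on that model from the start.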

        • The NHST result doesn’t tell us anything about what happened, it tells us about the “null model” of what might happen in the future…

          if the p value is small, it tells us that assuming the null model is the actual generating process, in the future we will rarely get similarly unusual data.

          if this seems often useless, it’s because it is:

          Driver: “I have a flat tire”
          NHST mechanic: “Assuming your tires are nearly indestructible, this will rarely occur in the future p = 0.0027”
          Driver: *backs slowly away*

          When the “null model” is actually some well developed model, such as a fit to a long history of real events, then it’s useful because it tells us that the model maybe doesn’t apply to this current circumstance even though there is a long history of it making sense in real world cases.

          Driver: I have 3 flat tires all at once
          NHST Mechanic: In my experience with tens of thousands of customers, this almost never happens when the road was clean and the tires were not sabotaged p = 0.00022, I’m guessing the road had some debris, or someone slashed your tires.

  10. Jordan says:

    One way to work through judgments of who to trust is to focus on the extent to which the other person works to remove the need for trust in the first place. For example, programmers and researchers who make their data/code available for the public to review, news articles that provide links to the actual studies they’re describing, etc. I know these are all things discussed here before, but what I’m trying to get at is that there are some entities that make it plainly obvious that they don’t WANT you to need to trust them. When an expert or some other authority has done everything they can to remove the need for anyone to trust them, I tend to trust them more. Likewise, anyone who utters the words “trust me” or “believe me” tends to raise red flags and maximum suspicion.

    • “focus on the extent to which the other person works to remove the need for trust in the first place.”

      +1 to that. Even with code and raw data (or worse, transformed data) available, it can still be difficult to get to that level of trust because people don’t leave the appropriate trails of bread crumbs in their code to get from their data to their analyses automatically, or at least with automated steps. Or the code’s so convoluted you can’t read it (writing code is as hard as writing prose clearly, if not harder given all the constraints).

      My parents really believed in fact checking. They sent me to the reference books all the time; I’d get lost in the World Almanac or last year’s baseball stats rundown, so maybe they were just trying to get some peace and quiet. It didn’t matter whether it was a discussion of what Johnny Bench did last year or the meaning of a word or the location of a country I’ve never heard of. They applied the same standards to themselves. Along the same lines, I was never told to salute the uniform and not the wearer, so I’ve had an iffy relation to authority and being asked to trust people in power my whole life. For example, I distrust the modern executive style (seen in deans and university presidents these days), where any question is met with a politically aimed diversionary tactic rather than an answer. I distrust the people behind those answers instantly, while still respecting the skill with which I’m being deflected.

      • Jordan says:

        Agreed, making code and raw data available are insufficient, but consider how a researcher might respond to questions about their convoluted/confusing code. The researcher could just ignore the questions (i.e., “just trust me”), or he/she could work to help people understand the code and why the particular analytic approach was chosen. Of course, that extra work wouldn’t be needed if the code was well-written and thoroughly-commented to begin with, but coding is difficult and people can be forgiven for failing to achieve perfection. What matters to me is the response of the researcher. If he/she chose to incur extra costs to address the confusion caused by their crappy code, that would signal to me that the researcher truly wants people to understand what they did and why they did it. Combine this with a history of choosing tactics of “trustless-ness” (e.g., history of pre-registering studies and analytic methods) and you get a general picture of someone you can probably trust *if they are ever in a position where they need to request others to trust them*.

        I guess my point is less that we should only trust people who engage in X, Y, and Z behaviors and more that we should trust people who clearly tend to prefer “trustless” solutions over “trust me” solutions. I trust people who see requests to be trusted as requests for a huge favor. We might be willing to let a friend borrow our car when we know that their car is in the shop and it’s the first favor they’ve requested in years. But imagine another friend who wanted to borrow your car all the time even when they had a working car of their own (Maybe they like your heated seats?). That second friend is quickly burning through a scarce resource (i.e., your good will toward them) when you can clearly see that they don’t need to be doing so. At best, they’re being petty.

        Asking me to trust you is like asking me if you can borrow my car. Why you’re asking for the trust matters a lot. If there are easy, low-cost, trustless solutions available and you have a known habit of avoiding those solutions, then I’m going to be very suspicious. Contrast that with the person who asks me for my trust as if they believe it is their last resort. Perhaps they even seem a little ashamed because they failed to find a trustless solution (e.g., they can’t share a dataset because it contains confidential information). In these cases, I’m much more inclined to trust someone, especially if they have a history of treating trustless-ness as the ideal.

        It’s kind of paradoxical if you think about it: The best way to become a trustworthy individual is to go out of your way to avoid ever actually needing people to trust you.

    • Andrew says:


      Your comment reminds me of the classic line from Dan Davies, “Good ideas do not need lots of lies told about them in order to gain public acceptance.”

    • jim says:

      “focus on the extent to which the other person works to remove the need for trust in the first place.”

      It’s not safe to assume that a reference is a sign of honesty or integrity.

      Lots of people who want you to trust them without looking have the same idea: put in the links and most people won’t bother, or they won’t get it even if they bother.

      We’ve already seen that there are lots of studies where the researchers provide their data, but the data is still poor and the analysis is still poor.

  11. Martha (Smith) says:

    Andrew said,
    “The larger question is, if we can’t trust the experts, who can we trust? Or, if we can’t trust anyone, what can replace “trust” in our reasoning? No easy answer here.”

    I think you may be falling into the trap of not giving uncertainty its due. We need to think not in the dichotomy of “can trust” vs “can’t trust”, but more in terms of “to what degree can I reasonably trust this statement/this person/this claim?”

  12. I am a little wary of the use of the term ‘reasonably’. That strikes me as a low bar, given that the product-defense firms hired by special interests are skilled at making very convincing cases that can elude even scientists, because of the socialization and training that scientists undergo throughout their lives. We need more investment in public-interest health research, for sure. We have very talented public health officials and academics who, through persistence, have had some success in raising the bar of research.

    It’s in the legal environment where uncertainty is used to the advantage of corporations. I’m only now getting somewhat of a handle on the use of statistics in the law. Thanks to some forums I attended here in DC.

    • Martha (Smith) says:


      You said, “I am a little wary of the use of the term ‘reasonably’.”

      I’m not clear whether or not this comment is in reference to my comment “We need to think not in the dichotomy of “can trust” vs “can’t trust”, but more in terms of “to what degree can I reasonably trust this statement/this person/this claim?””

      If it is: In my comment, I take “reasonably” as a subjective judgment of the individual judger, and expect that this judgment may vary from judger to judger. Also, I make the statement with the assumption that the judger is thinking carefully in making their judgment.

      • Martha,

        The cases that I’ve come across where ‘reasonably’ is used strike me as leading an argument to a predetermination of evidence and conclusions. I try to avoid using it. Personal preference.

        You are correct it is a subjective judgment varying from judger to judger. And I wasn’t questioning your judgment either.

  13. yyw says:

    Distrust anyone who does not communicate uncertainty explicitly on any subject, unless the evidence is beyond reproach. Re mask wearing: any scientist who did not convey uncertainty about effect size, likely side effects (an overly inflated sense of security, etc.), and applicability to covid-19, whose transmission mechanism is still not known precisely, cannot be trusted. They either subscribe to binary thinking themselves or think that we are too stupid to process anything beyond a binary recommendation.

    • My observations of mask-wearing on the street show that many don’t wear their masks very securely. But hey, I don’t wanna be lambasted for not wearing one.

      I think social distancing and handwashing make more sense to me.

    I am trying to boost my immunity. And I’m not always sure that any one strategy is the ultimate or correct one.

  14. Eli Rabett says:

    You ask the wrong question. The right one at times of uncertainty is how do we determine who are the grifters whose opinions we should trashcan?

    Posing the question this way makes it much easier to answer.

    • Andrew says:


      “Grifter” is a bit vague. For example, consider someone like Cass Sunstein or Steven Pinker. Are they “grifters”? I’d say no, they’re just too trusting of credentialed academics. How about the critical-positivity-ratio researchers we discussed yesterday: are they “grifters”? In a way, sure: they’ve ridden pseudoscience to fame and fortune. But I’m guessing they’re also just gullible, believing things that support their views. I’m not sure where to draw the line. Even flat-out frauds like the pizzagate guy and the disgraced primatologist are probably true believers in their own theories.

      • Eli Rabett says:

        Let Eli start over. The question was about which experts to trust. Eli’s point was that it is easier to spot the experts NOT to trust, and in a confused situation that is very helpful. If you want a parsing rule, it’s hard to beat d-squared’s rule that fibbers’ forecasts are worthless.

        Real experts learn whose work not to trust. That journalists purposely eschew that sort of knowledge in building their view from nowhere makes for big trouble. Find someone who has been untrustworthy in the past and steer clear. Be very wary of an expert flying in from somewhere else: they lack the background knowledge.

        Amateurs are prone to get caught in the traps that entangled the professionals’ grandfathers, and it can be difficult to disabuse them. Especially problematical are those who want science to validate preconceived notions and go expert shopping, and those willing to believe they are Einstein and the professionals are fools.

  15. Michael J says:

    I think a good heuristic is basically to trust the group rather than individuals, with several important caveats. This assumes, of course, that the topic at hand is something you yourself are not an expert in; if you were, you would just evaluate the evidence yourself and come to your own conclusions. But for the rest of us, following the group seems to be a safe strategy, because experts are somewhat independent of each other (in the statistical sense), so the likelihood of them all being wrong can reasonably be estimated to be low. The problem, of course, is that they’re not actually independent, so you have to evaluate the dependence and see if there’s anything that could cause them all to be wrong.
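    The independence point can be made concrete with a toy calculation (an editorial illustration, not part of the comment; the numbers are made up): if each of k experts errs independently with probability p, the chance that all of them are wrong is p^k, which shrinks fast as k grows, while even a small probability of a shared, common-cause failure (same training, same flawed study) dominates that.

```python
# Toy model: probability that ALL experts are wrong, with and without
# a common cause of error. Illustrative only; parameters are invented.
import random

random.seed(1)

def p_all_wrong(k, p_individual, p_shared_bias=0.0, trials=100_000):
    """Estimate P(all k experts wrong) by simulation.

    With probability p_shared_bias, a common cause makes every expert
    wrong at once; otherwise each expert errs independently with
    probability p_individual.
    """
    all_wrong = 0
    for _ in range(trials):
        if random.random() < p_shared_bias:
            all_wrong += 1  # common-cause failure: everyone wrong together
        elif all(random.random() < p_individual for _ in range(k)):
            all_wrong += 1  # every expert failed independently
    return all_wrong / trials

# Independent experts: analytically 0.3**5 = 0.00243, tiny.
print(p_all_wrong(k=5, p_individual=0.3))
# A mere 5% shared bias dominates: about 0.05 + 0.95 * 0.00243, i.e. ~0.052.
print(p_all_wrong(k=5, p_individual=0.3, p_shared_bias=0.05))
```

    The sketch shows why evaluating the dependence matters more than counting heads: twenty times the risk here comes from the shared failure mode, not from any individual expert's error rate.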

    Then there’s the problem of heterogeneity in expert opinion. Take aerosol transmission. Some epidemiologists are adamant that it really isn’t a big deal at all. Others believe that it could be possible. So what should you do if you yourself know nothing about virology or aerosols or things like that? Like others in this thread mention, I think the answer is to embrace uncertainty and input that into your decision function.

    So I guess my answer is don’t trust any individual experts if you don’t have informative knowledge about them and instead rely on broad groups. And, as always, embrace uncertainty.

    • Andrew says:


      Sure, but if the broad group is “Ivy League psychology professors who’ve published in top journals,” then you could have problems. Sloppy members of this group are busy spending the reputation that was built off of decades of hard work by more careful members of the group.

      As you say, a key problem in this evaluation is dependence among the experts.

      • Michael J says:

        Is that a broad group though? I meant more like “most psychology professors” or even “most Ivy League psychology profs”. The examples/criticisms of some social psychology you’ve pointed out are more one-off things, no? Like even before the whole replication crisis was pointed out, would most psychology researchers have backed that power pose paper? Maybe they wouldn’t have criticized it, but that’s different from agreeing with it or advocating for it.

        Hmm, actually, as I write this I’m finding that it’s probably often difficult to gauge what the consensus or popular view is among experts. Journalists don’t always pick experts who are representative of their fields. A layperson could easily think that the WHO, as a well-known international health organization, is representative of the medical/epidemiological field, so when it says aerosol transmission isn’t a big deal, they conclude that’s what experts generally believe.

        Yeah so I guess I’m debating myself in a circle back to square 1. This is hard.

        • Andrew says:


          I guess you could label the broad group as “the academic psychology community,” which publishes lots of good stuff and gives awards to lots of solid researchers but also has promoted a lot of really bad stuff too. I think that, for many years, journalists just assumed that the academic psychology community knew what they were talking about. They didn’t realize that this community was perfectly willing, as a community, to promote bad work by people who were well connected.

          Nothing unique about psychology, of course. That’s just an area that’s accessible, so we can see the problems clearly.

  16. rm bloom says:

    “Epistemic dependence: I find myself believing all sorts of things for which I do not possess evidence….The list of things I believe, though I have no evidence for the truth of them, is, if not infinite, virtually endless.” [Article by Professor Hardwig]

  17. jonathan says:

    In what context? A context which is developing rapidly contains more uncertainty, so experts can be wrong or wrong enough that it matters to you. In an uncertain environment, even if you fall back on your training and trust commanders (in your field), they are still more likely to be wrong because it’s an uncertain environment.

    It’s hard to think of examples where uncertainty didn’t require unanticipated adjustment. It took a while for the Union to figure out its strategy, and part of that figuring included Grant emerging. We fired a lot of people in WWII, particularly in the Navy, not because they were incompetent, but because the war wasn’t the right fit.

    With covid, today I read a Fed member say we should shut the country down because then we could really open the economy while keeping the virus in check. I think that is incredibly wrong: he gets the concept of a virus wrong, and he doesn’t realize that a shutdown sends the signal that you should stay shut (save the cost, because you can’t trust you won’t be shut again). My point: in an uncertain environment, I don’t trust any expert. I don’t trust any finding until it has held up long enough that it seems likely to remain materially true. I don’t trust any estimation of materiality, because those estimates are the most likely to be wrong, as opposed to relatively simple observations or similar derivations.

    The problem with uncertainty is that it’s uncertain. And that uncertainty attracts guesses. That to me is crucial: if you’re talking about ‘science’ in an uncertain environment, you can have relatively true factual-level findings that turn out to be immaterial. So trust in an expert’s process can’t address the uncertainty at the higher level, where it really becomes another guess.

  18. Terry says:

    The larger question is, if we can’t trust the experts, who can we trust? Or, if we can’t trust anyone, what can replace “trust” in our reasoning?

    You don’t look for experts you can trust, you look for experts that are convincing.

    I’ll say it again. You have to let both sides speak. Then you listen to both sides and see who is more convincing.

    • John N-G says:

      Listening to both sides is also essential to one of my guidelines: Trust the one who presents the opposing argument in the best possible light.

      Not coincidentally, this is similar to a guideline for good science paper writing: Present the opposing argument in the best possible light.

    • Jonathan (another one) says:

      Exactly right. There are no shortcuts here, unfortunately. Both sides have to explain to you, using concepts you can understand and referring where necessary to models and literature why they are right. Then *you* have to make the final judgment. You aren’t trusting anybody. You’re letting each of them make their best case, whatever it is, and assessing the relative merits of the two cases. This is really, really hard, which is why people want shortcuts. There aren’t any.

    • Martha (Smith) says:

      Terry said,
      “You don’t look for experts you can trust, you look for experts that are convincing.

      I’ll say it again. You have to let both sides speak. Then you listen to both sides and see who is more convincing.”

      This may be OK for a one-off situation, but in the long run, it is important to consider the “track record”. For example, someone might seem convincing in their reasoning in a particular situation, but then later you find that there are holes in their reasoning. This experience should lower the trust you put in that person’s reasoning.

  19. Dan F. says:

    One of the fundamental goals of education is to teach the student how to decide what authorities to trust.

    That someone is considered an “expert” by some collective might be part of the considerations used in making such a decision. The problem comes when it is the entire basis of such a decision.

    In general we grossly overestimate the usefulness of validation by institutions in part because our education does not teach us how to deal with situations of great uncertainty. In general one has no idea who to trust (if anyone) and one is not well equipped to make decisions in such a context.

  20. MaximB says:

    The obvious question that follows from “Which experts should we trust?” is “What are the consequences of my trusting one expert or another?”. If it doesn’t affect my decisions, it doesn’t matter too much. If I need to make a decision based on expert judgment, that’s a different thing.

    I agree with Peter Dorman’s point 3. But let’s take, for example, statins, a controversial subject on which I don’t have a definitive opinion. Most of the experts are pro statins, and as usual, whenever expert opinion favours the interests of the industry, their voices are amplified while their critics have less impact. I’ve read lots of opinions pro and con, browsed through study results, and after weighing the evidence, I tend to lean toward the side of the critics (hidden adverse events from long-term use, questionable efficacy, etc.).

    But I’m not a doctor. So, several situations could arise:

    1) A doctor prescribed statins to my mother
    2) A doctor prescribed me statins
    3) Somebody tells me doctors prescribe statins to their patients

    In case 3 I’d just say they are wrong. But in case 2, would I take them? I don’t know; maybe it would depend on how convincing the doctor is and how bad I’m feeling. In case 1, would I contradict the doctor and try to stop my mother from taking them? I don’t think so.

  21. DCE says:

    As I have noted elsewhere, the frequency with which the phrase “I don’t know” explicitly appears in Andrew’s writing (IMHO, extremely high) is perhaps the greatest contributor to my confidence in the balance.

  22. jim says:

    Why are we even asking this?

    “Trust” is what you put in your friends and family and partners. It has no place in public communications.

    The job of the public communicator is to *demonstrate that their claim is true*. No trust required.

    Here’s the problem with this “trust” thing: most people aren’t intentionally lying when they make statements that others perceive as false or misleading. What they *are* doing is unintentionally expressing their personal bias. I can tear every Krugman column to shreds. I’d love to debate Krugman. But he’s not lying. He’s presenting reality as he understands it. And there is every incentive for public communicators to understand reality in a way that benefits them. That’s the whole reason they’re communicating to the public in the first place.

    So your job as a citizen when reading public statements by talking heads and pundits is to forgo the trust shortcut and deploy the critical thinking skills you’ve supposedly been given.

  23. Jai says:


    As long as you’re splitting hairs about the medical journalist’s interpretation of the term “statistically significant,” wouldn’t it be more careful to assign it a degree of truth rather than simply calling his statement “false”?

    Likewise regarding his praise for RCTs and reviews.

    While I can understand reasonable debate over these value judgements between scientists, I think the target audience for that op-ed was the typical lay reader of news, and I think the medical journalist’s guidance would help people dismiss misinformation like the “Plandemic” documentary or the anti-science of America’s “Borderline” Doctors (as I call them).

    • Anon says:

      Do we really want a society where people ‘dismiss misinformation’ out of hand, without even stopping to consider it?

      That seems really scary to me. If you have a society where people blindly trust the experts, and are explicitly told not to think for themselves, then all you need is that those few anointed get deceived once. “The experts might be wrong” is not a far-fetched theoretical concern, as I’m sure you know from reading this blog.

      If you have a society where anyone, crackpot or “respectable scientist,” gets an unlimited supply of rope, you will have some people believing stupid things. But what’s the alternative? To silence the crackpots, just because our opinion of the man on the Clapham omnibus is so low?

      In the end, I think openness is better. People will maybe believe silly things more often, and maybe the total amount of correctness in the world will be lower, but that’s not what’s dangerous; what’s dangerous is when everyone is wrong at once.

      Quite frankly, this article reminds me of Proverbs 3:5:

      >Trust in the LORD with all your heart
      >And do not lean on your own understanding.

    • Andrew says:


      “Splitting hairs,” huh?

  24. Anon says:

    “Careful contrarianism” is a good approach: Look at times when the consensus has been wrong, then look at who was questioning it at that time. If they are questioning some other orthodoxy, that’s worth looking into. But if they’re just questioning all the orthodoxies, all the time, odds are they’re not that trustworthy after all. That doesn’t mean they’re not worth looking into, however! Alex Jones may be a kook, but you can always trust him to give you the “other side of the story,” right or wrong.

    More generally, I think people should have an approach to epistemology that’s more adversarial than inquisitive. Instead of trying to dig into the matter yourself, try to find a scenario where you have two sides vocally disagreeing with each other. Then you can look at only the points of contention, and ignore the large amounts of bulk/routine stuff that isn’t in dispute.

Leave a Reply