That “not a real doctor” thing . . . It’s kind of silly for people to think that going to medical school for a few years will give you the skills necessary to be able to evaluate research claims in medicine or anything else.

Paul Alper points us to this news article by Abby Phillip, “How a fake doctor made millions from ‘the Dr. Oz Effect’ and a bogus weight-loss supplement,” which begins:

When Lindsey Duncan appeared on “The Dr. Oz Show” in 2012, he was introduced as a “naturopathic doctor” and a certified nutritionist. . . . But Duncan wouldn’t necessarily know anything about the chatter in the medical community, because he is no doctor. In announcing a $9 million settlement with Duncan this week, the Federal Trade Commission more accurately labeled him a “marketer” . . .

The entire story is fascinating and horrifying (especially in light of recent news regarding who’s in charge of our public health system), but what I want to focus on is a particular aspect of this story, which is the idea that this guy is not a real doctor.

What this guy was doing on TV was promoting a bogus health supplement, “green coffee bean extract.” What he was nominally doing was providing health advice, sharing information about the effectiveness of this supplement for weight loss.

Here’s my point. Set aside all the fraud for a moment and suppose this person was presenting legitimate medical research. In what way would it be relevant that he was a doctor, a physician, a legitimate M.D., etc.? Doctors are trained to treat patients. They’re not trained to evaluate research claims. Sure, in medical school I guess they get some lectures on statistics or whatever, but an M.D. or similar degree is pretty much irrelevant to evaluation of claims of effectiveness of a health supplement. Really, a marketer would be just as good.

So, yeah, I’m glad that this guy has to cease and desist, but I feel like the whole not-a-real-doctor thing is entirely irrelevant. It’s relevant that he said he was a doctor, because that was a lie about his professional credentials, which is relevant to evaluating any other claims he might make in this capacity. But, to me, the bigger problems are that: (a) it seems that you need to be labeled as a doctor to have credibility when making medical claims, and (b) it seems that, once someone’s labeled as a doctor, his claims are taken seriously by default. (This is similar to the problem of the credulity rut that we discussed recently, with economists and journalists taking crappy regression discontinuity studies seriously by default.)

Dr. Oz

Abby Phillip’s article continues by slamming Dr. Oz, the surgeon and TV host:

Their moneymaking scheme, however, was only possible with the help of Mehmet Oz’s increasingly maligned self-help show. . . . During the taping, Dr. Oz nodded along with Duncan’s pseudoscience gibberish, according to a transcript that was included in court documents. . . . There was then, and there is now, no scientific evidence that green coffee bean extract promotes weight loss. . . . Between the taping and the time the show aired, Oz’s producers e-mailed Duncan to ask whether he had a preferred green coffee extract supplier. . . . Meanwhile, “The Dr. Oz Show” is still on the air, despite his association with numerous debunked weight-loss products. A recent study found that half of the medical advice he dispenses is baseless or wrong. . . .

Dr. Oz is, of course, a real doctor. But so what?

It’s kind of silly for people to think that going to medical school for a few years will give you the skills necessary to be able to evaluate research claims in medicine or anything else.

Lots of doctors can do this sort of thing—lots of them are indeed excellent at evaluating research claims—but if so, I don’t think it’s the medical training that’s doing it. The medical training, and their practice of medicine, gives them lots of relevant subject-matter knowledge—I’m not saying this is irrelevant—but subject-matter knowledge isn’t enough, and I think it’s a big mistake when media organizations act as if an M.D. is a necessary or a sufficient condition for evaluating research claims.

P.S. Dr. Oz works at Columbia University, just like me! And Columbia’s proud of “the straight-talking guy in the blue scrubs.” As the alumni magazine put it in their celebratory article, “That’s Healthfotainment!”

P.P.S. Just to clarify: I’m not saying that statisticians are better than doctors. Above I wrote that I don’t like the “not a real doctor” thing. I also wouldn’t like a “not a real statistician” thing. When evaluating medical research, I’d definitely like people with medical training to be involved. Here’s an example: I would never have been able to do this project on my own.

So please don’t take the above post as anti-doctor. Not at all. An M.D. is neither a necessary nor a sufficient condition for evaluating research claims. The same goes for a Ph.D. in statistics or anything else.

68 thoughts on “That “not a real doctor” thing . . . It’s kind of silly for people to think that going to medical school for a few years will give you the skills necessary to be able to evaluate research claims in medicine or anything else.”

  1. A mild generalization of a lot of what you write on this site is that trust can’t be mechanically generated. People don’t get automatic trust by being credentialed; papers don’t get automatic trust by being peer-reviewed; journals don’t get automatic trust from their names. Trust is really hard to accrue!

    • I agree. This is nonetheless a delicate subject to address, as the strategy of “dismissing credentials” is widely deployed within pseudoscience to criticize scientific knowledge and liken experts (at least “normative”, degree-holding experts) to proponents of opinions instead of messengers of verifiable information, and thus we eventually get to the point where everything is debatable.

      In the case of providing “expertise” on any subject, however, the expert’s knowledge of the information can vary over a whole continuum (from a hazy, low-level knowledge of the field up to having directly observed and analyzed the raw data), and the type of information being commented on (even from the same field) can also influence the degree to which the expert can provide a valuable assessment. So the credential problem ends up requiring us to set an arbitrary threshold for when we consider the expert qualified to evaluate the information.

      I believe the problem here has more to do with the general public being deceived as to the competence of MDs regarding the evaluation of scientific information (at least the competence that one would gain after completing MD studies), which is more noticeable in this field than others (maybe), and not with the utility of having credentials per se. However, I like to think that more often than not, the media do reach out to MDs who also actively participate in research (scientists?).

      • “the strategy of “dismissing credentials” is widely deployed within pseudoscience to criticize scientific knowledge and liken experts (at least “normative”, degree-holding experts) to proponents of opinions instead of messengers of verifiable information”

        To the extent that experts **are** actually offering opinions vs offering factual information, the value of expertise is debatable, since under such conditions it has proven either useless or destructive. Unfortunately some “experts” like to blur the line between “verifiable information” and opinion in order to advance their position or views on a given issue. For example, when Krugman wags his favorite 50-year-old study on the minimum wage and claims it resolved the discussion.

        So it’s important that experts who understand the fact/opinion distinction keep the heat on their colleagues to keep that line sharp.

        Even if you have to be negative.

        • To the extent that experts are actually offering opinions vs offering factual information, the value of expertise is debatable, since under such conditions it has proven either useless or destructive.

          +1

        • Even if doctors are just as unprepared to evaluate statistical claims as your average green coffee bean peddlers, they might still possess an incentive to proceed with caution in reporting medical claims: they could lose their medical licenses if accreditation boards, staffed with actually knowledgeable professionals, decide that their pseudoscience is too far out of line. I’m not claiming that boards really do engage in this kind of disciplining consistently, but the threat of it could still give accredited doctors an extra reason to be careful that people like Duncan lack. And in turn this might justify special concern with regulating the marketing practices of non-doctors in particular.

        • jim says: “For example, when Krugman wags his favorite 50-year-old study on the minimum wage and claims it resolved the discussion.”

          The problem is, everything starts looking like opinion and nothing much can be done to demonstrate that an “opinion” is so well established as to become “fact”. I mean, R.A. Fisher was complaining about NHST before most of us were born, but by habit of false equivalence some people will dismiss Gelman or Greenland as simply p-value haters with no real evidence for their point of view.

          In the end we’re not that far from “A man hears what he wants to hear and disregards the rest”.

        • ‘In the end we’re not that far from “A man hears what he wants to hear and disregards the rest”. ‘

          Yeah, I share your concern but I don’t think it’s that bad. The min wage is still debatable. Vaccines aren’t. We really are billion year old carbon. But whether we have to get ourselves back to the garden is debatable. There is lots of noise, but real things are happening.

          We just have to be as careful as possible to accurately assess what’s real and try to tamp down the noise. The more we reduce the noise, the less credible the “it’s all opinion” people become.

      • I think there’s a risk of “circling the wagons”, i.e., giving undue weight to what you call “normative” experts in areas where their expertise is only partially relevant (e.g., where science shades over into policy or ethics), as a response to pseudoscience, though.

        Pseudoscience is going to happen anyway, since there’s a demand. Opposing it shouldn’t justify worse science or exaggeration of certainty.

  2. While I am sympathetic to the idea that doctors don’t get enough training in evaluating medical evidence (or simply don’t give enough thought to it), it is an unfair exaggeration to equate us to naturopaths or to say we are essentially unqualified. Evaluating medical evidence is a real part of the formal and informal curriculum of medical training. It is tested on the official licensing exams. I have been critical of Dr. Oz and his platform’s promotion of supplements in particular. But that is no excuse for extreme cynicism.

    • That’s all fine and good, except when MDs speak out on research in blatant attempts to influence policy and get their conclusions disastrously wrong.
      For example, in the August issue of the journal of the American Academy of Pediatrics they published commentary by doctors Benjamin Lee and William Raszka, both from the Department of Pediatrics, Larner College of Medicine, University of Vermont. (https://pediatrics.aappublications.org/content/146/2/e2020004879)
      Lee and Raszka claim that because there are currently fewer pediatric cases of COVID-19 in the general population, this inherently means pediatric patients are less likely to be able to spread the disease. Their article and synopsis were held up by school districts across the country as being “factual” and proof that reopening brick-and-mortar schools was a safe decision. Their erroneous conclusion put lives at risk.
      The authors fail to grasp that correlation does not imply causation. Basically, their conclusion (that children are not effective spreaders of coronavirus) is not supported by the two studies they mention.
      The studies they mention state there is limited evidence of children being the index case in households and that normally it’s the adult who introduces it to the home. What those studies do not account for, though, is whether the parents are keeping kids at home and isolated out of caution. If the kids aren’t going out into their communities as much as the adults, then how can they catch and spread the virus at home?
      How two physicians failed to understand such a simple concept is beyond me.
      In a larger study (n = 10,962) just published from South Korea, they found pediatric patients are just as likely as adults to spread the virus.
      https://wwwnc.cdc.gov/eid/article/26/10/20-1315_article
      “We showed that household transmission of SARS-CoV-2 was high if the index patient was 10–19 years of age. ”
      They found children under 10 were less likely to spread the disease, but again that was because younger children were more likely to be kept at home and away from social contact with others, and therefore less likely to catch it.

  3. Most doctors have undergrad degrees in other subjects, typically biology + something. Commonly that would include at least basic statistics. That should give them the basic research tools.

    Credibility: the old saying “forget everything you learned in school; the real world doesn’t care” is still pretty much true. Oz doesn’t need to *be* a doctor. He doesn’t need any of the knowledge or skills of a doctor. He just needs the social panache that the title provides. Andrew might say the title confers “credibility” but I say not. No one cares about his credibility. Just his social standing.

    • I don’t know; before I got interested in stats I had to take an undergrad stat course. We learned basic ideas of probability, and hypothesis tests (many variations of t-tests). We did not discuss power, Type I error, study design, or anything you would need to evaluate the methodology used for a given analysis. I don’t think an undergrad stats course at most universities does much more than check off a requirement for graduation.

      As another example, I sometimes advise med students at my university on research projects they have to do. Everyone I have advised has been very uncomfortable with statistics. Many MDs I know who want to do research end up doing a master’s degree in clinical research, where they learn the necessary skills.

      • Neal said,
        “I don’t think an undergrad stats course at most universities does much more than check off a requirement for graduation.”

        I agree. In my experience, a lot of “stats 101” textbooks teach misunderstanding more than understanding.

        • *sigh*. The real world is tragic. Apparently I did my undergrad in Oz. The only non-science degree was technical writing, and even tech writers had to take one semester of calculus and one of physics.

          My nephew got a B.S. in Business, and the only math class he had to take was “business statistics”!!

          We have to stop degree creep.

    • Ezekiel:

      Indeed, I have no idea what’s being taught in medical school or residency, or what’s required to be a practicing physician. I think doctors know a lot, but I don’t have a clear sense of what they’re taught or how this is taught. I don’t think they’re taught much statistics, which is fine—statistics is hard, and time is limited. This is not a slam on doctors.

      I don’t want “not a real doctor” to be a disqualification for talking about medical research, any more than I’d want “not a real plumber” to be a disqualification for talking about water resources. Imagine, for example, that someone gave a contrarian take on the Flint water supply. If that take is wrong, I’d like to see it criticized on its merits. I wouldn’t want someone to think that they’re shooting it down by saying, “This guy’s not even a real plumber.”

    • From my experience working with clinicians from 1986 to 2007, I was regularly stunned by how little they understood about research. On the other hand, many were very bright, learned a lot, and fairly quickly became experts. But that was never right out of medical school, even for those with master’s degrees and PhDs.

      One way my director (at one time Chief of Medicine) dealt with this was to make it a requirement for any clinician who wanted to work with me that one of our group’s clinicians experienced in research would need to be involved.

      On the other hand, right out of a statistics program, whether master’s or PhD, very few I have worked with understood much about research. Statistics is hard and research is hard, and it takes time after graduating to get some combination of both.

    • I did my first year of med school before deciding I didn’t want to practice medicine, and now my job as a statistician is to help PIs (mainly attending-level physicians), fellows, and residents on their research projects (particularly the stats). In my experience, they did not teach statistics in med school other than a brief lecture aimed at prepping for the few stat questions on the all-important USMLE Step 1 exam. The MDs that I have worked with know very little about evaluating research, designing a study, or stats. This seems to be the norm. This doesn’t mean they aren’t smart; it means they weren’t taught.

      In addition, I would say that even a master’s or PhD in stats right out of school might not have been taught the skills either. I would say that until I invested a lot of my own time reading blogs like this one and Data Colada and others, I would not have had the skills either. I would also say that I’m still learning and wouldn’t be as sharp as many readers on this blog at evaluating research claims. IMHO these things are hard and take some real-world practice.

  4. I think that the idea of doctors being regulated professionals is the more important factor in this case. If a doctor disseminates harmful ideas they have a possibility of losing their license. If a guy with a marketing degree disseminates harmful ideas they can just disappear and find another job elsewhere. I don’t trust any individual to interpret research correctly, but if someone is willing to risk their career to promote something it would suggest that they have vetted the research with a variety of actual experts.

    • Many, many doctors contribute to a harmful and expensive culture of over-testing because they don’t understand the base-rate fallacy or the phenomenon of noise chasing. Always, “This next test may give us the answer.” Many patients recognize this.
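      To spell out the base-rate point, here is a minimal sketch with made-up numbers (a 1%-prevalence condition and a test with 90% sensitivity and a 9% false-positive rate; none of these figures come from any real test):

      ```python
      # Hypothetical, purely illustrative numbers.
      prevalence = 0.01           # 1% of tested patients actually have the condition
      sensitivity = 0.90          # P(positive | disease)
      false_positive_rate = 0.09  # P(positive | no disease)

      # Bayes' rule: P(disease | positive test)
      p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
      p_disease_given_positive = sensitivity * prevalence / p_positive
      print(f"P(disease | positive) = {p_disease_given_positive:.2f}")  # ~0.09
      ```

      Even a decent-looking test run on a low-prevalence population yields mostly false positives, which is part of what makes each “this next test may give us the answer” round so seductive.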

  5. Decades ago when I last taught statistics, one of the first things mentioned in the class was: “In order to do things properly, it is necessary that the statisticians work with ‘subject-matter experts’.” The implication being that knowledge of the territory and expertise in statistics were unlikely to reside in one individual or one individual discipline. Putting on my subject-matter expert hat, I should point out that the cited article by Abby Phillip dates from January 28, 2015, and she left the Washington Post in October, 2017 to join CNN. Much has happened since early 2015, or so I am informed. Instead of green coffee bean extract, we have graduated to “America’s Frontline Doctors” with their miracle cure of bleach.

  6. If I heard someone say he was a “naturopathic doctor” on Dr. Oz, I would assume that he was some sort of charlatan. He didn’t say “physician” or “licensed physician” or “MD”. Saying “naturopathic” means some sort of rejection of mainstream modern medicine, so such a person would not necessarily have the endorsement of mainstream modern medicine credentials.

    • I fully agree.
      But you have to understand that the people who are willing to listen to a “naturopathic doctor” in the first place are also likely to believe an MD degree confers outsize credibility.

  7. A comment on the link to the ORBITA study — toward the end it says,

    “The larger question has to be about balancing the long-term benefits of stents with risks of the operation. It does not seem reasonable for a person to risk life and health by submitting to a surgical procedure just for a potential benefit of 21 seconds of exercise time on a standardized treadmill test—or even a hypothesized larger benefit of 50 seconds, which would still only represent a 10% improvement for an average patient in this study. Yet maybe a 5-10% increase is consequential in this case as it could improve quality of life for a patient outside of this artificial setting. Perhaps this small gain in exercise time is associated with the need for less medications, fewer functional limitations or greater mobility. If so, however, one might postulate this gain would have been apparent in assessments of angina burden, and it was not.
    Part of the bigger concern here is that these patients were already doing pretty well on medications—that is, they already had a low symptom frequency before stenting. For example, angina frequency as measured by the Seattle Angina Questionnaire was 63.2 after optimizing medications and before stenting in the treatment group. This roughly translates as “monthly” angina (John Spertus, personal communication). How does a study with a follow-up of just 6 weeks expect to improve an outcome that happens this infrequently? In fact, one of the great debates surrounding ORBITA is that those who discount the trial suggest it enrolled patients who typically do not receive stents in routine practice. Those who believe ORBITA is a game-changer argue that these less symptomatic patients actually make up a large proportion of those receiving stents— and that is why we have such a large problem with their overuse.
    Finally, are stents really being given to patients with stable angina just to improve fitness or to reduce symptoms? Or is there a continued expectation that stents have long-term benefits for patients, despite earlier data from studies like the Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation (COURAGE) study (Boden, 2007)? This would seem to be the key question, in which case the short-term effects, or lack thereof, found in the ORBITA study are largely irrelevant. Other larger trials, such as International Study of Comparative Health Effectiveness With Medical and Invasive Approaches (ISCHEMIA, see: https://clinicaltrials.gov/ct2/show/NCT01471522) . . .”

    My only experience with someone getting a stent was a friend who had one inserted after he was diagnosed with congestive heart failure. His health insurance included paying for an exercise rehab class for patients who had received stents. The combination of the stent and the rehab class worked wonders for him. I doubt that the stent alone would have. So a clinical trial involving a stent without a rehab class sounds not very well thought out to me. (Yes, we are talking about a different diagnosis to begin with, but it seems to me that it is stent-plus-rehab-class that ought to be studied, not just the stent. Or possibly a three-group trial (one with no intervention, one with just the stent, one with stent plus rehab class), or maybe even a four-group trial adding a rehab-class-only arm, although I could see an ethical argument against adding the fourth group.)

  8. I’m a medical student, and evidence-based medicine is part of the curriculum; it covers Type 1 error, power, and the like. I wouldn’t be surprised if a lot of doctors forgot that information, since it can be easier to rely on professional society recommendations, but at least in school we’re taught that part of treating a patient involves looking at the literature to decide on the best treatment. A lot more of that was on defining the question, but it was part of my education. It wouldn’t be enough to make me call myself a research scientist (I’m working on a PhD for that), but it’s supposed to be enough that if I’m presented in the clinic with a question I don’t know the answer to, I can make a solid attempt at evaluating what clinical literature there is on it.

      • Most residents spend a considerable amount of time researching and publishing data on the management of diseases and case studies. So much so that we rarely move outside our circle of MD/DO/PhD peer-reviewed studies. I, for one, would never be exposed to naturopathic studies, as we already consider it “nutrition advice” that has never been tested.

        • Unfortunately most residents spend time “researching” without actually having appropriate training or supervision. (My MD and PhD don’t make me an authority on this topic at all, but I do see quite a lot of resident, and clinical faculty, “research”.)

          As Anonymous said, most of the time medical students get taught about Type 1 errors, sensitivity, specificity, and possibly odds ratios. But I have NEVER had a single medical student, resident, or fellow in my years of practice give me the correct definition of a p-value. Not once. Instead, they’re guided by faculty who believe that a sample size of 20 in a poorly defined population with a noisy measure and a t-test can give a valid answer. And then when residents present their research to the department and get congratulated for their work or insight (rather than being ripped to shreds, along with their mentor, for chasing noise), they graduate thinking that they do understand research.
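          To illustrate the noise-chasing point, here is a minimal simulation sketch (hypothetical numbers, not from any study: a true effect of 0.1 SD, a noisy measure, n = 20 per group). The rare “significant” comparisons overestimate the true effect several-fold:

          ```python
          import numpy as np

          rng = np.random.default_rng(0)
          true_effect, sd, n, sims = 0.1, 1.0, 20, 100_000  # made-up but typical values

          treat = rng.normal(true_effect, sd, (sims, n))
          ctrl = rng.normal(0.0, sd, (sims, n))
          diff = treat.mean(axis=1) - ctrl.mean(axis=1)
          se = np.sqrt(treat.var(axis=1, ddof=1) / n + ctrl.var(axis=1, ddof=1) / n)
          signif = np.abs(diff / se) > 2.0  # roughly the two-sided t-test cutoff

          print(f"proportion significant: {signif.mean():.2f}")  # ~0.06: almost no power
          print(f"mean |estimate| when significant: {np.abs(diff[signif]).mean():.2f}")  # ~0.75 vs. a true 0.1
          ```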

          So I truly think that Andrew has it right that statistical teaching in medical school and residency is part of the problem, not the solution.

        • So I mentioned power, Type 1 error, and the like because that was what was listed in an earlier comment. My undergraduate degree was in engineering focusing on neuro, which covers some AI stuff, and I’m continuing that thread in my PhD (I’m working on an MD/PhD). I’m well aware there are more advanced statistical methods and of some of the critiques of overfitting and p-values, but I think it’s dangerously short-sighted to discount everything an MD has to say on an active research issue in clinical medicine just because you have “no idea what’s being taught in medical school and residency.” That would be as ridiculous as my discounting the paper you just posted because your field is political and social science, not clinical medicine. When I go to journal clubs with either MDs or PhDs they tend to emphasize different things; both inputs are valuable. I could just as easily argue that what you’re dismissing as “relevant background information” is just as essential to evaluating clinical medicine as anything you want to list from statistics. To assume an MD would be required to have a say or contribute to clinical research would be, as you say, silly, but to assume an MD wouldn’t give you the background to add anything meaningful to a topic in clinical medicine is also silly. There’s a reason most clinical trials require an MD on board: they are trained to have valuable input. Whether or not every individual doctor does is a different point; there are “bad doctors” and “bad scientists.”

          Also, for someone who claims they want ideas evaluated on merit, not background, you haven’t actually responded to my point. To be a good doctor and stay up to date on medical practices and what’s best for patients, doctors should be trained to evaluate clinical literature. It can’t “just be left to the scientists,” because scientists aren’t trained about every meaningful thing that happens in the clinic or human body, and even if they were, doctors would still need to be trained to recognize what’s relevant to their patients. Maybe that should include a stronger foundation in statistics; that’s why the medical curriculum is supposed to change when there are advances in the field. But you aren’t actually arguing anything here; your response is just dismissive. I’m also serious here: seriously disappointed.

        • Anon:

          You write, “it’s dangerously short sighted to discount everything an MD has to say on an active research issue in clinical medicine.”

          I’m not “discounting everything an MD has to say on an active research issue in clinical medicine.” Not at all!

          The purpose of my post is to counter the “not a real doctor” statement, which is when people discount what non-MD’s are saying, on the grounds that they are not doctors.

          Doctors have a huge amount to offer in evaluating clinical research. Of course. Look at my P.P.S.

          If you’re seriously disappointed, the good news is that what you are disappointed in is something I never said!

  9. I think Vinay Prasad’s new book, Malignant, looks like a tour de force. I doubt that anyone but a doctor could have written it. He happens to be a stellar researcher too.

    What it comes down to is that some subsets of the population are superlative thinkers, regardless of their education and training. Tacit learning is implicated.

  10. 1) Medical doctors are trained in statistics.
    2) Medical doctors review and critique articles during their training.
    3) Doctors are the students who were beating other students in classes with curves and were capable of acing the most difficult hard-science classes, much more so than students of other professions. In fact, many students who wanted to enter medicine but couldn’t perform as well entered other professions and got PhDs.
    4) Who do you think you are that you can determine the strengths and weaknesses of doctors?

    • AU:

      Statistics is hard. A doctor can be a good student (your point 3) and have some training (your points 1 and 2), but that’s still not a lot of training or experience. That’s fine. Doctors have expertise in treating patients. They spend some time learning statistics and how to evaluate research, but that’s not enough. As I wrote above, an M.D. is neither a necessary nor a sufficient condition for evaluating research claims. The same goes for a Ph.D. in statistics or anything else.

      To put it another way, lots and lots of people who are not doctors were good students and have some training in statistics and evaluating research articles. The problem with the “not a real doctor” trope is that it implies that a non-doctor with such training does not count.

      As for your point 4, again, I wrote that an M.D. is neither a necessary nor a sufficient condition for evaluating research claims. The same goes for a Ph.D. in statistics or anything else. This is not a determination of the strengths and weaknesses of individual doctors, or Ph.D.’s, or anyone else.

      • Nah, statistics was a joke and still is. I took 300-400 level undergraduate statistics classes and they were easy A’s. Same thing in medical school. It’s all about understanding data and relationships. Plus, as a physician everything I do is backed by peer-reviewed, statistically significant literature. If I don’t practice that style of medicine I’ll lose my license. Physicians are far more qualified to understand data than 99.9% of the population.

        Also, a “Naturopath” is not a physician. It’s a made-up degree full of bologna and a waste of money.

      • I’m guessing that A U meant that 1-4 are the viewpoints of a substantial fraction of the population. I find it plausible for a variant of those impressions to exist in many countries. Anecdotally, medicine, (bio)engineering and law were the good choices for students to list when I finished high school. This was quietly supported by teachers, parents and official annual statistics on wages.

        I would want it to be very easy to argue that a person who spends years studying for a research degree, followed by years working as a researcher, is more likely to be good at research than a person who studied medicine and spends 90% of their time doing things that are not research or even related to it. If a PhD takes 4 years after a master’s degree and we presume an M.D. gathers research experience at 10% the rate of a PhD student, then the M.D. cannot catch up to a freshly minted PhD until retirement age (4 years at one-tenth the pace is about 40 years) and will never catch up to a researcher who continues to spend 10% or more of their time on research.

        Something along those lines should hold for the average M.D. versus the average PhD, but it will not hold for the +5-sigma M.D. versus the bottom-tier PhD. Plus, I don’t think statisticians outnumber medical doctors. If the numbers of people in the tails differ and reporting focuses on success, then this may actually encourage the perceptions mentioned above.

  11. I think unfortunately a lot of medical doctors, by virtue of their overall training path being so challenging, do come out of medical school with an inflated perception of their abilities in foundational areas like statistics.

    I’m not sure they realise that an awful lot of that foundational knowledge was rote learning that skipped past all sorts of exceptions and caveats, and was just designed to get them up to speed for their day-to-day work.

    This is why (IMO) so many medical doctors are prone to overreach when they are discussing big, complex medical and public health issues that hinge on a sophisticated understanding of statistics.

    Obviously this isn’t unique to MDs, but I think they’re particularly difficult to challenge on such questions, because their credentials carry so much weight.

      • There are arrogant (and modest) professionals in all professions.

        The ones I had the biggest problem with were biophysicists, but I’m fairly sure it was just the particular ones I was working with.

        Also, I did work with https://en.wikipedia.org/wiki/Alvan_Feinstein and a number of his students. Alvan encouraged them to think things through for themselves maybe to the point of ignoring “snotty nosed” statisticians. But with time I enjoyed working with his students for a number of years. (Now, I had no idea about Alvan’s tobacco funding or many of his other activities at the time.)

      • Great question. I do think that although there are modest individuals in each of these groups, the average and certainly the 80th percentile are very arrogant.

        What would most interest me, though, is which of these groups is most _undeservedly_ arrogant. I think the average physicist (that would be me, approximately) does know more about a lot of stuff than the average person. The interesting question, to me at least, is whether I think I know more than I actually do, which is presumably subject-specific. And then apply the same question to surgeons and economists. It’ll be a close race.

  12. This is a bizarre post. Lindsey Duncan did not misinterpret or incorrectly evaluate medical research. Faulty statistical analysis did not cause him to peddle supplements using fraudulent claims and credentials. Duncan is a con-artist and a liar, not some true-believer who read a bunch of journal articles he didn’t understand.

    • Sam:

      That’s all fine. My point is that, for these statements, it’s entirely irrelevant whether Duncan is “a real doctor.” Those Surgisphere people were “real doctors” . . . so what? Those Theranos people were not “real doctors” . . . so what? The relevant question is what is being done and what are people claiming, not whether they are “real doctors.”

      • This reminds me of economists accusing each other of being “non-economists.” For insiders, publicly expelling those who would embarrass your credentials is a way to protect your credibility. For outsiders, dissociating charlatans from the credentials allows you to keep trusting the credentials rather than evaluating specific claims critically–the problem of evaluating the truth of a particular statement can be reduced to the problem of evaluating whether or not the speaker’s credentials are real.

        Seems like a lot of M.D.s in here feel under attack too.

  13. I’m a doctor of the medical variety. Epidemiology, biostatistics, and research analysis are taught in both medical school and residency. It’s about 10% of the boards, if I recall correctly. We are trained in this specifically to avoid being misled by bad research, as a lot of our day-to-day practice actually relies on reading research studies to try and figure out how we will handle the odd case of the week or whether to prescribe a new drug that has hit the market. In my residency we had an hour each week of time dedicated to article review and analysis, and that hour required a lot of weekly preparation outside of the classroom. Being a doctor does give us a good deal of ability with regard to evaluating medical research, but being a doctor does not mean a person is incapable of being either incorrect or being a grifter. There’s plenty of corrupt individuals across the board.

    • > We are trained in this specifically to avoid being misled by bad research,

      Unfortunately, the current conception of what qualifies as good and bad research is exactly what gave us a replication crisis and a bunch of other problems.

  14. So, what are some necessary or sufficient conditions? Maybe you could do an entire years-long blog about this subject.

    Of course, no human will ever satisfy truly sufficient qualifications. We could maybe enumerate certain necessary credentials. One might imagine that publishing papers that misinterpret statistical significance, ignore “garden of forking paths” or p-hacking concerns, or casually conflate causation and correlation would be positive evidence against the qualifications of the author. But we don’t seem to be close to that yet.

    More interesting is to say what is necessary and sufficient for any particular analysis. From the outside, there seems to be no bright line between a serious objection and a frivolous one, or a serious conclusion and a frivolous one. Even our basic tools to guard against uncharacteristic random draws are undermined by our publication, promotion, and funding systems, which tend to bring those to the surface.

    • Things I think are necessary to DO good quantitative science in the modern world (with, for all of these, an implicit “or equivalent experience”):

      a class in basic programming
      a class in data structures and algorithms
      a class in software engineering principles
      a semester course in database management and SQL
      calculus through multivariable calc
      combinatorics and basic probability theory
      an advanced probability theory class
      3 semesters of courses involving analyzing real data (could be stats courses or application area courses but should be project oriented)
      a course in scientific or technical writing
      2 years of advanced courses in your application areas (physics, chem, bio, econ, psych, engineering, ecology, whatever…)

      I think this makes it clear that unless they were a math, CS, or science major with a CS minor, or in a data science major or similar, a medical student is unlikely to have had even 1/4 of these qualifications.

      You could reduce this if you are just interested in evaluating research, but not by that much. It’s too easy to be credulous if you haven’t seen what it takes to DO the research.

      • “You could reduce this if you are just interested in evaluating research, but not by that much. It’s too easy to be credulous if you haven’t seen what it takes to DO the research.”

        +1

        Although possibly evaluating research can be taught to some reasonable degree by having students practice evaluating research — but students need to be given carefully thought out guidelines such as “Is the research design appropriate for the research question? Why or why not?” and “Are the measures appropriate for the research question? Why or why not?”, and the teacher needs to emphasize the “Why or why not?”

        • What do you think about my suggested background though? Would you add or remove things?

          I think it should include something on dynamical systems, whether that’s an ODE course, a simulation/computation-based course, or a mixture. Modeling things through time by systems thinking (conservation laws, flows, decision making, rules-based/agent-based models, etc.) is really important and under-appreciated, I think.

        • Pragmatically speaking, I think that what I said above (students need to be given carefully thought out guidelines such as “Is the research design appropriate for the research question? Why or why not?” and “Are the measures appropriate for the research question? Why or why not?”, and the teacher needs to emphasize the “Why or why not?”) is the first priority, since it is basic to all fields of research. In addition, the things you list need to be included/required *as appropriate for the particular area of research that the students are being taught to evaluate*.

      • A while back, I was thinking much along the same lines as your list. Then I despaired, because I don’t see how one could possibly have a university curriculum designed around that model. Granted, you could add an MA and PhD, but while the years might work out, I don’t see how it could possibly work in a practical sense (how would people get accepted into the MA programmes in the first place? All the “application” areas would have to give up the BA, and I doubt two years of advanced courses is enough). Have you thought about practical implementation?

  15. Epidemiology is not emphasized in medical school, although it is taught. The best degree for that is a Master’s or Doctorate in Public Health.

  16. Out of the dozens of my extended family members and friends who are medical doctors, almost none of them would now be able to do even high-school mathematics. That’s just reality.

    And now that we’re in a pandemic, there are constant media quotes from medical doctors that are not only false but impossible. E.g., one claimed that 90% of COVID cases must have been detected, which, because the false-negative rate of tests is at least 30%, is impossible even if you test the entire country’s population once.
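    To spell out the arithmetic on the claim’s own numbers: with a false-negative rate of at least 30%, even testing the whole population once would detect at most 1 − 0.30 = 0.70, i.e., at most 70% of cases, well short of the claimed 90%; testing only part of the population would detect proportionally fewer.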

      • It’s good to realize that you need to review. But it’s also important to think about the quality of what you review. In particular, many high school (and college) math textbooks teach “rote learning” at the expense of developing thinking skills. The latter are, in the long run, the most important skills to have.
