“The psychologists are getting a hard time for doing what they do, whereas people doing real harm to society are happily roaming around like free range chicken”

Shravan Vasishth writes:

At least people like Amy Cuddy are just doing bullshit research that’s harmless (after all, raising your arms up high before an interview is unlikely to hurt society much). But check out this MIT “Professor” explaining the “statistically significant” autism-vaccine “connection”:

She even takes a notorious, fraudulent, and retracted Lancet article as her starting point. The author of that article has been banned from practising medicine in the UK.

I think that the psychologists are getting a hard time for doing what they do, whereas people doing real harm to society are happily roaming around like free range chicken. How come people like her are not being put out of business?

I replied that a “Senior Research Scientist at the MIT Computer Science and Artificial Intelligence Laboratory” is certainly not a professor of biology! I actually think that legitimate news organizations know this sort of thing, hence you’ll only see this sort of person on fringe websites (such as what Shravan linked to above) or hack outlets such as Daily Mail or whatever.

And Shravan wrote:

She also produces a great quote in the interview:

I mean, I’m a computer scientist, so I can go and look at . . . what’s the statistical distributions, and what not.

And what not indeed.

I strongly feel that people like her cause so much harm that they need to be fired; they are on the same level as that Dutch fraudster. How can she open her interview by mentioning that retracted Lancet study by Andrew Wakefield? If she is not aware that that paper has no credibility, she cannot work as a research scientist because she lacks basic research skills. If she knew it was retracted and mentioned it anyway as a credible study, that is fraud and she has to pay a price for that.

On NPR you hear these impassioned mothers calling into SciFri and confidently telling the doctor being interviewed that they will never vaccinate their kids because of all the toxic chemicals in the vaccine. So now we have a new generation of children paying a heavy price for the kind of ignorant misuse of statistics that people like this “research scientist” propagate.

I find it hard to get angry at such people because they’re so clueless, they just don’t know any better. In contrast, many of the people doing bad research who I criticize, I feel like, sure, they’re not the world’s most competent scientists, but they should be able to take criticism that is ultimately technical and incorporate it into their research.

I guess what I’m saying is: if you take someone like this MIT scientist, OK, she’s just a crank, offering opinions on important topics on which she has no expertise. I can’t get mad at her for being a crank, that’s just who she is. I guess I can get mad at her for misleading people about her expertise. But somehow I get more irritated by people who do things wrong in their own field.

Regarding Shravan’s more general point that health misinformation is more important than social-psychology misinformation: Sure, after all these are kids’ lives we’re talking about. But I do think these social psychology studies make a difference too, in that they feed the idea that people are shallow and capricious, that we make all sorts of decisions based on our animal instincts etc. Sure, some of that is true but not to the extent claimed by those clueless researchers.

To erroneously connect fat arms or monthly cycles to political attitudes is to trivialize political attitudes, and I think that’s a mistake, whatever your politics.

50 thoughts on ““The psychologists are getting a hard time for doing what they do, whereas people doing real harm to society are happily roaming around like free range chicken””

  1. This post reminds me of Hanlon’s razor (“Never attribute to malice that which is adequately explained by stupidity”) and the Grey’s Law response (“Sufficiently advanced incompetence is indistinguishable from malice”).

    Interesting to see these different responses to ignorance/cluelessness between the two, and the different perspectives on when we should get really upset about it. On the one hand, which people most “should have known better” (e.g., are some people incapable of knowing better? and do people who keep displaying ignorance after being told they are wrong seem more malicious?); on the other hand, which people do the most damage with their cluelessness.

  2. I think Shravan has a very valid point: the damage potential of scaring people away from vaccines is orders of magnitude higher than that of telling them that fertile women wear red.

    Perhaps social psychology makes a difference, but it’d be hard to claim that its impact is anywhere close to medicine’s. That’s why I would love to see Andrew critique more substantive studies than the silly ones on ESP, red-fertility, girl names, fat arms, etc.

    • Rahul:

      I think my most important contributions are in statistical methods. I would be happy for people to read the work I do regarding application areas that I understand in social science, and then apply these methods of mine to problems in medicine. I’m guessing that this approach will be more effective than me trying to become a medical expert.

    • Indeed, a purely consequentialist assessment favors Vasishth’s perspective over AG’s. The irony here is that one of the animal instincts inherent to humans is that we hate cheaters, so much so that straightforward consequentialist reasoning goes out the window when cheating is in the air. (I wanted to call this tendency “capricious and shallow” to call back to the OP, but in all honesty there are good game-theoretic reasons not to tolerate cheaters…)

  3. Both deserve a hard time! The MIT “scientist” is aiming her nonsense at people who don’t have a lot of time or skills to evaluate scientific arguments. In a way I find this more loathsome than the “fat arms” people, who are mainly scamming fellow academics. (Sure, their stuff gets written up in the popular press, but it’s aimed at getting accolades in academia, while the MIT person is scamming gullible parents.)

  4. Having worked in clinical research, I do think there is a tendency to overrate the importance of research there compared to elsewhere.

    Yes we all get sick and die but we all also like to eat, have a safe/comfortable environment, have fun, etc., etc.

    • But we are not talking about research here; we are talking about an outright loon spreading dangerous misinformation, total lies in fact, that unfortunately people are likely to believe.

      Readers here likely tend to think that people have the ability to evaluate research. Often they do not or will not.

      There are a lot of people spreading lies, recommending insane treatments (bleach enema anyone?) and telling everyone that the CDC and anyone else who recommends vaccines are just in the pay of big pharma.

      Google “Dangers of Vaccines” and see what comes up.

      Spain has had its first case of diphtheria in 29 years, with more children testing positive, because someone believed anti-vaxxers like Seneff, the woman Shravan is talking about. http://www.thelocal.es/20150608/eight-more-children-infected-with-diphtheria

      She and people like her are wildly deluded and very dangerous.

      Also, Andrew mentions the Daily Mail. How many people read it as opposed to Nature?

      • jrkrideau: I actually used to work in a centre for vaccine evaluation so I am aware of vaccine issues.

        But I am going to agree with Shravan below that the problem is first and foremost a widespread lack of understanding of the scientific method, and a gullibility to believe that statistical methods (especially ones that seem very complicated) provide certainty.

        But I don’t agree with him that Andrew should do all the work in the world starting with areas Shravan is currently most interested in. We all have to choose how and where to spend our time wisely.

        As for that MIT researcher, I am not going to spend my time actually looking at what they did. It is legitimate to try to learn from vaccine adverse-event reports, but it is very, very difficult not to be misled by them. The “cure” is better design (including other sources of information), better analysis, and a wider grasp of what that gives us, especially the remaining uncertainty.

        • @Keith

          I think it is unfair to characterize this as Shravan asking “Andrew to work in areas Shravan [is] currently most interested in.”

          It’s more a question of the perceived relative importance of subject areas. E.g., I remember Andrew posting something to the effect that he was sad that one of the smart Math Olympiad guys he was friends with was now doing Wall Street finance.

          In a similar vein, critiquing statistics in Medicine does seem more worthy of Andrew’s talents than silly fertility-color correlations.

        • Hi Keith, I don’t know how you got the idea that I suggested that “Andrew should do all the work in the world starting with areas [I am] currently most interested in” :). I was only pointing out that there are qualitatively similarly superficial studies out there that have an immediate impact on people’s lives, as in lives lost. A look at these (perhaps now and then) would do no harm, and probably a lot of good.

          PS My current interest is the study of language comprehension processes. ;)

        • OK, I did exaggerate. But I do think we all need to decide how we spend our time, and so I don’t like the judgement “would do no harm, and probably do a lot of good” or, more strongly, “critiquing statistics in Medicine does seem more worthy of Andrew’s talents”.

        • Keith, you may not like that judgement but I do. I guess we are entitled to our differing judgements here on this Bayesian blog, right? :)

          I am going to put my money where my mouth is and try to find a future Master’s student to reproduce the Seneff “result” using the methods she describes. She’s essentially doing computational linguistics here. I mean, I can also claim that “I’m a computer scientist, so I can go and look at . . . what’s the statistical distributions, and what not”.

          I am sure an undergrad or grad student in a good compling program can easily do a more sophisticated job than her Entropy article (just the statistical analysis and methods used, I mean). Quite apart from the misinterpretations of her analyses in her paper, I’m curious to see what the statistical models will tell us once we redo her work in a statistically more defensible manner.

          Also on my list is to deal one day with that guy in economics who thinks that case marking or some such thing in a language makes the speakers of that language fiscally more responsible and such like things.

        • > Keith, you may not like that judgement but I do
          I did not mean that, but rather that we all need to use our judgement, and I don’t see problems when judgments differ, especially when they involve what people choose to spend their (very limited) time on!

          From the time others have spent looking into Seneff’s work, it does look very poor indeed.

          (Unfortunately this happens in every discipline to varying degrees, and barring lack of access to data and methods, it’s easy to find.)

          > She’s essentially doing computational linguistics here.
          Does look like that.

          > language makes the speakers of that language
          That was the topic of my undergrad philosophy course in first year. One study I looked at purported to confirm this in a group of children, but when I read it, they were misinterpreting the results, so I was lucky and learned early in my career that many studies are wrong.


        • “I am going to put my money where my mouth is and try to find a future Master’s student to reproduce the Seneff “result” using the methods she describes. She’s essentially doing computational linguistics here.”

          Based on my experience of both peer-reviewed and non-peer-reviewed papers, I’d suggest treating all of the Seneff et al. (2012) references as dubious, wrongly quoted, misunderstood, or not actually read.

          Ditto for generalizations such as we find in the Intro. Statements such as “While it has been suggested that the observed increase in [autism] rates may be due mainly to a change in diagnosis criteria, the actual criteria have changed very little from 1943 to DSM-IV-TR” may be true; I am not really up on DSM classifications, but it sounds wrong.

          It should be an interesting study.

    • Agreed, but I don’t see how the silly research about ESP, red-fertility, girl names, fat arms, etc. helps with the latter set of your goals either.

      Perhaps if we were arguing over funding for a clinical trial versus food-science research into creamier ice-cream I might be convinced.

      • Rahul:

        My examinations of the studies of beauty and sex ratios, fat arms, etc., were helpful in the development of statistical ideas regarding prior information, type S and type M errors, the garden of forking paths, and other ideas with more general application. This is a paradigmatic path for statistics research: we work in depth on particular problems in order to develop general methods. Similarly, Fisher and Yates worked on agricultural experiments but their methods were more broadly useful, and Tukey’s EDA book had some particularly pointless examples (I recall one on January temperatures in Yuma, Arizona) which again allowed him to develop new ideas. I am just one person, and I hope and believe that others will take the ideas I’ve developed and illustrated in specific cases and apply them to other problems.

        • Andrew:

          Sorry, I wasn’t criticizing your examination of the studies. I was criticizing the underlying studies themselves.

          Agricultural experiments are useful. A red=fertility study is hardly so, in my opinion.

  5. I get mad at the media who give a platform to charlatans. Major news organizations have the resources and expertise to distinguish people who know what they are talking about (even when their views are a bit unconventional) from cranks. Or if they don’t, they shouldn’t be in the business of science reporting.

    Yes, in the US, cranks have a first amendment right to spout their crap, but there is no obligation on the part of media to give them an audience.

    There is another adverse effect of the vaccine charlatans. Some states have mandated universal uptake of all vaccines recommended by the CDC’s Advisory Committee on Immunization Practices (ACIP) as a condition of school enrollment. There are legitimate reasons for disagreeing with some of ACIP’s recommendations. For example, some of the diseases covered (e.g., hepatitis B) are not transmitted in school or by activities that are particularly common among school-age children. Those recommendations have a larger public health purpose, but one can question coercing adherence to those purposes through the threat of being barred from school. But people who have serious reservations about some ACIP recommendations usually will not speak publicly about this, for fear of being confused with, or appearing to encourage, the cranks who scare people away from the measles vaccine with demonstrably false claims of autism, or away from immunization generally with false claims of toxin contamination.

    • Hepatitis B is easily transmitted in the activities of school-age children! It’s transmitted by blood (skinned knees) and there is some evidence that it’s even transmitted by saliva. It seems to be much more contagious than other blood-borne illnesses, perhaps owing to the viral load and viability. (HIV can also be present in saliva, but due to the low viral load and viability in that environment, the risk of transmission via saliva is basically zero.)

      Hep B is also transmitted sexually, and much as with HPV, it’s much easier to vaccinate in childhood than to try to deal with the associated pathologies in adulthood.

    • What about the role of the Universities? The ones that harbor such damaging cranks. Do they try to rein them in?

      Doesn’t something like this damage the MIT / Columbia etc. name? Why don’t they fire the most egregious of these quacks who misuse their affiliation in unscientific ways?

      • Because it is not easy to know who the cranks are?

        It is just as easy, easier probably, for a bunch of wild-eyed nutters or mad religious fanatics to organize a firing campaign against an academic as it is to have other people with better reasons do it.

        I included a reference to Respectful Insolence above. http://scienceblogs.com/insolence/ The blogger there has had at least 2-3 attempts to get him fired from his university post.

        Or have a look at the attacks by Cuccinelli against Michael Mann in the climate wars.

        • Fine, but then the media (see Clyde’s post above) has the same defense, does it not? “How do we tell who the cranks are?”

        • No it does not.

          It actually is fairly easy to tell who is talking nonsense in many cases and there is no need to give the ‘obvious’ cranks a lot of airtime. Occasionally the “crank” may be right but not usually when they fly in the face of existing research and theory.

          All one would need to do as a member of the media is point out that Seneff’s hero Wakefield has been struck off for massive ethical and legal problems related to his research, and ask for an explanation. The problem is letting the nutters rabbit on as if they are sane.

          There is no absolute requirement to give equal weight and respect to a nut case and a world expert, which is what the media often do. Giving Michael Mann and Lord Monckton equal billing is like letting Einstein debate physics with Bozo the Clown.

          Have a listen to this http://www.cbc.ca/radio/thecurrent/vaccinations-assisted-suicide-ruling-organized-crime-1.2952333/vaccinations-pro-and-anti-vaxxer-parents-make-their-cases-1.2952339 .

          The interviewer is not exactly friendly to the anti-vaxxer, but she, the anti-vaxxer, is given a fair chance to explain her position, based on her “research”.

          A university with a commitment to freedom of research and free speech is not in the same situation. Censuring or firing a faculty member, even for idiotic statements, is not the same thing, and as I pointed out, it is too easy to subject a university to intense political lobbying by a fringe group for any of a number of reasons.

  6. But I do think these social psychology studies make a difference too, in that they feed the idea that people are shallow and capricious, that we make all sorts of decisions based on our animal instincts etc.

    The rational choice model, which dominates economics and political science, reduces all human behavior to a set of mechanistic calculations devoid of any moral component. The internalization of this model by practitioners in those fields is demonstrated by Andrew being able, without irony or indeed any apparent self-reflection, to make a statement that social psychology promotes a paradigm that typifies individuals as shallow and capricious, as if the fields he practices in do not. Incidentally, the above statement is false without qualification, as Festinger’s theory of cognitive dissonance can be seen as how a subjective Bayesian reconciles evidence (some support for this can be found in “A Bayesian Model of Conditioned Perception,” Alan A. Stocker and Eero P. Simoncelli, Adv Neural Inf Process Syst. 2007: 1409–1416). As a (mostly) frequentist, I tend to think of Bayesians as shallow and capricious, but I hold this opinion with a fully developed sense of irony. In any case, the point is that if Bayesian coherence is isomorphic in some sense to cognitive dissonance, then adopting a Bayesian system of beliefs perforce makes an individual shallow and capricious, since this derives from a social psychological theory.

    It may be that Andrew is referring to “click-bait” psychology studies that have had attention drawn to them through the popular media (Bem’s ESP studies, etc.), rather than to more developed social psychological theories. I’ll just say that this same phenomenon exists in political science and economics also (theories promoted for their novelty; remember chaos theory in economics?). This then becomes more a comment on the failure of the social sciences to adopt scientific standards of evidence, which is to be expected when the dominant figures in these fields are typically praised and promoted for their very failure to do so.


    • The (strongest) foundation of Bayesian methods is Cox’s theorem. If you want to assess the reasonableness of different values for a given quantity, and you want to be consistent with pretty widely accepted requirements of logic and so forth, you have to use the math of Bayesian probability. There’s nothing in that to say that “adopting a Bayesian system of beliefs…derives from a social psychological theory”.

      If I’ve misunderstood and the elision above isn’t what you meant, I’m sorry, but I am confused as to how you think Bayesianism is somehow based on a social psych theory.

      • I don’t think Bayesianism is derived from a social psychology theory (Bayes predates what we think of as social psychology by centuries). I was making a transitive argument:

        the point is that if Bayesian coherence is isomorphic in some sense to cognitive dissonance, then adopting a Bayesian system of beliefs perforce makes an individual shallow and capricious, since this derives from a social psychological theory,

        in reference to Andrew’s comment about shallowness and capriciousness (I realize Andrew is a computational Bayesian rather than a true Bayesian, but still…). Now, as Andrew qualified in response to another comment below, he is only attacking a certain type of social psychology experiment, ones that are apparently designed to display findings counterintuitive to the conventional wisdom with little regard for the standards of statistical and scientific inference (“extraordinary claims require extraordinary evidence”: for example, the existence of ESP). So my comment was a little snarky.

        What wasn’t so snarky (or at least was justifiable snark) was the comment on the underlying motivating assumptions about human behavior in economics and political science, and my chiding Andrew for complaining about the speck in others’ eyes while ignoring the log in his own. Models based on these assumptions are simply false, and false in a bad sense (defined below), not in the sense that all models are wrong but some are useful. In economics, there has been an almost deliberate effort to ignore empirical evidence in order to fit ever more complex mathematical models. In political science, the rational voter model completely misses what has been the underlying dynamic of presidential elections since 1964, that of race. Both of these dominant paradigms of the rational actor/optimizing individual have contributed greatly to the economic/political morass this country is in.

  7. Having made a recent move into computational science (mainly HPC) and working with researchers across varying disciplines, I’ve been made aware of just how deep the issues run (those discussed on this blog and others). Though it isn’t a psych thing exclusively. Sure, Andrew acknowledges this, but the focus on psych on this blog tends toward a minor witch-hunt (fair enough at some level; we know what we know and some things just push our buttons. For me it’s the authoritarian air of expertise displayed by medically trained researchers and the resulting chaos that ensues when they plan a research project). Is psych on the radar because it’s Andrew’s bugbear, or because it’s genuinely in a worse state than other disciplines? I’d say to the contrary: it’s in a good place given its short history and lack of theoretical unification. It’s the sheer volume of psych research that makes it so easy to find the crud. In my opinion psych academics typically have a deeper level of stats/methods training than those in other disciplines. Yet the discipline is very conservative; there is no impetus to change methods. The root of the issue (and of the hegemonic adherence to the status quo) is that there’s no incentive to move to better practice. In this country (NZ) we have the PBRF (publication-based research funding), which encourages quantity over quality.

    Picking apart individual research will serve a limited function in changing practice (i.e., calling people out tends not to be the best strategy to change behaviour). Do we need a domain-general methodology/best practice? Sometimes it seems to me the rise of ‘data science’ has marginalised those of us with discipline-specific PhDs and general math/stat/phil-of-sci interests. Everyone now wants quick and easy methods to success through modern techniques. For the generalists among us, perhaps that’s where we can help stop the rot: a cross-discipline methodology, anyone? Oh, and more philosophy of science for undergrads!

      • For me it was a natural, circular type of progression: from a 1980s BASIC/machine-language programming computer nerd, to math-obsessed teen, to science-obsessed adult, to PhD-obsessed (fool?), and back to computers. HPC is in an interesting place at the moment, a mix of the old and the new; Fortran is still king, but the future looks very bright and shiny. Also, while I know it’s not fashionable in the high-functioning, career-driven, results-obsessed academic world, I’m only interested in the science. I don’t want to do admin, teaching, conference hobnobbing, endless meetings, departmental politics, etc. Just science. Working in a cross-discipline, high-level problem-solving environment is a dream come true.

  8. This MIT computer scientist talking about vaccines shows why researchers should stick to their area of expertise. She may honestly not know the Lancet study was retracted, because she’s not an expert in vaccines and so likely doesn’t know the literature well.

    It would be like Andrew authoritatively commenting on brain surgery because he’s a leading statistician. The two have nothing in common and his comments are likely worse than useless. MIT should have grounds to at least censure this researcher for speaking outside her expertise and potentially bringing the institution into disrepute.

    • If you look at her website there are a number of presentations and “publications” listed. She has been at this for quite a while and knows the buzzwords. If she does not know about Wakefield, it means that she is stone deaf, completely illiterate, and possibly totally blind.

      You may not realise this, but Andrew Wakefield is a persecuted martyr who is bravely fighting the good fight to protect children from the evils of vaccines, especially the MMR (Measles, Mumps, and Rubella) vaccine. Heck, he has groupies.

      Just because he was shown to have accepted money to do ‘certain’ research, engaged in unethical research practices, had his major anti-vaccine paper retracted and was struck off is no reason to doubt his findings!

  9. As someone who was first trained in psychology I have to admit to a certain glee that you’ve turned your attention to problems in the field (not least because it confirms my own sense that something was amiss). But this…

    But I do think these social psychology studies make a difference too, in that they feed the idea that people are shallow and capricious, that we make all sorts of decisions based on our animal instincts etc.

    …sounds pretty extra-scientific to me. It’s reminiscent of backlash against other scientific frameworks, like evolution and heliocentrism, which in retrospect seem to have been grounded in peevish self-importance. I mean, don’t get me wrong, I don’t know of any findings in social psychology that have quite the same grounding as either of those frameworks! (And I was never a social psychologist, so I don’t have a dog in any of their fights.) All I’m saying is that you do yourself a disservice by pointing to such outcomes as the problem. On the contrary, bad research methods are a problem even if papers that use them present findings we like (as the infamous “Bruno” illustrates).

    I also have to say, despite my discontent when I was a psychology graduate student, I have spent time in other research cultures that felt *way* more like trawlers than my PhD lab did; and once I crossed over to statistics the multiplicity issue actually got very little attention. During my MS I did take a great class on reproducible research, but those methods, critical as they are, only really address part of the problem. These days I’m working in a culture that’s a little more disciplined than some, but I think it’s safe to say it’s only regulation that makes it so. All this to say that you’re free to feel more peevish about psychology than other fields, and in that you might have a pretty good cheering section, but I suspect the scope of the problem is not actually larger there than in many other fields. But as my psychology professors used to say, it’s an empirical question!

    • Erin:

      I didn’t write, “I think social psychology studies . . .”; I wrote, “I think these social psychology studies . . .” [emphasis added]. That is, I’m not trying to damn all of social psychology, just the particular studies we’ve been criticizing here.

      P.S. You get extra credit for referring to “Bruno.”

  10. Well, at my place, or shall we say the university that I attended, the academics seem to dwell on this so-called “inclusive” approach to psychology. I don’t actually know how to expound on it, but from the term itself, it hints at multi-disciplinary methods to solve psych-related issues. Just my two cents.

  11. To follow up on numeric’s point above, not only do you see this in economics, but it’s not just the tabloids; you see plenty of otherwise-respectable media promoting garbage and charlatanism. Of course, particularly when some physicist decides to dabble, or when someone cries “neoliberalism” enough times.

  12. Andrew, you say that you don’t know enough about medicine to criticize studies in that area. Fair enough. But what are your credentials in psychology? :)

    In this MIT case and similar superficial “research”, it’s not so much about domain expertise (although that always helps) as about statistical understanding (rudimentary stuff, things even I know). At least in the cases we are discussing, it is possible to scrutinize the methodology of a study without being an expert in the area. For me, the level of expertise needed to understand what this MIT person did is the same as in the red study. But the latter type takes a disproportionate amount of heat. Maybe you, or someone like you, can read the MIT person’s Entropy article (or whichever journal it came out in) and write a Slate article about it. I think this is the article:

    Stephanie Seneff, Robert M. Davidson, and Jingjing Liu, “Empirical Data Confirm Autism Symptoms Related to Aluminum and Acetaminophen Exposure,” Entropy 2012, 14, 2227-2253; doi:10.3390/e14112227

    Method used: “This paper investigates word frequency patterns in the U.S. CDC Vaccine Adverse Events Reporting System (VAERS) database”

    Andrew, just to get your goat, I’m going to quote the authors’ explanation for the statistical approach used:

    “We used an established method based on log likelihood ratio, as described in [99,100], which provides a p-value associated with the likelihood that the observed distribution bias of the word or phrase could have occurred by chance.”

    And here:

    “We consider it to be a validation of our methods that we detected these features with a statistically significant p-value.”
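
    For concreteness, here is a minimal sketch (with entirely made-up counts) of the kind of Dunning-style log-likelihood ratio test that description suggests: compare a word’s frequency in one set of reports against another and read a p-value off a chi-square(1) reference distribution. Note that a “significant” G² says only that the word’s frequency differs between the two report sets, nothing about why.

        from math import log
        from scipy.stats import chi2

        def g2(k1, n1, k2, n2):
            """Dunning log-likelihood ratio for one word: k_i occurrences
            out of n_i total tokens in report set i."""
            def term(obs, exp):
                # observed * log(observed/expected); 0 * log(0) taken as 0
                return obs * log(obs / exp) if obs > 0 else 0.0
            e1 = n1 * (k1 + k2) / (n1 + n2)  # expected count in set 1 under H0
            e2 = n2 * (k1 + k2) / (n1 + n2)  # expected count in set 2 under H0
            return 2.0 * (term(k1, e1) + term(k2, e2)
                          + term(n1 - k1, n1 - e1) + term(n2 - k2, n2 - e2))

        # Hypothetical counts: a word appears 120 times in 50,000 tokens of one
        # set of reports vs. 40 times in 60,000 tokens of another.
        stat = g2(120, 50_000, 40, 60_000)
        p = chi2.sf(stat, df=1)  # G^2 is approximately chi-square(1) under H0
        print(f"G^2 = {stat:.1f}, p = {p:.2e}")

    Getting a tiny p-value this way is easy; it is the causal interpretation layered on top that does the damage.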

    • Shravan:

      I don’t have credentials in psychology, but I have collaborated with psychology researchers. And many of the examples I’ve criticized are on topics that I know something about. The first work I criticized in this way was the series of papers on sex ratios, and I happened to have read widely in that area, so I knew that the claimed effects were impossibly large. This is a bit of prior knowledge that I had but that the Freakonomics team (which uncritically promoted that work) did not have. Many of the other papers I’ve criticized involve political science topics such as voting and public opinion, and I am an expert on those. It’s true that I’m no expert on ESP, but I only wrote about that paper after it had received lots of publicity, and it turned out that the problems in that paper were huge and obvious. I agree that medical papers can be bad, and I’d be happy if my work were to inspire people to clean up that field.

    • Are there medical papers appearing in prominent, respectable journals, e.g., Lancet, JAMA, BMJ, etc., that approach the “wrongness”/“silliness” level of the red=fertility paper etc.?

      Any examples? Would love to read.

      PS. What sort of journal is “Entropy”? Respectable?

      • Rahul, not quite “medical” papers, but take a look at any of Walter Willett’s recent slew of papers from his nutritional epi group at Harvard. Here are just a couple (but looking at these tells you pretty much all you need to know, as all of these papers are quite similar, with essentially the same methods and essentially the same, and obvious, problems):

        http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3712342/

        http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4243376/

        These analyses (and all the rest) use the same three large cohorts of health professionals, and they all divide the exposure into quintiles and then run proportional hazards models (adjusting for a bunch of *measured* stuff) for the incidence of whatever outcome they happen to be studying that day. But look at Table 1 in any of these papers, and you’ll notice that all of the other obvious health indicators increase (or decrease) monotonically across these quintiles… that is, people classified into one extreme quintile are always *inherently* healthier than those in the other extreme quintile. Statistical models cannot control for these inherent differences, there are clearly gobs of unmeasured confounding, and there is no way to know which way the net confounding bias would go.
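
        To make the confounding point concrete, here is a toy simulation (entirely made-up data; the lifelines package does the Cox fit): an unmeasured “health consciousness” variable drives both the exposure quintile and the outcome, so a proportional hazards model that adjusts only for the measured covariate still shows a spuriously “protective” exposure effect.

            import numpy as np
            import pandas as pd
            from lifelines import CoxPHFitter

            rng = np.random.default_rng(0)
            n = 5000
            health = rng.normal(size=n)             # unmeasured confounder
            exposure = health + rng.normal(size=n)  # "healthy" food intake
            smoking = (rng.normal(size=n) - 0.5 * health > 0.5).astype(int)  # measured

            # True hazard depends on health and smoking, NOT on the exposure.
            hazard = np.exp(-0.5 * health + 0.7 * smoking)
            event_time = rng.exponential(1.0 / hazard)
            censor_time = rng.exponential(2.0, size=n)
            df = pd.DataFrame({
                "T": np.minimum(event_time, censor_time),
                "E": (event_time <= censor_time).astype(int),
                "quintile": pd.qcut(exposure, 5, labels=False),  # quintiles 0-4
                "smoking": smoking,
            })

            # Adjust for the measured covariate only: the exposure quintile
            # still comes out looking protective (exp(coef) < 1).
            cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
            print(cph.summary[["coef", "exp(coef)", "p"]])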

        Walter Willett is a hack who clearly has the potential to harm (or at least worry) a lot of people!

    • Shravan:
      Entropy also looks a bit dicey. It’s made Beall’s list of dodgy publishers. http://scholarlyoa.com/2014/02/18/chinese-publishner-mdpi-added-to-list-of-questionable-publishers/

      I like the revised photo moving MDPI’s office out of the bakery.

      Unfortunately I doubt that most reporters know Beall’s list exists, and I am sure 99.9% of the general listening or reading public has no idea of the problems with fake/dubious journals.

      • Wow. She has published several times in this journal. I used to think Entsophy, I mean, Anonymous, was just being plain nasty when he put down academics, but seeing an MIT research scientist resort to publishing in these kinds of journals makes me think he has a point. If even a handful of these people do this kind of incompetent “research”, it is no surprise that all academics come under suspicion.

        • Yes, once or even twice is forgivable [1], but repeatedly is really dubious. However, we are not talking about an academic here; we’re talking about a fanatic.

          This is not a slightly dodgy junior academic doing a bit of CV padding; this is a true believer willing to do just about anything to warn the world of the “perils of the MMR vaccine”.

          Unfortunately to the outside world this is not likely apparent. MIT is MIT to most people.

          1. I think Beall said even he published a paper in one of the MDPI journals before he realised what was happening.
