Criticism of bad research: More harm than good?

We’ve had some recent posts (here and here) about the research of Brian Wansink, a Cornell University business professor who’s found fame and fortune from doing empirical research on eating behaviors. It’s come out that four of his recent papers—all of them derived from a single experiment which Wansink himself described as a “failed study which had null results”—were hopelessly flawed. I don’t know anything about the quality of Wansink’s other published work, but given the low quality of four papers which he advertised on his own blog, I’m concerned.

Yesterday we discussed the problems that can arise when doing quantitative empirical work with noisy data in the absence of substantive theory. I argued that the statistics profession is partly to blame for the attitude that many researchers display when they think they can make routine discoveries while at the same time having no theory of what is going on.

Today I want to discuss a slightly different topic, an issue that comes up implicitly whenever we criticize published work.

On statistical or methodological grounds, of course it is appropriate to point out flaws: this is how we as individuals and as a community learn to do better science.

But what about the good of society? Just as some people criticized us for criticizing the work of Ted-talk star and retired Harvard professor Amy Cuddy, on the grounds that Cuddy’s speeches have inspired millions even if her science is not completely sound, similarly one might criticize our criticism of Wansink, on the grounds that his books send positive messages about healthy eating behavior, even if his science is not completely sound.

Is Brian Wansink’s research doing the world more harm than good?

I have no idea. I assume that Wansink genuinely believes that he’s making discoveries and that he’s helping people in his books, videos, interviews, etc.—and the research papers are part of that, for two reasons. First, publication in scientific journals is taken as a badge of quality. Without the peer-reviewed articles, maybe Wansink wouldn’t be on CBS News, etc.; he certainly wouldn’t be getting government grants, and his work wouldn’t be used to make policy. Second, he and others can use the specific claims in his research to make decisions and policy recommendations.

Here are a few scenarios by which Wansink could be doing more good than harm, in spite of all the problems with his research methods:

1. One possibility is that some of Wansink’s work is of high quality. Sure, those four papers are pure noise, and I wouldn’t trust a thing in them—really it’s no better than flipping a coin a few hundred times and spinning stories about the patterns you see—but it could be that he has lots of other papers that are good, and maybe the low-quality recent work is just him trying to keep up his research productivity after having run out of good ideas. Kinda like how some authors will keep coming out with a book every year or two, even after they’ve run out of anything to say. If this is the case, it could be that Wansink is still doing more good than harm, if his influence on policy is based on his earlier, high-quality work.

2. Another possibility is that the work is all noise, but that Wansink and his colleagues have put together noisy, meaningless research results in a way that makes a coherent and scientifically true story. I think this really could be happening. The idea is that Wansink and others, based on a mix of insight and decades of careful qualitative observation and experiment, have come up with a good understanding of why we eat the way we do, and then they use these experiments as a way of filling in the picture. From this point of view, the point of something like the quantitative analysis of the pizza-restaurant experiment was not to learn anything new, but rather to come up with further illustrations of an existing storyline, and also to stimulate new insights. This is similar to the idea that an astrologer or fortune-teller might actually be able to give good advice, with the tarot cards or tea leaves serving as a pretext or even a stimulus for real insights.

3. A third, slightly different possibility is that the quantitative and the qualitative research are entirely irrelevant, but it’s still all good because the existence of the publications and the big-money consulting is being used to bolster common-sense messages regarding mindful eating and small portion sizes. Wansink’s research could be doing more good than harm in that, by strengthening his reputation and that of his lab, the research-cred enables him to go on TV and sell books encouraging people to eat sensibly.

And of course there are scenarios in which his research does more harm than good:

4. Researcher finds random patterns in noise, uses p-hacking and salesmanship to get published; published work is believed and is translated into policy; people follow bad nutrition advice and their health is harmed or, at the very least, their quality of life is harmed because they feel they should be following some arbitrary rules. Wansink’s recent publications were in obscure places such as the Journal of Sensory Studies, but unfortunately we can’t really ignore them because Wansink apparently really is an influential person in the area of food research, so his noise mining could end up becoming real policy.

5. Even if the effect of the research claims is neutral—thus, not advice that hurts but advice that has no consistent effect—the entire Cornell Food & Brand Lab enterprise could do more harm than good if it takes resources from more worthy endeavors. These resources include government grants, journal space (representing the attention of the scientific community), public attention, and corporate funding. Related to this is the idea that bad research poisons the well of trust in science, making it harder for more careful research to get respect and attention.

I’m not saying these five scenarios are equally likely. Indeed, to say so would miss the point, as I think all of them are true to some extent. It’s hard for me to sum them up because I don’t have a good sense of their relative importance.

Does our criticism of Wansink do more harm than good?

Again, I don’t know. I’ll try to approach the question by considering the five scenarios above.

1. Suppose Wansink’s earlier work was of high quality and that his policy influence comes from that work. In that case, our criticism could be a bad thing, in that by discrediting Wansink, it could reduce the impact of that early work.

2. Suppose Wansink has been, consciously or unconsciously, using his qualitative understanding to build a true picture of the world. In that case our criticism should help him, and should help his research field, by making him aware that this is his method, and pushing him toward more effective use of his qualitative data.

3. Suppose the research is irrelevant but has been used in a good way to bolster solid, common-sense advice about eating. In that case our criticism could be bad in the short term (in discrediting this particular carrier of the healthy-eating gospel) but I hope it would be positive in the longer term, by motivating people in this field to do some serious quantitative research on the topic.

4. Suppose this low-quality published research is actually doing harm, that people are making decisions based on exaggerated claims coming out of noise. In that case our criticism should be helpful in warning people away from the bad work.

5. Suppose the main effect of Wansink’s work is to suck the oxygen away from serious research into eating behaviors. Then I’d hope our criticism does good (by discouraging journals from publishing this work and discouraging news organizations from promoting it), but I could see how it could do harm, by discrediting scientific investigation more generally and thus dissuading policymakers from making use even of high-quality empirical work.

P.S. The concern is not just theoretical. For example, I see that Brian Wansink is delivering the Dr. Robert C. and Veronica Atkins Foundation Curriculum in Metabolic Disease Lecture next week at Cornell Medical College. Perhaps he’ll tell the doctors there about ways in which his pizza restaurant research is relevant to metabolic disease. Since this is such a select audience, maybe he’ll reveal to them his Plan A that he’s been keeping secret all this time!

66 thoughts on “Criticism of bad research: More harm than good?”

  1. Worthy ideas. But they all are a form of the ends justifying the means. I don’t think we should evaluate research practice on the basis of whether the world is better off or not as a result of our research. For example, I have often been an expert witness – I strive to do good work, but I have been hired by a party to an issue. In the grand scheme of things, I can’t say whether the world is better off or not due to my work – winning a case may or may not improve the world. Doing bad work that helps win a case that improves the world does not, in my mind, in any way justify or improve the status of the work I did. At the same time, doing good work on a case that is lost (or won) and results in making the world a worse place in no way undermines the quality of my work.

    To hold evaluation of research to the standard of whether or not the world is improved is to abandon responsibility for the quality of that work – and, at the same time, make the evaluation of that work subject to exogenous (and perhaps random) factors. Research then becomes merely an extension of whatever policy view is the right one.

    If Wansink’s work helps people to eat better, that in no way improves the quality of his research. Nor would his good research be tainted if it was used by a food processing company to mislead people into eating worse diets. Wansink has a personal responsibility, I think, to consider how his work is to be used. I don’t think asking the research community to consider this when evaluating the quality of his work is appropriate, however.

  2. Every column inch filled with low-grade research is a column inch that is not filled with better research. Every appearance by a “superstar scientist” (who gets the grad student to do all the work, and then shows up and talks about it as if they did it all themselves) on Good Morning America is a missed opportunity to have a better scientist on there.

    There is also the problem that people don’t read the articles; they don’t even pay attention to the report in the newspaper or on the TV shows. They hear a few buzzwords and string their own meaning together. Then these stories take on lives of their own.

    I have spoken about Wansink’s “small plate” stuff to a few people in the past week. Two of them — both with college degrees — literally believe that the mere fact of eating their food from a smaller plate will activate some kind of psychological mechanism that will stop that food turning into accumulated body fat, *** no matter how much they actually eat ***. That’s how they think Wansink’s “small plates” effect works: their psychology is affecting their metabolism.

    Now, of course, Wansink made no such claim, but I think that researchers who put cute-sounding stuff out there cannot entirely absolve themselves of responsibility when (as is inevitable) someone gets the wrong end of the stick. Have a look at the operating manual that came with your car; it has warnings on every single page telling you not to do something insanely dumb, one warning for every feature of the car. Why? Because people will find ways to do dumb stuff, and car manufacturers know that they have a responsibility in this regard (and even if they don’t think they have a moral responsibility, they certainly have one in tort law).

  3. I am a believer that we, as a community, should criticize bad research. However, I also believe that these particular examples (e.g., Dana Carney, Andy Yap, Anna Dreber, Eva Ranehill, Michael LaCour) all share the same confounding variable: they are all in the world of academe. I truly believe that, without addressing in each and every critical conversation we have the elephant in the room of “publish or perish”—the standard that academic institutions have set up as a synonym for career trajectory or success—we are doing the larger movement a disservice. In academic research conducted with the express goal of peer-reviewed publication (and, nowadays, mainstream media coverage), the paradigm of “publish or perish” needs to be addressed and changed in order for work quality to improve as a whole.

      • Shravan:

        I took a look at this paper and I disagree with their claim that “normal science” (in the sense of Kuhn’s “Structure of Scientific Revolutions”) corresponds to “low impact papers.” Whether impact is measured by citation count or by actual scientific impact, either way, normal science can be big impact. Indeed, the attitude that a paper has to be “revolutionary” to be important is itself, I think, a big part of our problem.

        That’s one reason, for example, we get serious journals publishing evidence-free claims such as, “That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications.” Such a claim really would be revolutionary, if true.

        Similarly, it could be interesting to see what papers were rejected by Journal of Personality and Social Psychology so they could have a space for Daryl Bem’s notorious article on ESP. Again, that one would’ve been revolutionary if it were actually correct.

        • Gravitational waves detected — 1505 Citations… clearly low impact. A gazillion authors so, individually, not even sure how that counts.

        • The causal claims presented in “Quantity and/or Quality? The Importance of Publishing Many Papers” are “interesting” given the current topic of this thread: the design and statistics are very poor, and there is absolutely no theory to explain the noisy correlations between the arbitrary productivity classes and the supposed gain in “quality” from more highly cited papers. Nevertheless, big claims are made, media attention has been achieved, and potential policy changes are already on the agenda! In fact, a lot of confounding factors are at play here, including authorship (whatever that means these days), cumulative effects, etc. Such factors provide you with citations, although none of them reflect any special intrinsic research quality linked to a paper and its claims. Citations are first and foremost a network measure of use, and they have a very special characteristic of “reinforcing the already established.” As such, citations and citation networks reflect “normal science”; truly revolutionary research, breakthroughs, or whatever will most often appear on the fringe of a citation network, certainly not in the core set of highly cited papers. That status will most likely, if at all, only be achieved after many years.

  4. @Dale Lehman: “But they all are a form of the ends justifying the means.”

    That cliche is unhelpful. If the end does not justify the means — what does justify the means?

    The end, of course, does indeed justify the means. The confusion arises because one usually neglects to fully define the end being sought.

    • If the end is bringing democracy to a country, but the means is bombing the country back to the stone age, killing 100,000 people or so, and causing a civil war that forces millions of people to flee, then I think it can safely be said that the end doesn’t justify the means.

  5. The scientific method is a pillar of our civilization. Passing off non-science as science undermines this pillar. It is one of the most destructive acts against your own nation, culture, civilization, and species one can take.

  6. I think it’s easier to discuss whether this specific research project and the associated papers did more harm than good, as opposed to looking at Wansink’s entire research program. This is because (as you say above) we have no good way of judging such a complicated mess of things, and because it wouldn’t provide good practical guidance even if we could come to an answer. Talking seriously about the moral implications of crappy research is complicated enough without simultaneously asking whether or not a research program is ‘on balance’ good. Wansink’s research is probably on balance good if he donates 80% of his paycheck to buy bed nets for people in malaria-prone regions, but that doesn’t change the fact that his crappy research does a lot of harm.

  7. Andrew:

    Besides your points, in my opinion, if there’s any big loss to society, it’s probably the opportunity cost: e.g. you could have been criticizing some mega trial on malaria or cardiac stents where a flaw can be more insidious and more damaging. And the collective intelligence of your commentators, I’m sure, would bring out good issues there too.

    Himmicanes, Wansink etc. are low hanging fruit. Most intelligent people are skeptical of this crap or pretend to believe in it because it suits some other agenda of theirs (e.g. more eyeballs for my pop sci article)

    The real meaty stuff lies elsewhere.

    • Rahul:

      This one’s a tough call.

      Let me put it to you this way. There’s a good reason we eat the low-hanging fruit first: these are the fruits that are the easiest to grab.

      I can criticize himmicanes, Wansink, fat-arms-and-voting, that horrible paper about air pollution in China, etc., with little effort because the work is non-technical, and this allows me to move more quickly into the general statistical issues such as forking paths and hierarchical models. I see my contributions in research criticism as almost entirely about these general points. It’s not so important to me to talk people out of believing in astrology, power pose, etc. The only exception is in political science, where I’ve thought about the issues more deeply and I do have concerns with crap science used to justify cynical beliefs about voters (that elections are determined by shark attacks and football games, or that attitudes can be easily manipulated using subliminal images, etc.).

      I agree that it would be a useful contribution for someone to criticize public health research using the same principles that I’ve used to criticize junk social science. I hope that by presenting my work clearly and by opening it up to discussion (as here), this will motivate these others to do the job for public health research.

      • I appreciate your criticism of a wide range of work, but I do think it is especially helpful to see your commentary (and even re-analysis) of some of seemingly very credible cases outside of social-psych-style work. For example, I’m thinking of Case & Deaton on life expectancy, Chen et al. on air pollution in China, Heckman et al. on early childhood education, etc.

        • Dean:

          The frustrating thing about all three of these cases is that the authors of the papers in question refused to engage seriously with the criticisms. Case and Deaton gave hand-wavy explanations of why they didn’t adjust for age or sex, never acknowledging that these adjustments are standard practice. I’ve never heard that Chen et al. acknowledged the criticism in any way, even though they made a particularly embarrassing error. Indeed, last I checked, that paper was still prominently featured on the webpage of one of the authors. And I’m not aware that Gertler et al. ever addressed the problem that they were using a biased estimate, even after I sent the author of that paper a cordial email which I’ve been assured did reach the recipient.

          It’s almost as if they’re more concerned with public relations than science.

          Or, to put it more generously, they seem to feel that it would be a bad thing for them to admit any gap in the perceived invulnerability of their research.

        • I can see how that is frustrating. But there can still be big benefits: I think those cases also resulted in substantial revisions to the beliefs of other social scientists — both about those specific claims, but also about the credibility of work in those areas by people generally known for their rigor. I guess this is a point about diminishing returns from criticism of an area: Almost anyone who is listening is now convinced there are serious problems in the small-n social psych literature.

        • @Dean: I agree that there *can* be benefits of the sort you mention — but I think it remains to be seen how many social scientists have had substantial revisions to their beliefs.

        • Martha:

          The target audience is not Case and Deaton, Chen et al., or Gertler et al. so much as the thousands of researchers out there who don’t have personal or professional ties to the work being questioned. I don’t have data on aggregate changes in beliefs but I agree with Dean that it does seem that attitudes have changed. The existence of some Susan Fiske-like die-hards doesn’t change this.

          I do think there’s more work to be done, though. As Dean says, at this point “almost anyone who is listening” knows to distrust papers on himmicanes, shark attacks, etc. But my impression is that most leading researchers in applied economics still don’t understand the point about Type M and Type S errors, the idea that published estimates are strongly biased and likely to go in the wrong direction. So I think we need more communication on that point.

      • > this will motivate these others to do the job for public health research.
        It _should_, and with their knowledge of the specific field, it would be more profitable if they did it.

  8. Spillovers. Just because Wansink’s research might have net positive social benefits from Wansink’s agenda doesn’t mean it has aggregate positive social benefits. Every bad study makes the next bad study by someone else more plausible, since this new study can now cite Wansink. If Ted Kaczynski murders people to get publicity for his notion that technological progress is oversold, it doesn’t make it OK even if he’s right, not even if his position, once adopted, saves 50 people who would otherwise have been dead versus his three deaths and 23 maimings. First, the murders and maimings themselves are wrong. But second, you encourage the next wackjob to do the same thing once we acknowledge that there are net positive benefits, and there is absolutely no reason to expect the spillovers to be positive.

    Your astrologer example is pretty good, actually. Suppose I have the ability to help people with stress, but lack the patience to get psychotherapy credentials. So I hang out a shingle as a Tarot reader. And in fact I help people. Net positive? Not if it encourages people to see Tarot readers generally.

  9. Is it always necessary to begin criticism of science at the detail level of individual bad experimental practice? Power pose always looked to me like a self-help seminar for women (at least those lacking confidence). Cuddy even says that. Bringing science into it was part of the confidence-building effort – we are confident in the results because we prove it with science! So science, instead of being a tool for exploration and discovery, was rendered an instrument of storytelling. If you watch the TED talk and cut out the clips “I am a scientist,” “Science proves it,” etc., you can still be left with a compelling personal story that inspires others. The science was primarily a means to an end (or so it seemed). Not just storytelling, but the happy ending we want.
    In general, the criticisms of faulty scientific practice at the detail level remind me a lot of the fact checking that “good” political journalists perform. It’s after the fact “whack-a-mole”. All the right intentions, based on sound reasons, attention to necessary detail, and on point in every way. What could be wrong with that? Unfortunately, bad actors know how to take advantage of the shortcomings in that approach. Bad actors, even when not fraudulent, are the first to brush back criticism.

    • Chris:

      I think you’re pointing out item 3 in my first list above, and your comment echoes items 2 and 3 in my second list. I agree completely that Cuddy should’ve just given her Ted talk without claiming her ideas were supported by her scientific experiments. The tricky thing, though, is that perhaps without those published “p less than .05” results, she wouldn’t’ve been invited to give the Ted talk in the first place.

    • In the early waves of feminism, there was a theme that reforms such as giving women the vote would result in a higher moral caliber of society. Regrettably, this seems to have given way to emphasis on “confidence” and “power”, in the process ignoring the ethical considerations of science.

  10. I can speak to scenario 1. I’ve taken a look at some of Wansink’s earlier work (primarily his most highly cited papers), and quickly noticed problems in those as well. Someone could probably make a career of just going through all of his papers and tallying the errors. Unfortunately I’m a computational biologist and have better things to do.

    When considering if he is doing more harm than good it is important to not only think of the present, but also of the future. Bad science is like a virus, it propagates. Anyone who trains in Wansink’s lab will think that they are performing science the correct way (and why shouldn’t they? look at all the media attention they get), and these trainees will go on to be the head of research groups and then pass on their ways to yet more researchers. This, combined with an incentive structure that handicaps people performing careful science, will lead to most researchers performing bad science. It is hard to imagine society benefiting if every researcher is just throwing ideas against the wall, seeing which one can pass peer review, and then never confirming if what they published is actually even true.

    Even a broken clock is correct twice a day, and it is possible a bad researcher after 25 years could stumble upon something profound, but I just don’t believe any of the findings. Take the small plate thing for example. I don’t even use plates, I just eat my food out of a giant container and stop when I’m full, and yet I have a low BMI. Although I must admit I eat fairly healthy, shout out to Blue Apron. If someone could simply get people to cook healthy meals for themselves instead of eating take out they would do far more than Wansink has done for society in 25 years, and they wouldn’t have wasted any grant money, researchers’ time, or passed along bad science practices in the process.

  11. That’s a circular response. Ends are means in a different framework. Arguing that “ends of course justify means; it’s just that ends have to be defined” skips the more interesting question of why you selected a given framework. Simply defining an “end” is no substitute for moral reasoning.

  12. When I go down to my local store to buy apples I am willing to accept the risk that some very small proportion of the apples that are on sale are rotten or may make me ill, but if that proportion rises to a certain level I may stop buying apples from that store entirely and may even tell my friends to stop buying apples. Some researchers are not only making people ill but are, through their dishonesty, persuading the public that apples in general are not worth buying.

  13. “I see that Brian Wansink is delivering the Dr. Robert C. and Veronica Atkins Foundation Curriculum in Metabolic Disease Lecture next week at Cornell Medical College.”

    Not really that strange given that MDs are famous for knowing nothing about food, partly due to the way medical school post-degree hours are structured.

    • Paul:

      It’s not so strange at all, given that Wansink is a respected researcher, with hundreds of publications and, according to Google, over 20,000 citations.

      Or, very strange, given that Wansink recently published four different papers on a “failed study which had null results” (in his words), and these papers had over 150 errors among them, and nowhere in those publications did he ever mention the “null results” that motivated the papers.

      He’s either a serious researcher, or a complete fraud. Or both!

  14. One other reason I consider is that there are true empirical observations about the world, but they are too noisy to be extracted cleanly by a formal model. And while our brains have very high uncertainty, they are often able to filter out noise in a uniquely different way than formalized statistical models do.

    Unfortunately, lots (most) journals aren’t interested in ’empirical estimation by brain,’ as a result the researchers (sometimes smart, sometimes stupid) then try to map what they ‘already know’ to a contrived experiment with a p-value. After all, I suspect most research on cognitive biases started out as ‘common sense’, and was then formalized and measured for good form.

    • > [brains] able to filter out noise in a uniquely different way than formalized statistical models.
      I agree, and I actually believe brains do better when uncertainties are greater, but I am not aware of good ways to justify this [abduction/hypothesis generation].

      So I am disagreeing with Michael Lew that unmotivated pattern searching can be a profitable way to start research.

      (CS Peirce was only able to come up with either we evolved to be good at this or we were mysteriously made that way.)

      • It’s nearly impossible to justify, but we also sorta know it’s true. Unmotivated pattern searching is profitable, since it’s what we all do by design. The difference is how well we train our ability to filter out ‘true’ patterns from the inherent noise of reality.

        Andrew Gelman isn’t so well regarded because he’s really good at strict Bayesian statistics (of course, he is), but because, in addition to this, he has a well-trained, and preternatural, ability to map the patterns he observes to what is likely to be a true scientific (replicable, consistent) pattern, whether it’s in medicine, psychology, political science, etc.

        I think instead of taking a boolean approach towards whether unmotivated pattern matching is justified or not, we should instead realize it’s a trained skill that some humans are good at, some are bad, and most have to keep training to improve. It’s the prerequisite to all scientific research.

  15. Interesting thoughts and discussion. I have a similar issue over criticising Khan Academy videos. I guess they do some good, but they also do harm.
    I am currently helping a student critique statistics in Educational Psychology papers. Some of the things they do seem unsupportable. But who has the time to point out all these issues?

    • > Some of the things they do seem unsupportable
      Some of the things anyone does are unsupportable and it needs to be brought to light by someone (of course we all need to choose to spend our efforts wisely).

  16. As Dale Lehman says, bad research is bad research, period; and as Jonathan (another one) says, it creates spillover: “Every bad study makes the next bad study by someone else more plausible, since this new study can now cite Wansink.” Or as Jordan Anaya comments: “Bad science is like a virus, it propagates.”

    Adding to the spillover/virus concept: When bad research gets touted widely and loudly as a revolutionary finding, it sends the wrong message about what research looks like. Andrew observes: “Indeed, the attitude that a paper has to be ‘revolutionary’ to be important is itself, I think, a big part of our problem.” People come to expect that science will have some amazing effect on their personal lives. A market grows around that illusion and expectation. Bad research feeds the illusion, the expectation, and the market. So, beyond being inherently bad, it has bad effects on general public understanding of science.

    A case in point: Jane E. Brody’s problematic piece “The Right Way to Say ‘I’m Sorry'” concludes in a typical manner:

    “Beverly Engel, the author of ‘The Power of Apology,’ relates how her life was changed by a sincere, effective apology from her mother for years of emotional abuse. ‘Almost like magic,’ she wrote, ‘apology has the power to repair harm, mend relationships, soothe wounds and heal broken hearts. An apology actually affects the bodily functions of the person receiving it — blood pressure decreases, heart rate slows and breathing becomes steadier.'”

    Brody does not qualify or question Engel’s assertions. Instead, she chooses to end with the notion that apologies are like magic–in terms of their effects on bodily functions–if you do them exactly right.

  17. Another entry in the “criticism of Wansink’s work: more harm than good” tally, an extremely political one (and thus one I do not agree with), is that such criticism erodes general public trust in science.

    It’s not hard to imagine a headline of “Top Columbia mathematician calls modern science funding a complete waste of money” (yes, plenty of facts are intentionally butchered), followed by links to various criticisms of various studies. This will undoubtedly lead some percentage of the population to believe that we would be much better off cutting science funding.

    I’m not saying this should guide the decision process. Rather, I’m agreeing with Dale Lehman’s posting: ideally, science should be motivated by curiosity about how the world actually works, rather than by short- to mid-term consequences. Once we start doing something like a cost-benefit analysis, our eyes quickly come off the prize. I believe that Jennifer Barlett was also suggesting that, unfortunately, we live in an academic system that does not necessarily reward genuine curiosity pushing the limits of knowledge.

    • “Another extremely political (and thus I do not agree with) “criticism of Wansink’s work: more harm than good” tally is deriding general public trust in science. ”

      Luckily, the members of the general public can decide for themselves if, and why, they trust certain science/scientists.

      By providing information that can be checked (like this blog does), members of the public can use logic, reasoning, and common sense (a quality academia and lots of academics seem to have lost) to figure out what information to take seriously.

      The academics who care about the general public can help this process with publicly accessible pre-registration protocols, open data and materials, by not hiding their work behind paywalls, etc.

      Super simple stuff !

      • To be clear, I do think bad work should be publicly criticized.

        That people will take critical comments out of context is unavoidable. A scientist should be driven by trying to uncover knowledge, rather than by concern about what others think of their findings. But we are entering an environment where this ideal is harder and harder to uphold. My only point is that we should not be using short-term cost-benefit analysis to decide how to handle scientific decisions.

        On another point, I completely disagree with you that, using common sense, non-academic members of the public can figure out what information to take seriously. For example, take your litmus test:

        “The academics that care about the general public can help this process by publicly accessible pre-registration protocols, open data and materials, not hiding their work behind pay-walls, etc.”

        While these are good ideas in theory, this is a LOT of work or money on the researcher’s part. There are plenty of reasons not to follow these protocols that have nothing to do with shoddy work. As an example, my two first-author papers are behind paywalls…because I did this work without grant money and did not have $5000 a paper to dole out so that non-academics could read them. Given that I doubt any non-academics really care how to speed up the EMICM algorithm for the interval-censored NPMLE, why would I spend two months of my own salary to allow them to?

        • “For example, take your litmus test:

          “The academics that care about the general public can help this process by publicly accessible pre-registration protocols, open data and materials, not hiding their work behind pay-walls, etc.

          While these are good ideas in theory, this is a LOT of work or money on the researcher’s part.”

          I don’t think I ever said anything about a litmus test, and I certainly did not mean to imply that there are decisively indicative tests for deciding what science to take seriously and why. I think every little bit of information can help in the overall assessment of scientific work; I just gave some possible examples that can be checked and assessed.

          Aside from that, luckily there now exist plenty of opportunities for researchers to post pre-prints of their (to be) published work completely free of charge, which does not take a lot of time.

          “Given that I doubt any non-academics really care how to speed up the EMICM algorithm for the interval-censored NPMLE, why would I spend 2 months of my own salary to allow them to?”

          You mean your salary that the general public/tax payer probably pays for? If that is indeed the case, then perhaps that alone would be reason enough for you to make your work available to the general public. You never know who might be interested in it.

          “Children must be taught how to think, not what to think” – Margaret Mead

        • “You mean your salary that the general public/tax payer probably pays for? If that is indeed the case, then perhaps that alone would be reason enough for you to make your work available to the general public. You never know who might be interested in it.”

          This quote seems to imply I have ripped off the taxpayers by not paying several thousand dollars out of pocket to pay the cost for open access. This is veering off course from the topic, but I cannot help but clarify a few things.

          Why I find this especially annoying:

          1.) While the paper is behind a paywall, the final product is not. The algorithm is freely available on CRAN, bundled up in an easy to use package. The time put into that code is non-trivial.

          2.) Several people have contacted me with questions about how to use the code I made available. I have always responded promptly (within 24 hours) and as helpfully as possible. No one has ever contacted me about how the algorithm actually works (that’s what’s in the paper), though I would be quite happy (and surprised) to share with anyone who did. In general, the idea of “hiding behind a paywall” is misguided; your strongest critics are usually the ones who already have access to the work (through their university, etc.)!

          3.) Most relevant to why I find this accusation irritating: I have never been paid a penny for any of the work that went into said paper. This is work I did in the off-hours to further my own career (none of my bosses were even aware of this work). I was quite literally doing work for free. The suggestion that I should have to pay to share the work I have done for free is cruel. This may not be the situation most researchers are in, but it was mine.

          4.) To suggest that someone like a grad student, postdoc, or lecturer should pay this much from their salary is just nuts, given how much they make. More generally, if you give someone a salary, you cannot demand some of it back because they were successful at work (and got no increase in income because of it). That’s just weird. Yes, some researchers may pay out of their own pockets to make their work open access out of good nature, but expecting this across the board is delusional. Would you charge a barista for supplies if they sold too many coffees at work one day?

          And keep in mind: even if I had paid the several thousand dollars required to make the paper open access, I doubt that even one more person would have read it! I would love to be proven wrong on that account.

        • I think we’re not communicating very well, as I believe you are misinterpreting things I am saying, and you write things that don’t make any sense to me. Luckily, the reader can read everything and make up their own mind. Thank you for sharing your thoughts!

        • I don’t think I ever said anything about “ripping off the taxpayer” or “a grad student, postdoc, or lecturer paying this much from their salary”. Apparently, I am not being clear. Here is a final attempt at being clear:

          If we assume, and agree, that in a lot of cases the general public/taxpayer pays for scientists to do their work, it could be considered appropriate for the general public to have access to this work. Please note that I used “perhaps” in my original comment about this, and I now use the word “could” (to try to indicate that not everyone necessarily agrees with this, and that this represents a certain view of science/scientists in general). I have tried to make this general point clear by talking about “sharing” work (which does not necessarily mean that scientists need to pay for open access out of their own pockets), and by talking about one possibility to do this free of charge: pre-prints.

        • Anonymous:

          Fair enough, completely agree with the last statement.

          My only complaint about the earlier comments is that it sounded as though you were suggesting that scientists are intentionally hiding behind a paywall. I think it’s safe to say that this is a big mix-up about how the academic world works. Every scientist I’ve ever met (myself included) wants as many people to read their papers as possible. If those readers are outside the academic environment, that’s even cooler! We are very unhappy about the fact that not everyone can review our work (I believe it’s safe to say that the grumbling about paywalls started inside academia)…but many of us may not be willing to miss out on paying rent to allow others to read our work.

          Be upset about paywalls; I’m totally in agreement there. But the blame doesn’t belong to the people producing the work. We don’t get any money for having our papers behind a paywall…but we have to pay a whole lot of money to be on the other side. We’re pissed off about it too, and actively trying to do something about it. For example, see the Journal of Statistical Software, where one of Andrew’s recent papers was published.

        • I (mis)took Anonymous’s comment as a response to Cliff not wanting to pay it out of his salary:

          “why would I spend 2 months of my own salary to allow them to?”

          You mean your salary that the general public/tax payer probably pays for?…

    • > Once we start doing something like a cost-benefit analysis, our eyes quickly fall off the prize.
      Sure, a misguided or uninformed cost-benefit analysis will lead one astray. But science has to be communal, and its only real outputs are tentative findings (to be further re-worked), so it’s critical that the communal process generating these tentative findings is made, and maintained, profitable (in terms of accelerating our getting less wrong about reality, to facilitate more actions that are not frustrated by reality).

      “The theory here given rests on the supposition that the object of the investigation is the ascertainment of truth. When the investigation is made for the purpose of attaining personal distinction, the economics of the problem are entirely different. But that seems to be well enough understood by those engaged in that sort of investigation.” -Peirce, C. S. (1879). Note on the theory of the economy of research. Report of the Superintendent of the United States Coast Survey Report, 197–201.

  18. Failing to criticise work, of any kind, because “its heart is in the right place” will lead to work whose only redeeming feature is that it is consistent with the beliefs of the gatekeepers. Its flaws will be readily apparent to those who do not agree with it, and many of those on the fence will think, “if these are the reasons for X, then X must be false.” I invite the reader to see if they can find examples of this sort of reaction in today’s politics. I myself believe that many of my political and religious views, forty years on, are a reaction against badly argued articles in the campaigning newspaper my father bought, and the very boring and very stupid sermons that I was forced to sit through every Sunday.

  19. Andrew you should be careful with the kinds of things you say. You didn’t read any of the four papers… and you make strong claims about their validity. Please, watch out…

  20. I really do think Brian is a serious guy, doing serious research. He is being super professional about the criticisms:

    see this last comment on his blog:

    ‘Dear Blake,

    Thank you for your very well thought out and thorough reply. I like your three-point path, and thank you for including the Moran (2002) reference. For these field studies that are looking to confirm either a finding from the lab or to suggest a new relationship, I think your second and third suggestions are great.

    In addition to doing the types of test you mentioned I think there’s other key things that can be done. When visiting with a Stats PhD earlier today he also suggested being very explicit about whether a conclusion was based on an a priori hypothesis, or based on exploration. I’ve also met with my Lab and with these big studies like this, we’re going to start registering them on (which I just learned about). (There’s some other SOP “lessons learned” I mentioned up in my addendum post.)”

    • Jack:

      I do think this response from Wansink is good. As I wrote in one of my other comments on this thread, preregistration will create new problems for Wansink in the future, as it will become much more difficult for him to find statistically significant results if he starts preregistering everything. So I think it would be a good idea to take the next step and recognize that the problems in his papers are not just about “not being explicit” but that he’s had a workflow which is based on finding patterns in noise.

      But, yes, I agree that the response is a good start, much better than we’ve seen from Tol, Bargh, Baumeister, Cuddy, Fiske, etc.

  21. Wansink is simply the victim of a campaign against his nutritional stances that oppose those being promoted by the Laura and John Arnold Foundation. Same problems arose with Tina Weinz and the BMJ. Nutrition has become politicized. This has got nothing to do with replication, or value, IT has to do with competing interests and visions for the future of US nutrition programs.

    • OK, I was being nice up to now… but if Wansink plays that stupid victim card, I will take back everything good I said and personally make sure his papers are retracted.

    • This comment by Anonymous seems to imply that the Laura and John Arnold Foundation are in some way bankrolling the research that Tim van der Zee, Jordan Anaya, and I have been doing here. I would like to assure Anonymous that, just as I had never heard Brian Wansink’s name until I saw his blog post in mid-December 2015, I had never heard of the Laura and John Arnold Foundation until someone pointed me to an article about John Arnold in Wired, a couple of days before our preprint went live.

      I am fairly confident that neither of my co-authors knew much about this foundation either, and even more confident that, like me, they have not received a penny from it, or indeed even an inquiry as to whether we might like to receive one or more pennies from it. The conflict of interest declaration in our preprint is complete and comprehensive. We have very little interest in US nutrition programs, and indeed only one of us is an American citizen.

  22. Having had a former Cornell business school professor at another school in a graduate program, I’m thoroughly convinced that Cornell’s business school is nothing more than a glorified kindergarten. That man was the least intelligent, laziest person I’ve ever had the displeasure of meeting.
