“News Release from the JAMA Network”

A couple of people pointed me to this:

Here’s the Notice of Retraction:

On May 8, 2018, notices of Expression of Concern were published regarding articles published in JAMA and the JAMA Network journals that included Brian Wansink, PhD, as author. At that time, Cornell University was contacted and was requested to conduct an independent evaluation of the articles to determine whether the results are valid.

Cornell University has notified JAMA that based on its investigation they are unable to provide assurances regarding the scientific validity of the 6 studies. Their response states: “We regret that, because we do not have access to the original data, we cannot assure you that the results of these studies are valid.” Therefore, the 6 articles reporting the results of these studies that were published in JAMA, JAMA Internal Medicine, and JAMA Pediatrics are hereby retracted.

Admirable of Cornell University to bite the bullet on this, and of JAMA to publicize it.

P.S. More here from Retraction Watch, including a couple of ridiculous quotes by Wansink.

23 Comments

  1. Matt Skaggs says:

    And now Wansink has resigned from Cornell. After reading Cornell’s statement throwing him under the bus, I can scarcely believe this is happening. Academic review committees pretty much only generate whitewash; who did Wansink cross that he shouldn’t have crossed?

    • Anonymous says:

      “And now Wansink has resigned from Cornell. After reading Cornell’s statement throwing him under the bus, I can scarcely believe this is happening. Academic review committees pretty much only generate whitewash; who did Wansink cross that he shouldn’t have crossed?”

      I have wondered about this as well.

      Perhaps it’s not that Wansink crossed anybody, but rather that (1) the whole thing got so much attention, and (2) the general public might be “catching on” and/or be more critical and aware of what can go on at universities, so that it was best for Cornell to get rid of him.

      Also see the top “reader picks” among the comments on this NYT piece about Wansink (https://www.nytimes.com/2017/10/23/upshot/the-cookie-crumbles-a-retracted-study-points-to-a-larger-truth.html#commentsContainer), including the following quote:

      “Last, if you’re wondering why Cornell didn’t recognize that the IRB protocol didn’t correspond to the published results, figure that Cornell collected at least $1,000,000 in overhead from those grants. That money paid the salaries of the IRB staff and lots of other things in Wansink’s Department. Cornell didn’t notice because it didn’t want to notice. If Cornell is anything like my university, it measures research by dollars raised, nothing else.”

      • Matt Skaggs says:

        In a typical misconduct scenario, a rival professor challenges the work of another professor with something like this:

        “Professor Wansink committed academic misconduct in his research and scholarship, including misreporting of research data, problematic statistical techniques, failure to properly document and preserve research results, and inappropriate authorship.”

        A review committee is marshalled, they interview a few people, dawdle for a while so interest will die down, and then a member says something like this:

        “Cornell has completed an exhaustive investigation of Professor Wansink. I can say that the investigation found no fraud, no theft, no plagiarism, and no sexual misconduct or Title IX issues. I understand that Cornell and Professor Wansink mutually have decided that Professor Wansink’s research approach and goals differ from the academic expectations of Cornell University…”

        Except that the first quote came from the review committee, and the second came from John Dyson standing up for his colleague.

        • Anonymous says:

          “Except that the first quote came from the review committee, and the second came from John Dyson standing up for his colleague.”

          If I understood things correctly, the following source states that Wansink himself provided that second quote and attributed it to Dyson. The source states that “The statement concludes with a comment Wansink attributes to John Dyson, which we are working on confirming:” and then goes on to give your second quote.

          https://retractionwatch.com/2018/09/20/beleaguered-food-marketing-researcher-brian-wansink-announces-his-retirement-from-cornell/

          Anyway, I now reason Wansink may have just been an amateur. If Wansink had wanted to make his university even more money (if that is indeed what happens with large funding, related overhead costs, and who knows what kind of money transfers), he could have tried to come up with some large-scale “collaborative” (global?) project. Think about the kind of money that would involve! Perhaps trying to set up these types of projects could be a very useful way for un-tenured folks to receive that precious tenure at their universities!

        • Andrew says:

          Matt:

          “No fraud, no theft, no plagiarism, and no sexual misconduct or Title IX issues,” huh? Why not add “no arson” and “no armed robbery” to the list? There are so many offenses that were not committed here! Also, no doping—take that, Lance Armstrong!—no forgery, no driving without a license, no resisting arrest, no insider trading, no shoplifting, no jaywalking, . . .

    • Andrew says:

      Matt:

      In what sense did Cornell’s statement “throw Wansink under the bus”? It seems pretty accurate to me. I don’t think it’s right to say that not whitewashing is equivalent to throwing someone under the bus.

      Or, to put it another way: That bus? Wansink was driving it. Not just driving it. Dude built the bus himself.

      • Matt Skaggs says:

        “In what sense did Cornell’s statement ‘throw Wansink under the bus’?”

        Well, they did shoehorn in “problematic statistical techniques.” ;)

        When the whitewash is on, they are “novel statistical techniques.”

        Your metaphor is more apt, dude got backed over by his own bus. Maybe the fall was inevitable after he tried to stop the backward momentum by stuffing his own graduate student under said bus.

        • Anonymous says:

          “Maybe the fall was inevitable after he tried to stop the backward momentum by stuffing his own graduate student under said bus.”

          I have recently started to wonder whether academia at large is throwing (graduate) students under the bus in general. If I understood things correctly, only a very small percentage of (graduate) students will ever become full professors with steady jobs. If this is correct, are university staff, professors included, playing a part (possibly with good intentions, or at least without much awareness) in kind of messing up (graduate) students’ professional lives?

          I don’t think, if I were a professor, I could look at my students and teach them stuff when I know (or at least assume at this point in time) that the majority of them will never get a full-time, steady job in what I am actually teaching them. As a former psychology/research student myself, I have concluded that my “education” is at least 90-95% useless outside of the very specific niche of being a psychological researcher.

          I am not sure if my reasoning makes sense, but it’s what I have lately started to ponder. I am pretty sure I could never teach psychology, or research in general, at a university because of it.

          Also see: “How academia resembles a drug gang” https://www.researchgate.net/publication/272247082_How_Academia_Resembles_a_Drug_Gang

  2. Jordan Anaya says:

    You might find this newer retraction interesting. Just when you think you’ve seen it all there’s more. This paper was retracted for reporting a variable which the paper itself claims wasn’t collected.
    https://twitter.com/jamesheathers/status/1070174206451703808

    • Anonymous says:

      This may sound weird, but I mean well. I think you might be smart, and could possibly do something more useful and/or enjoyable than following all this psychology and Twitter stuff.

      I myself have recently started to wonder if it’s some sort of addiction (just like checking mobile phones and “social media”). I think I have fallen into the possibly “addictive” side of following all the “discussions” and things like that.

      I have wondered whether it could be more fruitful, for science and for myself personally, to focus attention on “good” science and/or on something completely different.

      I don’t want to sound like I know what’s up, or like I think I know what’s best for something or someone. I just want to share these thoughts in case they may resonate with you, or with anybody reading them, at this point in time.

      • This makes sense; it’s easy to forget that a certain percentage of researchers are very diligent and careful and are striving to be least wrong about the world. Learning how to be like that, and getting better at it, is not a breeze but requires expertise.

        Now, it would be nice to have some estimates of the actual percentages of such researchers in various fields. That would require, or at least be best assessed with, random audits of work done by researchers. Such audits would likely show that published work comprises lower percentages rather than higher (the wow factor).

        But here is the problem: not enough people are (yet?) convinced that things are bad enough to require such “drastic” action. I remember repeatedly being unable to convince researchers that data entry errors were really bad and that things like double data entry are very worthwhile. Repeatedly (e.g., at the University of Toronto, only two of the five or six clinical research institutes did double data entry).

        So documenting how bad things really, really are is necessary. Of course, not everyone has to focus on this.

        • Anonymous says:

          Yes, thank you for the reply, and for saying that documenting how bad things are is necessary. I, at least partly, agree with that!

          It’s just that I wonder whether there is a certain point beyond which the positive effects of doing that become less and less likely, and whether in certain cases they might be unlikely from the start.

          My reply to Mr. Anaya was also personal, in that I sincerely wonder whether he (and perhaps others) could think about what they want to do, and why. I mean this concerning (improving) science, but also very much concerning other things. What if cleaning up other people’s mess gets in the way of building and creating something useful or beautiful…

          This perhaps also fits with your reply and the possible existence, and work, of very diligent and careful researchers. Perhaps there are many, but they are not on Twitter, and they may not be talked about, or their work may not be read. I wonder if continuing to talk on “Twitter” about possibly “bad” science or scientists, in little groups of folks who all reply to, re-tweet, and like each other, is really useful.

          I have been reading a translation of the “Tao Te Ching” lately, and a book by Bruce Lee. In both, or one of them, I read things like “When achievement is completed, fame attained, withdraw oneself. This is the Tao of Heaven” and “Do just enough, and nothing extra”. Perhaps it is also because of my own countless attempts at getting a certain point across (like your example concerning double data entry) that I now wonder whether at a certain point it could be best to just say, write, and do what you want to do, and then walk away…

      • Anoneuoid says:

        I think it is a stage.

        At first it still seems novel and interesting to discover wrong stuff in the “respected” literature. Then you realize it is almost all wrong stuff; what you focused on earlier was just the most egregious examples of standard logical/methodological errors, and the very rare actually correct stuff becomes more interesting.

        • Andrew says:

          Anon:

          Yes. But also interesting is that thousands of scientists are wasting their time, and our tax dollars, on cargo cult science. I think it can be valuable to study the patterns of mistakes, in part because it helps us catch these errors in other papers, in part to understand the entire system which has so many problems, and in part to get ideas for future improvement. Concepts such as “researcher degrees of freedom” and “type M error,” which have been developed in large part in reaction to research scandals and bad work, can be useful more generally.

          Also, not everyone would agree with your statement that “it is almost all wrong stuff.” Actually, I don’t know that I’d agree either! I’m just not really sure what’s out there, and there’s so much selection bias in what we see.

          • Anoneuoid says:

            Reminds me of the Meehl 1967 paper[1] which clearly explains the reversal of scientific logic going on by testing the strawman null model.

            Later on, Meehl said that it was his most popular paper; he got hundreds or thousands of requests for it (sorry, I don’t remember where he mentioned this), and no one had ever explained what was wrong with it. However, it clearly did nothing to stem the flow of flawed research. If anything, the practice became much more popular in the time since.

            So I just don’t have any misconception that identifying and explaining the errors will lead to a more reliable literature.

            To me the starting point is to make it standard to run direct replications (following the published methods as closely as possible, so there should be no excuses). NIH did this a couple of years back for a couple of treatments for spinal cord injury, and only 1 of 12 results replicated.[2]

            Time and again we see results like this: 50-90% of what is published fails to replicate. So retracting 6 Wansink papers because the raw data can’t be found seems pretty token. I mean, more than half (32/50) of the cancer reproducibility studies had to be dropped because it became too expensive even to figure out the methods used, and they needed to count “partial” replications as successes to get a rate over 50%.[3]

            If journals start retracting or putting warnings on tens of thousands of papers at a time I will think there is progress. In the end the number of retracted papers should be in the millions, so I doubt this method of dealing with the problem is going to work.

            [1] http://meehl.umn.edu/sites/g/files/pua1696/f/074theorytestingparadox.pdf
            [2] https://www.sciencedirect.com/journal/experimental-neurology/vol/233/issue/2
            [3] https://www.the-scientist.com/news-opinion/effort-to-reproduce-cancer-studies-scales-down-effort-to-18-papers-64593

          • Martha (Smith) says:

            Andrew said,
            “I think it can be valuable to study the patterns of mistakes, in part because it helps us catch these errors in other papers, in part to understand the entire system which has so many problems, and in part to get ideas for future improvement. Concepts such as “researcher degrees of freedom” and “type M error” which have been developed in large part in reaction to research scandals and bad work, can be useful more generally.”

            I agree.

            Anoneuoid said,
            “more than half (32/50) of the cancer reproducibility studies had to be dropped because it became too expensive to even figure out the methods used, and they need to count “partial” replications as successes to get a rate over 50%.[3]

            If journals start retracting or putting warnings on tens of thousands of papers at a time I will think there is progress.”

            Thanks for the link and summary. Despite your pessimism, I think that the article you cite can be an important part of next steps; in particular, the article needs to be publicized. It does show just how difficult the job is. But it is an important job, and we should continue the push to improve the quality of research studies.

      • Jordan Anaya says:

        Twitter is my only source of science news, and it’s how I met all my collaborators. I guess the question is whether I would have been better off never joining Twitter and just focusing on my own research. I think with enough motivation I could have produced a Nature Genetics type of paper given the data I have access to, but it’s not like I would have cured cancer or something.

        P.S. In college, one of my friends was shocked that I enjoyed wrestling (WWE). He couldn’t understand how one of the top students could be entertained by something so dumb. I’m not sure what the relation between intelligence and the ability to be entertained is.

        • Anonymous says:

          “I’m not sure what the relation between intelligence and ability to be entertained is.”

          I wasn’t trying to suggest there is a relation. I was just questioning whether it could be useful, for you and others (including myself), to think about how and why to spend your energy and time on all this “bad” science and “bad” scientists, but also on things in general.

          Maybe your comments here reminded me of something in myself, and i was just talking to myself. I don’t know. I didn’t mean anything negative by it. I was sincerely wondering things.

          • Jordan Anaya says:

            I think exposing yourself to bad science helps you eliminate it in your own work. As a grad student I didn’t know anything about the replication crisis, p-hacking, etc.; I was just interested in analyzing data and publishing papers. I’m not saying I was p-hacking or selectively reporting, but I didn’t fully appreciate how an entire field could be led down the wrong path by people publishing what people wanted to hear instead of what the data were saying.

            Given how easy it is for scientists to fool themselves and each other, I appreciate open data, code, and reproducible analyses a lot more than before. I used to view posting code as a nuisance: “Who would bother repeating all these analyses?”

            For example, the first paper I wrote in grad school doesn’t have any open code attached:
            https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-014-0078-0

            In computational biology it’s actually a lot more complicated than in psychology to make your work easily reproducible given the size of the initial files, all the intermediate files, the length of the pipeline, computing resources required, and software that needs to be installed.

            So if I wrote that paper now I don’t think it would be much more reproducible, but I would post the pipeline (with code) that I used, which I assume no one would try to run, but who knows.

            To illustrate this point we can look at my popular OncoLnc paper. I posted all the code I used for the analyses:
            https://github.com/OmnesRes/onco_lnc

            I even posted the clinical data in the repository, but didn’t upload all the sequencing files, given their size and GitHub’s 1 GB limit per repository. And besides, the sequencing files were really easy to download from https://tcga-data.nci.nih.gov/tcga/, so why should I bother hosting them?

            Well, that website doesn’t exist anymore, and the files were moved to the GDC, where it’s a little more complicated to download them. And I actually get some emails from people wanting to redo the analyses who ask where to get the sequencing files. I’m not really sure what the best practice is here. If you are analyzing publicly available data, should you be required to host the instance of the dataset that you analyzed when you publish a paper? Depending on the size of the dataset, that could get expensive.
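            One cheap middle ground, sketched below under my own assumptions (this is not something from the OncoLnc repository, and the file names are hypothetical), is to publish a checksum manifest alongside the code: anyone who re-downloads the public data can then at least verify they have byte-for-byte the same files that were analyzed, without the author having to host gigabytes of sequencing data.

```python
# Sketch: build a manifest of file sizes and SHA-256 checksums for a
# data directory, so readers can verify their own downloads match the
# files used in the published analysis.
import hashlib
import json
from pathlib import Path


def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large files don't fill memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def write_manifest(data_dir, manifest_path="manifest.json"):
    """Record name, size, and SHA-256 for every file under data_dir."""
    manifest = {
        p.name: {"bytes": p.stat().st_size, "sha256": sha256_of(p)}
        for p in sorted(Path(data_dir).iterdir())
        if p.is_file()
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest
```

            Verification is then just re-hashing the downloaded files and comparing against the published manifest; it won’t help if the hosting site disappears entirely, but it does pin down exactly which instance of the dataset was analyzed.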

            • Anonymous says:

              Quote from above: “Given how easily it is for scientists to fool themselves and each other, I appreciate open data, code, and reproducible analyses a lot more than before”

              I think scientists might be fooling themselves and each other (and non-scientists) again with all this emphasis on open data, code, and reproducible analyses.

              I still don’t understand what their worth is WITHOUT a solid pre-registration that includes statements concerning the analyses, time stamps, etc. In my opinion and reasoning, a detailed pre-registration is much more valuable than open data, materials, and code.

              I have brought this up on this blog before, but to further illustrate my point, here is the data from the “False positive psychology” paper by Simmons, Nelson, and Simonsohn: https://openpsychologydata.metajnl.com/articles/10.5334/jopd.aa/

              Now, what if the authors of the “False positive psychology” paper had written their paper about one of their findings, that listening to The Beatles’ “When I’m 64” (or whatever they found) makes you younger? Without a detailed pre-registration they could simply have written about that finding as if they had predicted it from the start, and then posted their data and code. Should I then have to “trust” or “believe” that listening to the song makes me younger because they have open data and available code, and I have found their analyses to be reproducible? I would reason not!

              What I (still) don’t get is why I should care very much about open data, code, and reproducible analyses WITHOUT a detailed pre-registration. I may not be understanding things correctly, but without pre-registration I could still p-hack the sh@t out of my variables and other data. I could then write a nice paper saying I predicted it from the start, post the data, and post the analysis code, thereby making sure I am being “open” and that everyone can “verify” my analyses so they are “reproducible”.
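              This worry can be made concrete with a toy simulation (my own sketch, not from any of the papers under discussion): generate pure noise, correlate many candidate variables with an outcome, and keep only the strongest one. Sharing the data and code for that final, cherry-picked analysis would “reproduce” perfectly, yet the finding is still noise; only a pre-registration would reveal that many variables were tried.

```python
# Toy illustration of variable-fishing: with ~20 independent noise
# variables and n = 30, a correlation past the nominal p < .05 cutoff
# usually turns up by chance alone.
import random
import statistics


def correlation(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


def fish_for_significance(n=30, n_variables=20, threshold=0.361, seed=1):
    """Correlate n_variables pure-noise predictors with a noise outcome.

    threshold = 0.361 is roughly the two-tailed p < .05 critical |r|
    for n = 30. Returns the best |r| found and whether it crossed it.
    """
    rng = random.Random(seed)
    outcome = [rng.gauss(0, 1) for _ in range(n)]
    best = max(
        abs(correlation([rng.gauss(0, 1) for _ in range(n)], outcome))
        for _ in range(n_variables)
    )
    return best, best > threshold
```

              Nothing in the shared artifact betrays the fishing: the selection happened before the “reproducible” part began, which is exactly why a time-stamped pre-registration adds information that open data and code alone cannot.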

        • Martha (Smith) says:

          Jordan said, ” In college one of my college friends was shocked I enjoyed wrestling (WWE).”

          I recall being amazed that my father (who was an engineer and spent much of his free time building or repairing useful things) enjoyed watching wrestling on TV. Then my mother mentioned that he had done amateur wrestling himself in college. Years later, he mentioned that when he spent summers working on his uncle’s farm as a boy, one of his jobs was “ringing the pigs” — i.e., putting rings in the piglets’ noses so they wouldn’t “root”. So I conjecture that his experience wrestling pigs translated into being able to wrestle human beings, and into an interest in watching others do it. (So what’s your excuse? :~) )

    • Andrew says:

      Jordan:

      1. Yeah, this seems consistent with my impression of the Pizzagate lab as practicing an extreme division of labor, in which data collection, data storage, statistical analysis, writeups, and publicity were done by different people in only loose communication with each other. There were many examples where the numbers in the published articles could not possibly have come from real data, and other examples where the authors of the paper had no awareness of how the data were collected.

      2. I’ve seen this before: published papers that appear to describe data that were never collected. The example at the above link is extreme, but consider this example that came up on the blog earlier:

      – A paper called “The effects of prenatal testosterone on wages: Evidence from Russia” which . . . get this . . . had no measurements of testosterone! I don’t think this one was ever retracted.
