Best correction ever: “Unfortunately, the correct values are impossible to establish, since the raw data could not be retrieved.”

Commenter Erik Arnesen points to this:

Several errors and omissions occurred in the reporting of research and data in our paper: “How Descriptive Food Names Bias Sensory Perceptions in Restaurants,” Food Quality and Preference (2005) . . .

The dog ate my data. Damn gremlins. I hate when that happens.

As the saying goes, “Each year we publish 20+ new ideas in academic journals, and we appear in media around the world.” In all seriousness, the problem is not that they publish their ideas, the problem is that they are “changing or omitting data or results such that the research is not accurately represented in the research record.” And of course it’s not just a problem with Mr. Pizzagate or Mr. Gremlins or Mr. Evilicious or Mr. Politically Incorrect Sex Ratios: it’s all sorts of researchers who (a) don’t report what they actually did, and (b) refuse to reconsider their flimsy hypotheses in light of new theory or evidence.

44 thoughts on “Best correction ever: “Unfortunately, the correct values are impossible to establish, since the raw data could not be retrieved.””

  1. It really seems like a retraction would’ve been more appropriate in this case; without the original data it’s impossible for anyone (including the authors) to know which figures are correct and which are erroneous.

    • This was not a “correction”. It was an admission of incorrectness.
      So now there is a paper in Food Quality and Preference which warns the readers that “Here are some results of our research, and by the way they’re not correct or meaningful results. But we drew these conclusions anyway.”

  2. I had already seen this correction through email since I had contacted the journal about this paper.

    Most journals I’ve contacted haven’t told me if corrections will be issued yet, but I am privy to a couple, and, as with the pizza publications, a lot of discrepancies in sample sizes and methods are revealed.

    I’m probably not supposed to talk about specific corrections that haven’t been posted yet, but if you are impatient I guess I could email you.

  3. This is pretty shameful. What’s worse is that we continue to pressure scholars to deliver high volumes of original research. Sloppy stuff like this is the ultimate result.

  4. I’m really looking forward to the explanation for how two nearly (but not quite) identical tables of summary results were produced from ostensibly different data sets. That jumped out as the most shocking and damning finding in the Wansink Dossier.

    • Well, Wansink on his website claimed that wasn’t a big deal since a master’s thesis was followed up on with more participants, and adding a couple hundred participants just happened to give the exact same results as the original study.

      However, it is mathematically impossible for that to be the case, and indeed, I am privy to information that this is not the case, but I’m not aware of any impending corrections.
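
      To see why, here’s a toy simulation (made-up 1-9 ratings, nothing from the actual studies): even if the extra participants came from exactly the same population, the expanded sample would almost never reproduce the original mean and SD to the two decimals a table reports, let alone reproduce an entire table of such cells.

      ```python
      # Toy simulation (hypothetical numbers): start with n0 = 45 ratings on a 1-9 scale,
      # then "expand" the study with 150 more participants drawn from the same population
      # and count how often the combined sample reproduces the original mean and SD
      # to the two decimals a summary table would report.
      import numpy as np

      rng = np.random.default_rng(0)
      n0, n_extra, n_sims = 45, 150, 10_000

      original = rng.integers(1, 10, size=n0)               # hypothetical original ratings
      target = (round(original.mean(), 2), round(original.std(ddof=1), 2))

      matches = 0
      for _ in range(n_sims):
          extra = rng.integers(1, 10, size=n_extra)         # new participants, same population
          combined = np.concatenate([original, extra])
          stats = (round(combined.mean(), 2), round(combined.std(ddof=1), 2))
          matches += stats == target

      print("original (mean, SD):", target)
      print(f"exact matches after adding {n_extra} participants: {matches} / {n_sims}")
      ```

      And that’s just one cell; the chance of matching every mean and SD in a table this way is essentially zero.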

        • It’s not available on their site anymore, but I did save a PDF of the website while it was there, not sure I can post the PDF publicly without violating Cornell’s copyright or something, but here’s what the text I’m referring to said:

          “Since this review work began in mid-February, a few new claims about my research have been made that are not quite as substantial, and can be responded to much more quickly. Distilled, I have been accused of reusing portions of my own work in later papers on the same or related topics. The observation is, of course, true; in one instance key paragraphs from a journal article were intentionally re-emphasized in related works five times over 25 years; in another a review article that intentionally summarizes a body of my work includes pieces of earlier articles; in yet another a master’s thesis was intentionally expanded upon through a second study which offered more data that affirmed its findings with the same language, more participants and the same results.”

        • Yikes! I guess if you know as little about statistics as Wansink, you might not think this was absurd.

          I bring this one up because it seems like the least defensible of all the things your team caught. It seems clear that the 2003 table is the 2001 table with the multiplication errors corrected. So it wasn’t just a “whoops, pasted in the wrong table!” issue – someone went in and double-checked everything and made a few adjustments. I’m having a tough time brainstorming up an explanation that doesn’t suggest intentional fraud. Maybe one person thought the 2003 paper was based on the 2001 paper’s data set and did a little cleaning up, while someone else who knew the results were supposed to reflect new data didn’t notice the other person was still using the old numbers? That’s the best I can come up with.

        • It definitely seems like papers in this lab are written as if they are on an assembly line.

          One person is in charge of collecting the data (and doesn’t get acknowledged), someone else is in charge of writing (or in many cases, copying and pasting) the background and discussion. Someone else is probably in charge of putting in the tables and statistics. And someone else submits the paper to the journal.

          Evidence for this assembly line approach can be found in the PubPeer comments where Wansink mentioned the person submitting the paper accidentally wrote the wrong author contributions for one of the pizza papers.

          So as you said, I think it’s entirely possible for the wrong statistics to get inserted into the wrong paper, as apparently happened in this case (or neither study exists and the tables are just made up).

          Further evidence can be found in the citation practices. For this pizza paper: “Peak-end pizza: prices delay evaluations of quality” the reference list includes multiple self-citations such as these:

          Just, David R. and Wansink, Brian (2011), “The Flat-rate Pricing Paradox: Conflicting Effects of ‘All-You-Can-Eat’ Buffet Pricing”

          Just, David R., Sigirci, Ozge and Wansink, Brian (2014), “Lower Buffet Prices Lead to Less Taste Satisfaction”

          However, looking through the paper I can’t find where any of these self-references are cited. It’s almost as if they wrote the reference list before writing the paper, and just assumed they were going to cite themselves. To include a citation in a reference list you are actually supposed to cite it in the text, right?

          If you want to read about further citation nightmares you can check out one of my posts:
          https://medium.com/@OmnesRes/rickrolls-and-russian-dolls-the-world-of-mindless-citing-fda5d0b0a51e

          Thanks, I read (and enjoyed) your post on citations the other day. It reminds me of the many times I’ve attempted to find the original paper that some mainstream news article is purportedly based on, only to discover that there are about 5 different news articles that all just cite each other while referring to a paper that obviously none of the reporters have read. This is unfortunate but not surprising behavior for mainstream journalists; I would hope (naively?) for better from Ivy League tenured professors.

          If your assembly line hypothesis is correct, then the most damning thing about the copy / pasted tables is Wansink’s explanation for them. Just saying “Whoops!” or “I don’t know how that happened but I’m looking into it” would have at least been honest. Claiming to have quadrupled the sample size and gotten identical results is pathetic.

  5. I get most of the attacks on Wansink. But I don’t like this one.

    I don’t have access to any of the data I analyzed 12 years ago. Heck, I don’t have access to all of the data I analyzed 3 years ago (job change)!

    If we want researchers to be open to criticism and honest about how they address said criticism, we can’t jump down their throats when we get an honest attempt with an honest answer. If he doesn’t have the data from 12 years ago, what is he supposed to do? Make up new values?

    Of course, I realize this is scooping out water with a thimble from a sinking ship in regards to Wansink’s reputation. But I don’t like this particular attack paired with all the “well, researchers need to be open to criticism” stuff. Attacks like these are why people don’t like to be open to criticism.

    If you think this is evidence that the results were completely fabricated (could be, but I don’t see this piece as strong evidence), that’s a different story.

    • Cliff:

      I have a couple problems with Wansink’s statement. First is that he got tons of things wrong in this and many of his other articles, and he’s only now issuing corrections under duress. If you’re publishing dozens of papers with errors, it’s time to slow down, not to proudly declare, “Each year we publish 20+ new ideas in academic journals, and we appear in media around the world.” Second, there’s the big, big problem that so many of the numbers in his papers are wrong, yet Wansink appears confident that none of his substantive conclusions should change. Third, some of his published results appear not to be consistent with any data, which suggests that some of these numbers could be manipulated or simply made up. I have no idea.

      To sum up, as I said in my post, there’s a problem with researchers who (a) don’t report what they actually did, and (b) refuse to reconsider their flimsy hypotheses in light of new theory or evidence. I do not see Wansink as being particularly open to criticism or honest about how he addresses said criticism.
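
      To be concrete about “not consistent with any data”: when responses are whole numbers (counts, yes/no answers, Likert items), a reported mean with a given sample size can only take certain values, so some published cells can be ruled out by arithmetic alone. Here’s a toy version of that sort of granularity check, in the spirit of Brown and Heathers’s GRIM test; the numbers below are made up, not taken from any paper.

      ```python
      # Toy granularity check (made-up numbers): with n integer-valued responses, the
      # sample mean must equal (some integer total) / n, so many two-decimal means
      # are simply impossible for a given n.
      import math

      def mean_is_possible(reported_mean, n, decimals=2):
          """True if some integer total over n integer responses rounds to reported_mean."""
          target = round(reported_mean, decimals)
          center = reported_mean * n
          half = n * 0.5 * 10 ** (-decimals)   # how far the true total could sit from center
          for total in range(math.floor(center - half) - 1, math.ceil(center + half) + 2):
              if round(total / n, decimals) == target:
                  return True
          return False

      # hypothetical table cells: (reported mean, sample size)
      for mean, n in [(3.52, 25), (3.51, 25), (7.37, 46)]:
          verdict = "possible" if mean_is_possible(mean, n) else "inconsistent with any integer data"
          print(mean, n, verdict)
      ```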

    • The data used in a publication ought to be archived as part of the publication process. Even better would be that it be made available publicly, as a supplement to the publication.

      It ought to be impossible to publish an analysis of data without publishing the data as well.

      Take my word for it, I know what I’m talking about, and I work at the NUMBER ONE RANKED university in FANTASYLAND.

  6. I think some of us need to ask ourselves at what point will we be satisfied that Dr. Wansink has been sufficiently humiliated, that our pound of flesh is large enough, and we can focus on tearing down someone else next? In all seriousness, it really seems like there’s quite the pitchfork mob here. Dr. Wansink’s reputation has been sufficiently sullied; the science vigilante crew has ensured this will be hounding him for the entire remainder of his professional career. ‘grats. Now it just feels like a pile-on. What are you all getting out of this?

    • Joe:

      Wansink took a few million dollars of public funds to produce a bunch of meaningless publications. He also had a government job at one point. His work has received tons of uncritical media exposure. He’s now a tenured professor at one of the country’s leading universities. And he continues to make exaggerated claims about his research. I think that’s a problem. I don’t give a damn about humiliation or sullying or pounds of flesh or whatever, and I don’t see any pitchforks nearby. I think it’s a scandal that our government and Cornell University have been supporting this work. I’m not “getting anything out of this,” any more than you’re getting anything out of posting a comment here. I’m bothered by a consistent pattern of changing or omitting data or results such that the research is not accurately represented in the research record. Scientific research is my business and I don’t like to see it abused.

      That said, I can certainly understand if this story doesn’t interest you very much. I’m not interested in ice hockey, so when the sports pages are full of hockey news, I just ignore it. Similarly, if you’re tired of hearing about research misconduct, you could skip over the blog posts on that topic. It makes sense to me that these stories might seem kinda pointless to people on the outside.

  7. I noticed this correction got covered by Retraction Watch, and it motivated me to cover a recent pizza paper correction: https://medium.com/@OmnesRes/worst-correction-ever-70c5e126d688

    As humorous as this whole thing has been, it does get really depressing at times. How incompetent and how deceitful do you have to be to get fired in academia? What other fields would tolerate this?

    No really, I was thinking about this and a few come to mind. There’s the Wall Street people who are too big to fail, and perhaps that’s the best analogy here.

    It is fun to try and think of others. James Dolan uses the president of operations to basically distract people from his incompetence, and as a result it should be impossible for someone to screw up so badly that they would get fired. But Phil Jackson managed to do it, though perhaps purposely, since he’s going to be picking up a free 24 million dollar check.

    It is entertaining to think that Wansink and other zombies like Bem are purposely performing research as badly as possible to show how ridiculous the field is, and we’re all just going to get Punk’D one day and the Sokal hoax will be put to shame.

    • Jordan:

      I think it’s really really hard to fire a tenured professor. Sometimes you do hear about people leaving academia, for example Marc Hauser, Sudhir Venkatesh, Seth Roberts . . . all these people were controversial and made lots of enemies; two of them I knew personally. The usual pattern seems to be that the prof who’s in trouble negotiates some deal to leave, so then both sides are happier. My guess is that Cornell would love it if Wansink would leave, but this gives Wansink some bargaining power! If he really does end up leaving his job, I assume he’d be able to extract a pretty big parachute from the university. More likely, I suspect he’ll stay on, with fewer government grants and fewer TV appearances and a devout wish by the Cornell administration that people stop staring at all those papers that Wansink wrote where the damn numbers don’t add up.

      I followed your link, with all the exhausting detail about how the numbers didn’t add up, and it reminded me of that paper we discussed a few years ago that claimed that women were three times more likely to wear red or pink during a certain time of the month (see here for the story).

      One of the frustrating things about this episode is that the paper in question described days 6-14 of the cycle as the days of peak fertility—but the standard advice is that days 10-17 are peak fertility. So they weren’t even measuring what they were saying they were measuring! And, when I pointed this out, and linked to references to this day 10-17 thing, they just brushed me aside; they didn’t seem to care that they’d messed up on their key predictor variable. Instead, they were latching on to their general theory (which of course can be instantly adapted to apply to pre-peak fertility) and to the validation from their statistically significant p-value.

      The pizzagate papers remind me of this, in that Wansink et al. don’t seem concerned. You’d think they’d be bothered that they weren’t measuring what they thought they were measuring. But, no, they’re trying to minimize the consequences, even before fully digesting the problems. (Sorry for that food metaphor; these are hard to avoid!)

      I’ve remarked on this sort of thing before. When people point out an error in my work, I’m like, Shit, what happened?? Gotta track this down. But then we see these people who learn about massive errors in their measurements, and they don’t bat an eye.

      Don’t get me wrong—I’m not saying the ovulation-and-clothing researchers are the same as Wansink. They seem to have much more control over their data. But there is this common feature that a research claim comes in some way from data, but then when the data are revealed to have problems, the claim just kinda floats off on its own, unmoored to its original evidence.

      • I thought about this a bit, and I think part of the problem is the obsession scientists have for their own ideas.

        On the one hand, coming up with cool new ideas is what drives most work. What else will motivate people to go out and collect data? But when the data don’t support the idea, things start to go south.

        In grad school I was exposed to PIs who were in love with their own ideas (or ideas they somehow convinced themselves were their own), and put as many postdocs or grad students as necessary on the project to prove the hypothesis. And when data came back negative it wasn’t because the idea was wrong, it was because the grad students were incompetent.

        I’m also guilty of getting excited by ideas or projects, but even once you get data that supports your hypothesis you need to be thinking about how you can “pressure-test” the data in the words of Wansink.

        I’m super curious what this pressure-testing was that Wansink spoke of for the pizza papers. Two-tailed versus one-tailed tests? If I got data that directly contradicted one of my older publications, there’s no way I would publish it as if it completely fit in with all my ideas. If I were convinced it was more trustworthy than my older publications, then I would say my older papers were wrong and should no longer be trusted, but I guess that wouldn’t help my H-index, and I guess that’s why I’m not an Ivy League professor.

        • Jordan:

          Yeah, this has really been bugging me recently. A scientist who doesn’t even care about the data he or she gathered—this seems like a baseball player who doesn’t want to improve his swing. The quality of data is so central to empirical science: by not reacting to problems you’re shutting yourself off from improvements. For example, if you want to understand eating behavior, you’d want to accurately measure what people eat, right? Or if you want to understand behavior during days of peak fertility, you’ll want to know when these days are, no? To return to the sports analogy: sure, lots of pro athletes cheat with drugs—but they’re cheating in order to improve their quality of play. They still want to do their best, right?

          So what’s going on here in science? I think it’s worth pulling on this thread here, as it seems to me to be a big deal, in that lots of scientists seem to react so negatively to criticism, not just speculative criticism about their ideas, and not just technical criticism about statistics, but very direct criticism about what they know best, their own data.

          To put it another way, at some level I can understand researchers not wanting to hear questions about their open-ended storytelling—they always seem to act as if their latest data-informed story represents the hypothesis that most logically follows from their theory—and I can understand them not understanding criticisms of their p-values, as they’ve been trained to think of statistics as a set of tools, not as methods for understanding. After all, I can use a hammer without knowing any of the principles of metallurgy that allowed the tool to be built. But I’m still struggling to understand the way in which researchers are unfazed by problems with their own data.

          Wansink I can kind of understand, in that he seems to have something like a pyramid scheme going on, with tons of funding and publicity all based on previous publicity, which itself is based on . . . it seems like not very much at all. So maybe he’s gotta hold fast to everything, in a desperate attempt to hold it all together.

          But what about the others? What about Wansink’s collaborators, for example? Why don’t they speak up? What does it mean to care about eating behavior, when you don’t even know what you’re measuring? Why was Dana Carney’s retraction of that power pose paper such an unusual event, in a field where there have been so many high-profile papers that have not held up to theoretical and empirical criticism?

          One big difference, I guess, is that in sports there’s an empirical measure of success. If you’re out there, day in and day out, taking your swings and you’re batting .150, then all the bullshitting in the world won’t keep you on the roster. In science, though, you can do a Wile E. Coyote and stay suspended in midair for a really long time.

          This “Wile E. Coyote” effect could have two implications.

          First, most obviously, there’s less of an incentive to get things right. If what you’re after are the trappings of success in sports—the cheering, the fame, the big contract—you better work on the quality of your play, as this is a minimum requirement for all that follows. In science, though, there are other models of success: you can try to follow the path of various bigshots who achieved tenure and TED exposure through some combination of sleight of hand and honest mistakes that were just too good to correct. To put it another way: in sports, even if you don’t care about the quality of your play, it cares about you. In empirical science, you can try to directly influence the publication and reception of your work, bypassing the step of ensuring data quality.

          The second implication of the Coyote effect is that, even if you legitimately care about your object of study, you can get fooled by the incentives and get off track. For example, in that post of mine from a couple of years ago, I’d lamented that the ovulation-and-clothing researchers just didn’t care about getting the dates of peak fertility right. They responded with annoyance that they did care. And maybe they did. They’re just working within a world in which scientific advances are seen to occur from p-values in a way that becomes unmoored from the actual data used to make these claims.

          In short: Wansink and his collaborators (and, before them, Richard Tol) represent an extreme case of data being not what they seem, pretty much the most you can do without flat-out fabrication. But I think there’s something much more general here. Scientists not seeming to care when their data are not what they thought: This is happening a lot, and it’s really really weird. It’s so commonplace that people don’t seem to realize how wrong it really is.

        • You previously wrote about how “Honesty and transparency are not enough”.

          With regards to the Coyote effect, open data will make it more difficult to bat .150 and act like you are Barry Bonds, although I guess you could still be in a Goldilocks position where no one cares enough to look at the data or code you are making publicly available.

          It’s actually a little concerning how common the Goldilocks position is; we are simply too trusting in science. For example, I routinely get contacted by journalists about preprint statistics. Why do people trust that my preprint stats are correct? I don’t know, I guess it’s because I say they are and no one has said they aren’t.

          But then again, how would someone check that my preprint indexing is correct? This gets to the “complexity” of current research. Even with open data and code, the expertise and time required to check results is prohibitive.

          For example, look at Wansink’s released pizza data set. When it got released I saw numerous comments from people asking if anyone had performed an analysis of it. Do you know who reanalyzed it? No one except me, and there are hundreds, perhaps thousands of people with an interest in this story.

          Given my knowledge of the pizza papers and my coding experience it was easy, but if someone wasn’t already deeply familiar with the pizza papers and didn’t know how to code, how would they have checked the data?

          I really don’t know what the solution is. Even with open data it is still really easy to be Wile E. Coyote since not many people have the time, expertise, or motivation to look over other people’s work. The only thing I can see helping is establishing task forces that randomly audit researchers who have a certain level of fame/funding. This way Coyotes will know there’s at least some possibility an independent organization could come in and expose their scam.
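
          For what it’s worth, the check itself isn’t exotic. Here’s a minimal sketch of what reanalyzing a released data set amounts to in practice; the column names and “published” values are placeholders, not the actual variables in the released spreadsheet.

          ```python
          # Sketch only: the data frame below stands in for the released spreadsheet
          # (in practice, df = pd.read_csv("path/to/released_data.csv") with the real
          # column names). The point is how little code a basic consistency check takes.
          import pandas as pd

          df = pd.DataFrame({
              "price_condition": ["half", "half", "full", "full", "full"],
              "taste_rating":    [7, 8, 6, 7, 9],
          })

          # Recompute per-condition cell sizes and means, then compare to the published table.
          recomputed = (
              df.groupby("price_condition")["taste_rating"]
                .agg(n="count", mean="mean")
                .round(2)
          )
          print(recomputed)

          published = {"half": (2, 7.50), "full": (4, 7.33)}   # made-up "published" values
          for condition, (n_pub, mean_pub) in published.items():
              row = recomputed.loc[condition]
              if int(row["n"]) != n_pub or abs(row["mean"] - mean_pub) > 0.005:
                  print(f"{condition}: published n={n_pub}, mean={mean_pub} "
                        f"vs recomputed n={int(row['n'])}, mean={row['mean']}")
          ```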

        • Jordan:

          Part of this is the high variation and small N problem in the human sciences. With hundreds of plate appearances per year, Mario Mendoza couldn’t pretend to be Barry Bonds. An intermediate case might be Tim Tebow, who kept playing in the NFL despite lots of criticism, in part because, we were told, “he won football games.” With only 16 or so games in the season, you can look like a winner in part by luck.

          When it comes to Wansink or those ovulation researchers or whatever: suppose they’d claimed something like cold fusion which is supposed to work every time. Then they get their results out there, people fail to replicate, game over. Instead, though, we have theories that aren’t really theories, failed replications that are labeled as victories (search this blog for John Bargh), and a lot of huffy posturing along the lines of, Hey, are you accusing me of cheating??

          But there’s no “batting average” to compare people to. Barry Bonds cheated in a way to make him a more effective baseball player. If his drugs hadn’t seemed to work well for that purpose, we can only assume he’d’ve changed drugs. In contrast, when Wansink or whoever fumble their data, it doesn’t really matter to them because their claims are ultimately not so data-based. Weird.

          What really puzzles me is laboratory biology. There I’d think it would be easy to replicate studies, try things on lots of cultures or whatever, and get unambiguous results. But, from what I’ve read, all these replication problems I’ve heard about in psychology occur in the bio lab as well.

        • I think the problem in biology is overblown and that the COS isn’t helping by blowing smoke on the wrong aspect of the problem. I think biologists need to start asking whether it is worth reporting results that only show up under specific conditions, results they were searching for. And if they do report such results, they should be more honest about how they were found.

          For example, look at STAP cells. When researchers couldn’t reproduce the technique, the supervisor for the project killed himself.

          In contrast, in psychology when your important results don’t replicate you talk about how you are interested in better understanding the limitations of the phenomenon and continue to write best selling books and continue to give talks at conferences. It’s really quite amazing.

        • It happens in the bio lab because there are a million fiddly things that can go wrong.

          1) This gives an easy excuse to drop any results you don’t like, so it is normal practice to cherry pick your results.

          2) The actual methods are extremely complex and pretty much never explained very well (since no one has been bothering to replicate stuff for decades).

          Regarding the STAP cells, the original issue was actually incorrect charts/figures. After that came the failed replications: http://www.nature.com/news/acid-bath-stem-cell-study-under-investigation-1.14738

          Also you can see the “failed replications” were not direct replications since they changed cell types, etc. So, if it wasn’t for the claims of fraud, the original work could have remained safe indefinitely (they even say so in that Nature article).

        • Andrew said: “What really puzzles me is laboratory biology. There I’d think it would be easy to replicate studies, try things on lots of cultures or whatever, and get unambiguous results.”

          My impression of lab biology is that “I’d think it would be easy to replicate studies, try things on lots of cultures or whatever, and get unambiguous results” is naive — that there is lots that can go wrong — e.g., stuff in the air might contaminate cultures; bacteria can mutate so it’s not what you think it is; minor temperature variations can make a big difference, etc. And, of course, there’s the custom of small samples, and a way of thinking that has a hard time really accepting that things aren’t deterministic.

        • Martha, I agree with you. Biology is inherently variable and lots of things can “go wrong” so that you’re not even studying the thing you think you’re studying, for example, someone sends you a cell line, but the line is exposed to some mutagen (say X-ray examination of the package at the shipping company, or the difference between ground shipment and air carrier where radiation is higher on the airplane) and you spend months studying these strange cells that don’t do what the original lab claimed they did… Later they re-send you the line and the package isn’t x-rayed and in fact you finally get the cells to behave in the way you were told they would. Total cost: $80,000 worth of postdoc salaries, cell culture reagents, interrupted weekends spent feeding your cells (you can’t just walk away from cell culture for a weekend, gotta come in and change the media daily).

          This kind of thing could be avoided by say sending three different batches of cells via three different carriers… but biologists don’t have risk management built into their everyday thinking, and transportation issues are just *one* of many many ways in which things can go wrong. Furthermore there’s a lot of red tape associated with much of this stuff (gotta sign material transfer agreements, get the lawyers at the university to look over it all, pay extra to get the stuff shipped on Dry Ice… whatever).

          My wife recently had to ship a package; she took it with her on the way home and dropped it off at FedEx. It was on dry ice, and she wanted it to get there early the next morning. So… of course by the next morning it wasn’t in the tracking system. What the heck? She calls the FedEx place and they say “oh, we don’t accept dry ice shipments at this location so it’s just sitting here on the desk”. So she goes first thing, picks up the package, takes it to her lab, re-packs it in dry ice, and sends it out via the FedEx pickup at her lab… It gets there later that day (it’s just maybe 20 miles to the facility she’s sending to). But someone at the facility just lets it sit under their desk until the end of the day, and when they open it the dry ice is all gone and the package is at room temp. Should she trust the data from this package? I have no idea. Maybe it’s fine, maybe it’s completely ruined.

          This kind of stuff happens *all day long every day* in biology. And, many times you might not even know it happened. The shipping clerk is hardly going to go out of their way to let you know they botched their job.

        • Andrew:
          You wrote: “But I think there’s something much more general here. Scientists not seeming to care when their data are not what they thought: This is happening a lot, and it’s really really weird. It’s so commonplace that people don’t seem to realize how wrong it really is.”

          I have a rant that is somewhat related to this.

          People talk about methodological terrorists as if we are picking on researchers because of fame, jealousy, etc. (see Susan Fiske’s talk).

          Do you know who gets asked questions about their research? Everyone! (assuming the research is popular).

          For example, I constantly get emails about the data in OncoLnc. When this happens I drop everything I’m doing, look into their question, and my response usually involves showing them the data I’m using: https://github.com/OmnesRes/onco_lnc/tree/master/tcga_data

          Do you know what I don’t do? I don’t ignore emails, I don’t tell them to go away.

          If I did then the person should go write a blog post about how they think there is a problem with OncoLnc. I would have deserved it.

          I bring this up because I can’t seem to stop “bullying” Wansink; see the comment I just posted at the bottom of this thread. I recognize it’s odd to continue to point out the problems with a researcher’s work, but then again it’s also odd for a researcher to have this many problems (or at least I assume, more work needs to be done in this area).

          When is enough enough? Presumably at some point I’ll be able to find numbers that don’t add up and not feel the desire to blog about them. I just don’t feel any of the responses to our criticisms have been adequate, not Cornell’s response, not the journals’ responses, and definitely not Wansink’s response. So what other option do I have?

          If they would make an honest effort to fix the problems then it would be easier to look the other way when I notice problems.

          P.S. An adequate response to any of our criticisms would involve at a minimum immediate release of the data set (if it exists, which in multiple cases likely doesn’t). It took them months to release the pizza data set, and we are in the process of trying to get another data set, and it is also taking months.

        • Jordan:

          Here’s another way to put it. In a narrow sense, it’s rational for Wansink to stonewall, mislead, etc., because at least in the short term this might stave off a complete demise of his reputation as a researcher. But it’s not so rational for people like you, me, Nick Brown, etc., to look at Wansink’s papers carefully: first, it’s a waste of our time that could be better spent doing real science; second, we risk making enemies out of powerful people such as Fiske and we also risk engaging the antagonism of middle-of-the-road tone-police types.

          The trouble is when you apply game theory and take things to the next level. The Wansinks of the world know that it’s essentially irrational of people to waste their time and risk making enemies opposing them, hence Wansink can try the strategy of surviving and even thriving by never admitting anything. This is assuming his goal is fame and fortune, not research discovery—but that’s not really a problem if long ago he’s convinced himself that whatever he writes about eating behavior etc. is correct because he’s an expert.

          I had this problem many years ago when dealing with a plagiarist. When I confronted him I assumed he’d just say he was sorry. I mean, the evidence was clear: he wanted to publish, under his own name, results that someone else had told him. This other person had literally dictated formulas over the phone to this guy. I was simply stunned when the plagiarist refused to give appropriate credit, but then I figured he’d made the calculation that I would make the calculation that making a fuss about this would be too damaging to my career and so instead I’d do the rational thing and just let it go. And he was right.

  8. Not to quibble but pre-PED Barry Bonds was likely the greatest all-around baseball player by some margin. Doped Barry Bonds was a video game character!

    • Jordan:

      I followed the link, and all I can say is, Wow. Just wow. I suppose Wansink’s defense would be that he was joking during the talk, or joking in the book, or joking in the research article, or, ummmm, I dunno. Maybe he’d say that he can’t remember exactly what happened, the study happened so long ago? The guy’s a one-man Rashomon.

      Actually, yes, I guess it doesn’t matter what Wansink’s defense would be. I’d guess he’ll never need to offer a defense of these wildly contradictory claims.

      But, yeah, at some point I’d think the administration of Cornell University would start to get bothered by all this. Wansink’s been playing them, converting Cornell’s reputation as a quality university into cold hard cash for himself and his lab.

      By the way, regarding the 70 variables, or 103 variables, or whatever: Wansink p-hacked, there’s no question about that, and his analyses are as worthless as his data. But, speaking generally and in the context of good data and serious research, it could be fine to gather all those variables to see what correlates with different behaviors. The way I’d do it is using a hierarchical model.
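
      To give a rough sense of what I mean, here’s a toy sketch with simulated data (not an analysis of any real study): instead of screening 103 variables one at a time and keeping whatever crosses p < .05, you model the 103 effects as coming from a common distribution and partially pool them.

      ```python
      # Toy sketch with simulated data: empirical-Bayes shrinkage as a stand-in for a
      # full hierarchical model of many candidate effects.
      import numpy as np

      rng = np.random.default_rng(1)
      J = 103                                     # number of candidate variables
      true_effects = rng.normal(0.0, 0.10, J)     # mostly-small true effects
      se = np.full(J, 0.15)                       # standard error of each raw estimate
      raw = true_effects + rng.normal(0.0, se)    # what 103 separate little analyses would report

      # Normal hierarchical model, effect_j ~ N(mu, tau^2), fit crudely by method of moments
      mu = raw.mean()
      tau2 = max(raw.var(ddof=1) - np.mean(se**2), 0.0)
      shrinkage = tau2 / (tau2 + se**2)
      pooled = mu + shrinkage * (raw - mu)

      def rmse(est):
          return float(np.sqrt(np.mean((est - true_effects) ** 2)))

      print(f"RMSE, raw one-at-a-time estimates: {rmse(raw):.3f}")
      print(f"RMSE, partially pooled estimates:  {rmse(pooled):.3f}")
      print(f"raw estimates with |z| > 2: {(np.abs(raw / se) > 2).sum()} of {J}")
      ```

      In a real analysis I’d fit this as a full Bayesian multilevel regression so the amount of pooling is estimated along with everything else, but even this crude version makes the point: you report all 103 shrunken estimates rather than cherry-picking the handful that happen to clear a threshold.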

        • Just to clarify: the hierarchical model, if done well, can be as “confirmatory” as any other analysis. But, yeah, regarding Wansink, I agree 100% that if you’re gonna look at 103 variables you should list ’em all. That’s what appendixes are for.

          This reminds me of another challenge when criticizing a study: when they don’t give you all the information, it’s hard to criticize because it’s a moving target. For example, suppose a paper reports a correlation between variables X and Y, and you question it, saying, “Who knows how many other questions were asked in that survey,” then the researchers can reply, all insulted, that actually their survey had only those 2 questions and how could you be so rude as to suggest otherwise. Or, if you question it the other way, by asking why they went to the trouble to do a survey and then only asked 2 questions, they can reply, all insulted, that of course this was part of a longer survey.

          The issue here is that, as a critic, you have to be super-careful to avoid making mistakes, but because you don’t have all the information available, you have to do detective work and make very cagey and awkward statements, such as “Either they asked lots of questions and reported only two, which reeks of forking paths, or they asked only two questions, which seems odd given all the many possible explanations for their data.”

          People like Wansink or Tol, who write papers where the correctness of any data point can be in question, are particular beneficiaries of this Schrodinger effect, as their work never quite stands still enough to be pinned down with a specific criticism.
