The view that the scientific process is “red tape,” just a bunch of hoops you need to jump through so you can move on with your life

Summary

A while ago I hypothesized that many researchers “think they already know the truth, and they think of discussions of evidence, data quality, statistics, etc., as a sort of ‘red tape’ or distraction from the larger issues.” But now I’m thinking that it’s not just statistics but really the entire scientific process that they view as an annoyance.

I think that a lot of social scientists don’t really care about the process of science, that their attitude is something like 50% careerism and 50% that they feel they already know the answers about primatology or criminology or whatever, and they view activities such as faking your data as just the sort of annoying things you need to do to get papers published.

To put it another way: I suspect that these people see the entire scientific process—not just statistical analysis but also surveys, experiments, the construction of coherent theories, pretty much the whole megillah—as a bunch of red tape, just a bunch of hoops they need to jump through so they can move on with their day and get their 8 hours of sleep every night.

Or they might enjoy some activities of the scientific life, such as designing and carrying out experiments, writing papers, going on TV, etc., but they don’t get the connection between the fun parts and the hard work and inevitable setbacks.

To these people, faking your data is not like robbing a bank. It’s like driving 5 miles an hour over the speed limit, or like not filling out some silly form. Of course they don’t think people should be “punished” for faking data. Also, they think that correcting or retracting an article or having your public errors pointed out to you is a form of punishment.

Background

We talked a while ago about the scandal in criminology of a set of research published based on misrepresented, falsified, or nonexistent data.

Justin Pickett tells the full story here. His article is at Econ Journal Watch (“Scholarly Comments on Academic Economics”), which I don’t quite understand, given that these papers are in sociology and criminology journals, not economics journals. Maybe in a parallel universe there’s another journal, Crim Journal Watch (“Scholarly Comments on Academic Criminology”), which publishes papers about scandals in economics? But I guess you have to publish things where you can. It’s good for Econ Journal Watch that they published this. From Pickett’s article:

Dr. Eric Stewart and his coauthors have retracted five articles from three journals, Social Problems, Criminology, and Law & Society Review. . . . The retraction notices are uninformative, stating only that the authors uncovered an unacceptable number of errors in each article. Misinformation about the event abounds. Some of the authors have continued to insist in print that the retracted findings are correct. . . . The findings suggest that the five articles were likely fraudulent, several coauthors acted with negligence bordering on complicity after learning about the data irregularities, and the editors violated the ethical standards advanced by the Committee on Publication Ethics . . .

Beyond our interest in the sordid details here, our horror at what Pickett had to go through in pursuing this case (he was a coauthor of one of the articles in question, and once he started asking questions they refused to share the data with him), and our concern that understanding and policymaking in the important field of criminology is being driven by fake data and fake analysis, there is the question of the general relevance to social science of these extreme cases of scientific misconduct and institutional bad behavior.

There are many levels of bad behavior in science. Without naming names, I’ll list some different things we’ve seen, in roughly declining order of badness of the behavior:

– Flat out fabrication, people writing about surveys that never existed, reporting data that were never collected, etc.

– Attacking or otherwise trying to silence critics.

– Hiding or destroying data so claims can never be checked.

– Reporting or summarizing data inaccurately and misleadingly. (This includes, for example, not mentioning key decisions in data exclusion, coding, and analysis.)

– Dodging criticism, not responding to criticism, changing the subject, etc.

– Inaccurately and misleadingly reporting implications of data, for example by having the title or abstract make claims about something not addressed in the data, or conversely by not mentioning key conditions or limitations of the conclusions.

– Misrepresenting or omitting relevant literature.

– Poor or irrelevant statistical analysis, for example making inappropriate claims from noisy data.

– Weak statistical analysis, not noticing or reporting notable patterns in the data.

These things can be done by accident or on purpose; for the goal of understanding the research, it doesn’t really matter which. Recall Clarke’s Law: any sufficiently crappy research is indistinguishable from fraud. If researchers persist in poor scientific practices, it’s a problem.

Anyway, back to our question above: How should we think of these pathological examples of bad behavior such as described in Pickett’s article? Is focusing on these cases a mistake, somewhat as if we tried to understand the U.S. criminal justice system by looking at how they handled Jeffrey Dahmer, O. J. Simpson, and Ted Bundy?

I’m not sure. One thing I will say, though, is that many of the problems described by Pickett occur, in less extreme form, all the time. I’ve seen these problems over and over.

Let’s start with the leadoff to Pickett’s article:

• The retraction notices are vague, providing little information about what went wrong.

• The authors have continued to promote their retracted findings in print, insisting that “the main substantive results are correct” . . .

• Other articles by the authors have some of the same irregularities . . . but thus far only one of these has been corrected and none have been retracted.

Yes, we’ve seen all these things:

• Retractions are vague, as if the whole thing is some horrible embarrassment rather than the normal practice of open science.

• Refusal to admit specific errors, followed by a refusal to admit the errors have any effect on conclusions. As I wrote the other day, these people always make a big deal about their data, science science data data experiment experiment statistical significance bla bla—until the moment the data are revealed to be wrong, and then it turns out the data didn’t matter. Next time don’t bother with the experiment and the data and the p-values—just go straight to the conclusions. Cut out the damn middleman.

• Offenders are repeat offenders. We’re familiar with that pattern.

And then, other things mentioned in Pickett’s article:

• The attitude that, once something’s published in a peer-reviewed journal, it’s assumed to be true unless conclusively demonstrated otherwise.

• Scholars putting personal loyalty above the search for truth.

• Academic bureaucrats and game-players taking a technical dispute (What were the data? Did the data exist? How did those numbers get there?) and turning it into something personal.

• Attacks and name-calling against dissenters.

• Repeated attempts to avoid scientific concerns (for example, impossible numbers in a published article) by retreating into procedure.

– Incompetence that is almost charming (for example, not-so-random terminal digits in tabulated numbers; see the sketch below).
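
This last one is worth spelling out, because the check is so simple: genuinely measured quantities tend to have roughly uniform terminal digits, so a strongly patterned distribution of last digits is a red flag for fabrication. Here is a minimal sketch in Python, using made-up numbers rather than the data from the articles in question:

    # Chi-square test of uniformity on the terminal digits of reported numbers.
    # Hypothetical example values; not data from any of the articles discussed here.
    from collections import Counter
    from scipy.stats import chisquare

    def terminal_digit_test(values, decimals=2):
        """Test whether the last reported digits look uniform, as real measurements usually do."""
        digits = [abs(int(round(v * 10**decimals))) % 10 for v in values]
        counts = Counter(digits)
        observed = [counts.get(d, 0) for d in range(10)]
        return chisquare(observed)  # with no expected frequencies given, uniformity is the null

    # A fabricator who favors 5 as a last digit:
    fake = [12.35, 8.75, 22.15, 9.55, 14.95, 18.35, 7.75, 11.55, 16.15, 13.95]
    print(terminal_digit_test(fake))  # tiny p-value: the last digits are suspiciously patterned

A real forensic analysis needs many more numbers than this toy example, but the idea is the same: fabricated tables often fail even this crude check.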

You might say that none of this matters because this research was never being used for policy. But maybe it was! Remember that Brian Wansink received millions of dollars in government and corporate funding and was appointed to a government post. And influential authors and legal scholars have written gushingly of Wansink’s work.

I guess the point is that there’s a lot of this—indeed, I think there must be a lot that we haven’t heard about.

Put it this way: if someone at one of these universities had been caught robbing a bank, would his colleagues have all backed him up like that? I assume not! I assume they’re backing him up because, at some level, they don’t care about the process of science, that their attitude is something like 50% careerism and 50% that they feel they already know the answers about criminology or whatever, and they view activities such as faking your data as just the sort of annoying things you need to do to get papers published.

That’s a general issue, and I think we learned something from considering this example carefully. This kind of cavalier attitude to what is pretty much the purest sort of scientific fraud makes it clear how little the scientific process is valued, at least in some quarters in academia. And I don’t think it’s just Florida State University, given the inclination we’ve seen of the University of California, Arizona State University, Harvard, Cornell, etc., to look away from clear cases of scholarly misconduct.

It does seem to be the general view of many administrators, journal editors, and prominent scientists that the two purposes of the scientific process are (a) to advance people’s careers, and (b) to confirm people’s theories. I’m guessing that most scientists don’t think that way, but this silent majority does not have the power and organization and inclination to fight the establishment.

This kind of sounds like a conspiracy theory but I don’t mean it to be. My guess is that all the participants in cargo-cult social science—from the fabricators to the powermongers to the enablers to the just plain incompetents—sincerely believe they’re doing good science, in the all-important (to them) sense that they’re building beautiful careers and confirming beautiful theories. No conspiracy needed. They’re just doing what’s right, and occasionally they have to waste some of their valuable time swatting away people like Justin Pickett and me and other data thugs, methodological terrorists, and Stasi replication police who insist on some letter-of-the-law paperwork-laden view in which science is about openness, transparency, and data.

34 thoughts on “The view that the scientific process is ‘red tape,’ just a bunch of hoops you need to jump through so you can move on with your life”

  1. We are seeing this process, where someone who already has the answer is held up only by nitpickers who want reproducible evidence, played out in real life in our current pandemic. A dozen treatments that are sure to work with near-miraculous efficacy have been touted. Good idea without evidence = bullpuckey.

  2. Wouldn’t this be solved if the rewards for research focused on the process over the results? Is it even possible to establish a reward system based on that?

    • Very good question. I was heavily influenced by Frank Von Hippel’s take on the quality of science and the attendant careerism that is entailed in science.

      I don’t think we have much choice but to construct a more viable reward system. In its absence, we do have some scientists who are honorable and have succeeded in improving decision-making. They are often financially independent, have tenure, or are retired.

    • “Wouldn’t this be solved if the rewards for research focused on the process over the results?”

      That doesn’t seem like a great idea. If we think individuals like Wansink and Stewart are bad because they went through a flawed process and produced faked results, what would we think of the kind of poseurs who would become scientists if all one had to do was fake the process? Instead of science as a contact sport (e.g., goals for and against in hockey) we would get science as a glamour fest (e.g., costumes and makeup in figure skating).

  3. I don’t think most of them are convinced they’re actually doing good science or uncovering truth. I think most fraudsters are motivated by career reasons, whether it’s a desperate postdoc fudging data or a power-mad professor fabricating entire studies.

    • Adede:

      First, I’m not just talking about fraudsters; I’m also talking about people who make research errors and don’t seem to understand that these errors are important. Second, it’s my impression that the career reasons and the truth goals go together. For example, Brian Wansink. I’ve never met the guy, but I’m inclined to believe that he thought his research, publications, TV appearances, etc., were helping the world. Sure, a lot of the purported research was a scam, but I doubt that he thought he was in the business of painting rocks and selling them as jewels. I’m guessing that he really believed all the stuff about eating behavior and just didn’t have the patience to study it carefully. And, of course, if you condition on the assumption that these interventions really work, then arguably it’s moral behavior to cut corners on the experiments so as to be able to sell the ideas better. I’m sure he enjoyed the fame and fortune that came with all this, and maybe that was his primary motivation, but it seems plausible that he was a true believer too.

        • Mikhail:

          There’s some video . . . it looks like he really did build the soup bowl. It just doesn’t seem that he did the experiment quite how he described it. I think the idea of doing the experiment was fun to him too. Carrying out the experiment, that’s another story.

          It’s a lot easier to be Jules Verne than to be Captain Nemo.

  4. From my perspective, you are far too kind to the majority of scientists. It seems there is a general misunderstanding about the distinction between an experiment and a demonstration. This comes to a head with p-values; the goal of most power calculations is to create a convincing demonstration rather than a decisive experiment. Similarly with controls: the key is to cut off attacks [pro forma or well-meaning] on your demonstration rather than, for example, to provide external validity. So, many things are held ‘as constant as possible’ to demonstrate the effect of some factor with the smallest p-value.

    The question of how to animate curiosity and make it thrive is hard. It’s easy to say that the point of the experiment is first to convince yourself, then to convince others, and finally to gain fame and fortune. But this is a very dangerous logical path that ultimately sacrifices the science.

  5. I think, by and large, the people I’ve worked with do think they can discern evidence (or lack thereof) for their theory when examining the data. But they are also very concerned with what you might term not fooling themselves, as manifested by wanting to adjust for confounders, test for violations of model assumptions, and look for potential failures of randomization.

    That said, they most certainly view the process of getting published and getting future studies funded as “red tape” and “jumping through hoops”. I’d say at the point in analyzing a dataset where we (myself and the investigators) feel we have a good handle on what the data have to say w.r.t. our research aims, we have probably done about 20-30% of what will ultimately be required for seeing the results in print. And a great deal of the remaining 70-80% is simply indulging customary “red tape” that at least one reviewer will always demand. Including plenty of p-value decision rules.

    I doubt very much that I know the best practices for assessing the evidence while not fooling myself, especially when dealing with certain kinds of data that only rarely arise. But there are only so many working hours in a career, and the bulk of them absolutely must be spent doing the things required for publication and funding. Even if that time would have been better spent (in the abstract “doing science” sense) on approaches at odds with the same old, same old methodology and “red tape”.

  6. Frankly, most social scientists do not have adequate reasoning skills (statistical, mathematical, or otherwise). Combining a lack of ability with the skewed incentives embedded in the current social science industrial complex, you have the mess we see today. With the movement to be more inclusive and discount testing, this is only going to get worse.

    • I believe the Lifestyle Medicine movement, as spawned by Drs. Dean Ornish, Neal Barnard, Caldwell Esselstyn, John McDougall, and others, has the advantage of interacting with patients directly. They may revolutionize some dimensions of health care.

    • yyw:

      I believe the most important skill for good science resides in:

      (causal/measurement/critical reasoning) –> (statistical/mathematical reasoning and computation).

      In my estimation a substantial barrier to real progress is the belief that the statistical/mathematical will solve the problem of the causal/measurement/critical.

      • Indeed. But you made a mistake: “–>” should be “>” (i.e., greater than).

        Nowadays, it seems that “<” is used without much justification. I am not necessarily talking about social sciences (with the exception of economics, maybe).

  7. The real worry here is that people act on this type of research. I have very rarely done novel or new research but I have often used research results to help make policy decisions.

    I’m not sure that scientists understand that results in a scientific publication may mean that millions or billions of dollars are committed for what is hopefully a good result.

    Bad research results can kill a hell of a lot of people.

    • “I’m not sure that scientists understand that results in a scientific publication may mean that millions or billions of dollars are committed”

      Oh, they do though. But despite the fact that many are scientists, their beliefs are frequently the driving factor.

      I have read ***so*** many papers that lay out a perfectly good or even very strong case *against* some hypothesis, then turn right around in the conclusions and claim that their data *supports* that hypothesis. It’s bizarre. It’s like they don’t even understand what they themselves wrote. Or like some reviewer told them to conclude otherwise.

      If anyone ever read the old NPR blog 13.7, Stuart Kauffman did this a lot. He’s a brilliant guy. He’d make a brilliant argument, then do a 180° about-face and conclude the exact opposite of what his argument supported, conforming to his beliefs. Amazing.

  8. Andrew, thanks a lot for this great post! I fully endorse your views. See below for another example.


    I am working together with others to retract a fraudulent study on the breeding biology of the Basra Reed Warbler. This study is fraudulent because the raw research data do not exist. See https://osf.io/5pnk7/ for background (or ask G). A manuscript about this topic was submitted on 9 October 2019 to Learned Publishing. The correspondence with Learned Publishing about this manuscript was finished on 4 March 2020. An article about the experiences of the Editor-in-Chief with the processing of my manuscript was published on 29 April 2020 at https://ese.arphahub.com/article/52201/ (Smart 2020). I was not informed (nor later on).

    Copy/pasted from an unpublished comment, a draft, on Smart (2020):

    “Abstract: Communication between authors and editors about a submitted manuscript is strictly confidential. This confidentiality was breached in a recent peer-reviewed article in the Scopus-indexed journal European Science Editing (Smart 2020). The article gives no motive for breaching this confidentiality, and careful reading revealed other ethical issues. These include undisclosed conflicts of interest, an unnoticed change in the Version of Record, and shortcomings in regard to informed consent. Requests for access to data to support statements in Smart (2020) remained unfulfilled. The article contains no views from the author of the submitted manuscript. The findings indicate editorial shortcomings and emphasize the need for improved communication between editors of peer-reviewed journals and readers.”

    (…)
    “The EiC was contacted for the first time on 10 July. Two out-of-office auto-replies are thus far the only responses. Correspondence about a draft with all members of the editorial board of ESE yielded only a bounce from the defunct e-mail account of one of them. The correspondence with the EiC and with others at ESE included requests for access to data. The data were not received.”

    (…)
    “An earlier version was submitted to ESE [the journal which published Smart 2020] on 2 August 2020. A desk rejection was received on 22 August. Tom Lang, Associate Editor, wrote in his rejection letter: “we cannot independently verify that you were the subject of her article, so we have no way of evaluating these claims.” On 24 August ESE was sent a request for “an anonymous version of the full set of the raw research data of Smart (2020) to substantiate the claim that I was not the topic of Smart (2020).” There has thus far been no response, even to reminders (through other channels). Responses on 15 and 17 September from publisher Pensoft reveal that this request was received in good order. Correspondence from the President of WAME, from Pensoft, and from the University of Rijeka lists no evidence that I am not the subject of Smart (2020). So the journal ESE, author PS, and others, including the President of WAME, publisher Pensoft, and the University of Rijeka, have thus far been unable to substantiate the claim that Smart (2020) refers to another individual. There is thus no evidence that I am not the subject of Smart (2020).”

    (…)
    “PS does not explain the need to use personal judgements instead of providing readers of her article thorough insight into the scientific shortcomings of the Basra Reed-Warbler manuscript. This is remarkable, because PS states in her article that, during the processing of my manuscript, she was in close contact with Wiley, the publisher of LP. PS also has a side job as a researcher at Wiley. This publisher has a huge number of high-quality journals within the field of research of the Basra Reed-Warbler study. It would thus be easy for PS to ask editors or reviewers of these Wiley journals for views about the scientific merits of the manuscript. It is therefore difficult to interpret the meaning of the sentence: “The root of the problem with this author was that our decision was based on the submitted article and not the subject of the article”.”

    (….)
    “The online ‘submission preparation checklist’ of ESE reveals that it was mandatory for PS to upload, during the submission of Smart (2020), a signed and dated ‘EASE Ethics Checklist for Authors’. This checklist contains several obligatory declarations which PS needed to tick. One of these obligatory declarations is: “Results of this study have been interpreted objectively. Any findings that run contrary to our point of view are discussed in the MS.” PS has not reported in Smart (2020) that I was not contacted for my point of view. It is thus difficult for readers to judge whether the findings in Smart (2020) were interpreted objectively.”

    ——————-
    See https://www.wame.org/wame-executive-board-and-committees for background about WAME, and see also https://publons.com/publon/31661562/ (thus far no response to various queries for access to the raw research data of Smart 2020). The Academic Editor of Smart (2020) published on the same day, 29 April 2020, together with Smart, another article in this journal: “ESE and EASE call for high standards of research and editing”, see https://ese.arphahub.com/article/53230/

    Comments are highly appreciated.

  9. I know a young person with a PhD in a sub-field of biology who left the field and is no longer doing any work in science after finding that experimental data for a paper in Science (the journal) had been faked, reporting it, and having his evidence brazenly ignored. (You can add Emory University to your list of places where such things happen.)

  10. And yet, if you suggested, as a sensible consequence, a fairly draconian reduction in government funding for academia there would be an outcry against this “anti-science” movement.

    • I was going to leave a similar comment.

      It’s fairly simple really: all this stuff is seen as an annoyance or in the way because it’s become a game at some level. When academia is so heavily incentivized by profit for universities and research centers via indirect funds, and fame or reputation, and so forth, of course that’s what’s going to happen. People engage in questionable research practices because it pays off in terms of money and attention.

      The cargo cult metaphor has sort of outworn its utility, I think, because most of the rote “red tape compliance,” following the letter rather than the substance of the law, etc., isn’t due to lack of understanding; it’s due to willful negligence or maliciousness, often incentivized by administration and other power figures.

      Punishing egregious wrongdoing isn’t going to solve the problem, as most of the problems fall into this gray area, and the punishment would dwarf the payoffs elsewhere. You’ve got to change the entire incentive structure, which isn’t a statistical or publication issue.

      Whenever people start talking seriously about doing this, like legislatures proposing to cut indirect funds or bring them into the spotlight, it’s as you say: people scream anti-science. If anything it’s the opposite.

  11. (1.) For most people in academia, and for most individuals aspiring to become a part of academia, perhaps the current incentives do not permit “caring about the process of science”? Perhaps only the most exceptionally gifted individuals who have made profound and widely acknowledged scientific contributions, and who accordingly do not have to worry about employment or funding, can safely afford to care about scientific ideals? But if you are merely an ordinary PhD student or assistant professor, perhaps adhering to sound scientific ideals will just inevitably lead to unemployment? Maybe individuals who care too much about science do not even get admitted to a PhD program (a minimum requirement is usually to, e.g., chase particular pieces of noise for your master’s thesis and tell a convincing story based on the results). Maybe conformists and/or careerists, with usually very limited interest in the scientific process, are the most popular candidates as RAs and PhD students? Perhaps many researchers have “successful” careers partly because they are individuals who prefer making confident claims based on scant empirical evidence over adequately admitting uncertainty or signaling doubt? And maybe the situation is like this at most universities, and also in a lot of the natural sciences?

    (2.) On a different note, I would love to hear Andrew’s take on this one:

    https://www.econjobrumors.com/topic/harvard-ap-quietly-drops-reference-in-top-5

  12. The only way out of the trap is to start rewarding people for shooting down studies. Interest your local billionaire in funding university chairs/journals of disconfirmation. But short of making sure *someone’s* career is substantially advanced by shooting down the claims, I just don’t see how you avoid the amplificatory effect of these concerns, the desire to be liked and respected by colleagues (few people like you if you hurt their career/project), and the ideological pressure to have the right views in many areas.

    But the second you have career incentives to do so, you blunt the last two (“hey, nothing personal, I just need more publications”) and you have real pushback against bad science. Mere peer review isn’t, and never was, capable of dealing with this issue, and you gotta start somewhere.

  13. Maybe I’ve not lurked long enough on this site, or I just don’t remember, but was there already a post describing the scientist who ignores data and methodology as the academic equivalent of the “Cowboy Cop” (https://tvtropes.org/pmwiki/pmwiki.php/Main/CowboyCop)? That would make the scientists who insist on both the “By-the-Book Cops” (https://tvtropes.org/pmwiki/pmwiki.php/Main/ByTheBookCop). That certainly fits with the scientist-as-a-hero narrative.

  14. As a criminologist (or at least someone with an MA in criminology), I lean towards thinking that the majority of what goes on in the discipline is towards the very bottom end of the spectrum of bad scientific behaviour (weak statistical analysis), although the events Pickett describes seem to be at the top end.

    From my experience learning/teaching Criminology at the University of Auckland, this is the process that many students go through:
    Undergrad: No requirement to do any stats papers (not even stats 101)
    Post-grad: the paper on methods has 1-2 weeks total on quantitative analysis (the year I took it, one of the two weeks was skipped)
    Thesis: most do qualitative or theoretical work.

    Many of the criminologists then have PhDs with almost no knowledge of stats. Most of the research they publish will be about people’s experiences of the criminal justice system. This work obviously has value, but when it comes to the questions of “what works” and “which policies should we implement,” their skills are woefully lacking. I’ve seen several professors (lecturers in NZ terms) claim something like a 10% decrease in X crime after Y occurred, with seemingly no awareness that the fact that there were only 40 of those crimes in year 1 and 36 in year 2 means that it’s probably just noise (see the sketch below). Many of the lecturers I’ve worked with would probably think that it was a good sign that Y was reducing crime.
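
    A quick back-of-the-envelope check makes the point concrete (a minimal sketch, assuming the yearly counts are roughly independent Poisson draws; the counts 40 and 36 are the hypothetical ones from above):

        # Is a drop from 40 to 36 crimes distinguishable from noise?
        from math import sqrt

        y1, y2 = 40, 36
        se = sqrt(y1 + y2)     # sd of the difference of two independent Poisson counts
        print((y1 - y2) / se)  # about 0.46 standard errors: consistent with pure noise

    So the observed 10% drop is less than half a standard error from zero, exactly the kind of gap that gets over-interpreted as a policy effect.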

  15. Good article. I’m sure examples abound, but if you wanted another, check out the ongoing so-called PACE-gate scandal. A group of psych researchers run a clinical trial (that goes on to inform public health policy) and wouldn’t you know, their preferred intervention comes out on top. When people start digging, the researchers get cagey and fight like hell to keep their data hidden and dodge any and all criticism.

    https://doi.org/10.1177/1359105316675213
