
The greatest impediment to research progress is not impediments to research progress, it is scientists reading about impediments to research progress

My short answer is that I think twitter is destructive of clear communication.

Now I’ll give the question, and I’ll give my long answer.

Here’s the question provided by a reader:

Just wondering what you thought of Brian Nosek’s recent comment on twitter,

“The biggest impediment to research progress is not fraud, it is all scientists reading about fraud.”

I [my correspondent] would probably argue that the biggest impediment has more to do with scientists caring more about publication than they do about having a better understanding of the world. I think this manifests itself in a lack of interest in properly understanding the statistical tools they use, and then going ahead and misusing them. But perhaps there is some delusion here too… that really smart scientists really believe their theories explain the world and that communicating the evidence for their theory is merely a pro forma action to satisfy some annoying requirement for publication.

If this Nosek twitter post results in a post on your blog, I’d appreciate being anonymous.

My reply: First off, I don’t think that Nosek has the sort of scientific arrogance where he believes his theories “explain the world.” Recall that Nosek is an author of the wonderful “50 shades of gray” article. So I think he’s pretty aware of uncertainty and the need for open criticism, pre- and post-publication, and I think his comment is really more along the lines of:

Sure, fraud is bad, but don’t kid yourself that if fraud disappeared, that science would be healthy. Reading and talking about fraud may be fun, but it can distract us from more serious problems with the process of science research and communication.

That’s not what Nosek said, but it’s what I suspect he was actually saying.

To step back a bit, I think the greatest impediment to interpreting Nosek’s remark is . . . Twitter. I’m really down on the idea of trying to communicate in 140 characters or less, and I much much much prefer a longer format where there’s space to explain your assertions. Recall our discussion a few months ago with Richard Tol. It’s easy for him to be a tough guy on Twitter, but snappy comebacks aren’t enough in a more freeform discussion.

OK, I don’t really know what Nosek meant by his tweet. But, just to continue the discussion a bit, yes, I agree with my correspondent that the incentives involved in publication are a problem, and I think these problematic incentives have increased in recent years with the new norm of the 3-to-6-page “tabloid-style” article which motivates the maximization of hype—but, but, but . . . it’s more than that. I think there’s a larger problem in that researchers expect the impossible: they expect novel discovery as a matter of routine, just turn the crank and learn some general truths about human nature. It would be great to have an improved publication process, but, beyond that, I think we have to get beyond the current ideal of social science research, which seems to be the following: A short paper with a result that is (a) surprising, counterintuitive, stunning, etc, (b) rigorous, causally identified, statistically significant, etc, and (c) predicted by an existing theory. For obvious reasons, it’s not so hard to observe (a) and (c) at the same place at the same time, and so it’s natural to cut corners on (b). But (a) and (c) are pretty flexible too.

Umm, given recent news, I guess I should add: (d) based on real data.


  1. Brian Nosek says:

    Hey Andrew. Actually it is a joke about the loss of research productivity as a function of the collective obsession with the LaCour affair. Brian

  2. artkqtarks says:

    The other day, I came across a comment by Michael Eisen to a blog post by Lior Pachter that seems relevant. To give you some context, the blog post in question was motivated by Lior Pachter’s criticism of a Nature paper written by Manolis Kellis, Bruce Birren, and Eric Lander published in 2004. Manolis Kellis left comments defending the paper. Eric Lander (co-chair of President’s Council of Advisors on Science and Technology!) also left a comment.

    Michael Eisen writes:
    “However, I find the responses from Manolis and Eric to be entirely lacking. Instead of really engaging with the comments people have made, they have been almost entirely defensive…”

    “To me, their responses are more evidence that science in some quarters of our community has ceased to be a pursuit of new knowledge, and has instead become nothing more than a competition for points on the battleground of high-impact publication. If you are pursuing the truth, defending decade old statements that are clearly wrong makes no sense. It only makes sense if you view this kind of commentary as a threat not just to some long-buried asterisk on your CV, but to an approach to doing science that routinely produces this kind of “result”.”

    • Martha says:

      Thanks for the link to the Pachter blog. Interesting, in a somewhat depressing way (depressing in that Pachter is so focused on p-values). But this comment by jchoigt seems to hit the nail on the head:

      “I see here an issue of fundamental scientific reasoning. Kellis and Lander evidently conducted an analysis that disproved one hypothesis (that both copies of a duplicated gene will have similar rates of accelerated evolution) and then claimed the data supported another model (Ohno). I refer to Chamberlin’s 1890 paper on multiple working hypotheses and Platt’s 1964 paper on strong inference – science can properly disprove hypotheses, but it’s difficult to support any particular hypothesis unless you disprove all reasonable alternative hypotheses. So not having disproved the null model of independent evolution of duplicated gene pairs, the Kellis and Lander analysis cannot support any particular model.”

      My personal opinion is that the whole idea of trying to show that one single “mechanistic” process explains a phenomenon in evolutionary biology is silly: there are so many ways evolution can work, because there are so many junctures at which what happens is stochastic, dependent on factors such as the details of what happens when a single cell divides, the huge number of cell divisions going on in any short time span (let alone a long one), and the multiple possibilities for interaction with the ambient ecosystem.

      • Pinko Punko says:

        Martha, the discussion on that Pachter post is quite dense but extremely interesting. The point of the p-value there is to quantitatively assess a claim that many take to be empty, the suspicion being a lack of rigor or depth behind the surface of the statement. The presumption is that under many scenarios the claim would not survive a statistical analysis. There is also personal history: Pachter has called out Kellis before on other issues, though in each instance he makes an interesting and well-elaborated scientific case, even if in blunt or strong words. The idea of understanding evolutionary processes through the retention of duplicated genes is very interesting. The models are broad, and while cases will probably be found for both of the major, simple models, there could still be a dominant trend; that is what these studies try to discern.

  3. Wayne says:

    I think another, subtle factor is the cooption of science as a means of advancing a philosophy. The philosophical goals may be laudable, but the science will not be. It leads to a potent culture of confirmation bias and forked paths.

  4. John Mashey says:

    But Richard Tol tweets:

    “The difference between LaCour and Cook is the difference between Green and Lewandowsky.”

    Cook is a reference to this paper, and see Tol’s Rejected Comment.

    • Chris G says:

      This is getting off-topic but the “97%” statement is a pet peeve of mine. It’s argument by authority. And I hate that. 97% of experts might have their heads up their @$$es. (Asch Conformity Experiment, anyone?) What’s the reality check on that?

      What matters is that fundamental cause and effect is well understood. (Two refs.) From the standpoint of enacting better public policy, I imagine it’s better that there’s 97% consensus re the basic causal relationship, but if the figure were only 9.7% or 0.97%, that wouldn’t be cause to disbelieve the science or, given the science is what it is, to rethink what constitutes good public policy. Low consensus would simply be an indicator that there’s much more education to be done.

      • Andrew says:


        If the number were 9.7% or 0.97%, I’d start to wonder what was going on. I agree that the 97% number is not any kind of proof, but it does supply some information.

        • Chris G says:


          It does supply some information. My complaint with “consensus” is that it doesn’t distinguish between political position and technical understanding. It doesn’t address the basis of belief.

          From the standpoint of formulating policy, 97% agreement seems like it should be a positive, but a high level of consensus is neither necessary nor sufficient. One should argue for policy based on its merits regardless of whether it will be popular upon introduction. If it isn’t, then the political challenge is to win sufficient support for the proposed policy to get it enacted. (Towards that end, the 97% consensus among scientists has had no effect. We have no national climate change mitigation strategy and are not at risk of developing one any time in the near future.) For the purpose of achieving a political end (which, in the case of climate change, would also have a tangible physical impact) one can construct a narrative without reference to consensus, scientific or otherwise. Rather than use “consensus” to imply scientific understanding, I’d prefer a more direct and stronger statement re the science itself.

          • Eli Rabett says:

            Chris, without getting too deep into the specifics of this case how do you judge the merits of a policy? If you yourself are not expert in the area who do you depend on? If you wish this is Steve Schneider’s real double bind, that experts must be careful not to overemphasize caveats when there is a clear conclusion.

            • Chris G says:


              A few things:

              1) For the moment I’m going to pass on offering my thoughts on judging the merits of a policy. I think doing so would be biting off more than I’m prepared to chew.

              2) With respect to getting smart on a subject that I’m not familiar with, I read a lot. My experience (painting with a very broad brush here) is that there are far more “experts” than experts in a field. It would be fair to say that I don’t care about what people in general think; I care what people with informed opinions and insight think. How does one determine who has informed opinions and genuine insight? Read a lot. Some of what you read will be great and some will be crap. Try to apply what you think you understand. Form testable hypotheses, test them, and use the results to refine your understanding of the world. Identify people working in the field who’ve been doing that for a long time.

              3) My original comment was more about the significance of consensus in developing a plan for political action. It’s nice to have, but I don’t believe it’s either necessary or sufficient. With respect to climate and environmental policy, scientists may have unique technical insights which aren’t well-known or understood by the general public or even by other scientists. Lack of consensus doesn’t mean that they should hold back on advocacy if doing so is appropriate based on their knowledge. One should act on the insights one possesses. One should also attempt to educate and be open to insights from others. (“When the facts change, I change my mind.”) You’ll screw up periodically if you act on incomplete information, but that’s life. You’ll just make a different set of mistakes if you don’t act. Make the best decisions you can based on the information available to you. Make course corrections as appropriate.

              4) As far as climate policy goes, Pierrehumbert captured what should be our driving consideration: “The science is settled enough.” While we do not possess the capability to make high-fidelity (deterministic) predictions, we can put plausible bounds on the problem. Perhaps some things will break in our favor, but the downside risks should be sufficient motivation to take action. I have little kids. The thought of them inheriting a world which is a warmer version of The Road is deeply disturbing to me. Do I think that’s a foregone conclusion? No. But I do think that it’s a distinct possibility. Good policy mitigates the risk of civilization ending badly n decades down the road. I’ve done my fair share of SW and LW radiative transfer calculations. (NB: RT calculations for work, not as a hobby.) I have a reasonable understanding of atmospheric circulation and what we do and don’t know about ocean-atmosphere coupling. It is difficult for me to conceive of a scenario where tossing additional blankets over the planet plays out well. Perhaps all will turn out fine, but who in their right mind plays Russian roulette, let alone keeps adding rounds to the cylinder? It’s nuts. Good policy will involve committing to cutting CO2 emissions ASAP so as to minimize the number of rounds in the cylinder.

              5) I agree that caveats should not be overemphasized.

              6) I had the opportunity to hear Bill McKibben speak yesterday. He’s a good speaker.

              • Eli Rabett says:


                You take a lot of time to build up expertise in the area. The problem is that most people don’t have the time, or don’t want to spend the time, so how can they develop a rubric that can rapidly deal with whatever the issue is? The usual answer: they use their political/religious viewpoint, identify others with the same viewpoint, and follow them.

  5. Jack PQ says:

    More or less apropos: all the talk about peer review is about classic peer review vs. open peer review. But what if we had a hybrid? Editors of a journal have a list of, say, 300 reviewers (some added, some removed over time). Any paper submitted can be reviewed by anyone among the 300. Editors look at all the reviews before making a decision. Maybe reviews could be compartmentalized, or they could be open to each other (pros and cons). Some papers will attract 10 reviews, others 1-2. If none, then the editor assigns a couple, or desk rejects.

    I don’t know if this approach would work but I’d like to see a journal experiment with it.
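To make the proposal above concrete, here is a minimal sketch of the routing logic it describes; the pool size, the fallback count, and all reviewer names are invented for illustration, not taken from any real journal system.

```python
import random

# Hypothetical sketch of the hybrid review model: a journal keeps a standing
# pool of reviewers, anyone in the pool may volunteer a review, and the
# editor assigns reviewers only if nobody volunteers.

POOL_SIZE = 300   # size of the standing reviewer pool (from the proposal)
MIN_REVIEWS = 2   # how many reviewers the editor assigns as a fallback

def collect_reviews(volunteered, pool, assign_count=MIN_REVIEWS):
    """Return the reviews an editor would see for one submission.

    volunteered: list of (reviewer, review) pairs from pool members who
                 chose to review the paper on their own.
    pool:        the journal's standing reviewer list.
    """
    # Only reviews from people actually in the pool count.
    reviews = [review for (who, review) in volunteered if who in pool]
    if not reviews:
        # Nobody volunteered: the editor assigns a couple of reviewers.
        # (The desk-reject branch is omitted here.)
        assigned = random.sample(pool, assign_count)
        reviews = [f"assigned review from {who}" for who in assigned]
    return reviews

pool = [f"reviewer{i}" for i in range(POOL_SIZE)]

# A paper that attracts volunteers goes straight to the editor...
popular = collect_reviews([("reviewer3", "major revision"),
                           ("reviewer41", "accept")], pool)

# ...while an unnoticed one falls back to editor assignment.
unnoticed = collect_reviews([], pool)
```

One open design question the sketch makes visible: whether volunteered reviews are shown to other pool members before the editor decides, which is the compartmentalized-vs-open choice mentioned above.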

  6. Eli Rabett says:

    Let Eli defend twitter a bit. It is well known that good elevator speeches require more thought and creativity than hour-long seminars. A good twit is a link to something TL;DR that actually gets people to read the longer post, or else a one-liner which causes people to think.

    That being said, these are few and far between and more common and pernicious are tweets that are built on false premises.

    • Pinko Punko says:

      I have found value in reading people’s tweets as of late, but the posturing and playing to various crowds (which we have already seen in the first several waves of internet communication) leaves much to be desired. Certainly Icharday Oltay type individuals are entirely devoid of light and much more methane-derived heat.

      • Andrew says:

        Hey, Tol did a drive-by to this blog the other day, leaving an incoherent comment accusing me of “bitching” about something. I think all this tweeting has dulled his edge.

      • Anonymous says:

        What I find kind of frustrating about tweets is that a successful one plays to two audiences.

        For those with deeper working knowledge of the subject, it captures something that’s non-obvious and true. Those who aren’t as deep misinterpret it in a way that resonates with their prior beliefs.

        I think some of the ‘garden of forking paths’ discussion has gone down this route. For the ignorant, it reinforces their beliefs about the importance of using multiple comparison adjustments and preregistration. I don’t think that’s the point Andrew intends, but it has this dual resonance characteristic to it.
