Battle of the open-science asymmetries

1. Various tenured legacy-science yahoos say: “Any idiot can write a critique; it takes work to do original research.” That’s my paraphrase of various concerns that the replication movement makes it too easy for critics to get cheap publications.

2. Rex Douglass says: “It is an order of magnitude less effort to spam poorly constructed hypotheticals than it is to deconstruct them.” That’s one of the problems that arise when we deal with junk science published in PNAS etc. Any well-connected fool can run with some data, make dramatic claims, get published in PNAS, get featured on NPR, and so on. But it can take a lot of work to untangle exactly what went wrong. Which is one reason we have to move beyond the default presumption that published and publicized claims are correct.

It’s interesting that these two perspectives are exact opposites. We have two models of the world:

1. Original researchers do all the hard work, and “open science” critics such as myself are lazy parasites; or

2. People who write and publish junk science papers are lazy parasites on the body of healthy science, and open science critics have to do the hard work to figure out exactly what went wrong in each case.

Which model is correct?

I guess both models are correct, at different times. It depends on the quality of the science and the quality of the criticism. Also, some junk science takes a lot of work. Brian Wansink wasn’t just sitting on his ass! He was writing press releases and doing TV interviews all the time: that’s not easy.

In any case, it’s interesting that people on both sides of this divide each think that they’re doing the hard work and that the people on the other side are freeloaders.

18 thoughts on “Battle of the open-science asymmetries”

  1. Having done some of each, I’d say they’re hard in different ways. In a lab, there are countless logistical nightmares: helium leaks, bad solder joints, and so on. But in writing a critique you know that a big fight is coming, sometimes with lawyers attacking you, and you can’t afford even a little slip.

    Reminds me of a joke from my dad (an experimenter):

    A theoretical paper is believed by no-one except its author.
    An experimental paper is believed by everyone except its author.

  2. I am working through Oscar Wilde’s “The Critic as Artist”. He has a lot to say about this question. It is hardly a new one, and it needs to be resolved, yet again, every time someone’s feathers get ruffled. Wilde comes down on the side of the critic, who does most of the work: creating an environment in which ideas can develop and grow, and a community in which art can flourish.

  3. I’d skip both models, because both are trying to solve it via identity politics: Identifying one group as good and one as bad. That’s inherently BS, and there is nothing to salvage from the wreckage.

  4. To your point, anecdotally: the longer my invited review of a wanna-be journal article is, the more likely the article is to get rejected, deservedly. Of course, I’m usually one of three reviewers, sometimes more, so it’s not just me.

    I also learned to avoid my students’ proofs of something like “computability” – so I’d applaud their efforts and encourage them to try to persuade their peers instead. Finding a flaw in a proof can be very, very difficult.

  5. I don’t think it’s hard to deconstruct poor science.

    Usually the problems are evident in the first few paragraphs, often even in the abstract. The problem is that many people are completely oblivious to the idea that the technical analysis depends on the accuracy of the assumptions. They want to argue the technical minutiae instead of the broader concepts, because technical minutiae have a cookbook formula to fall back on: “I looked up a recipe to bake a cake. I put in flour and sugar. Therefore I must have baked a cake.”

    People need to shift the argument back to fundamental concepts and away from the technical details. That’s the solid ground.

    • I think it is _sometimes_ easy to deconstruct poor science, but there are plenty of examples — including many on this blog — in which people have put in many hours of work investigating a result, only to find that it was wrong due to basic statistical mistakes or data errors. If you look back through the ‘zombies’ category you will find some examples, although most such posts are in other categories and you’ll have to sift through in order to find them.

      • OK, I agree with that; I exaggerated. There really are a lot of situations like the ones you describe. And there are variations on that theme: research papers where you have to dig deep to find statistical or mathematical errors, but also books like the Matthew Walker book, where there are just so many claims that following all the references is a lot of work.

        But I also think there’s more crummy science that’s easy to deconstruct than you allow. Sometimes invalid assumptions just become popular and get widely used. Harking back to my old favorite: numerous ridiculous claims have been made about human nature based on silly tests with a few tens of undergrads (power pose, for example, but also many, many papers in economics).

      • The other thread reminded me of another incorrect assumption used widely in social sciences:

        Education research often – almost always – implicitly assumes that students are immutable objects operated on by the outside world. They are born with a learning machine inside them that never changes, and it’s the object of education research to figure out how that machine works.

        Actually off the top of my head I can think of several more dubious assumptions that have been used in education research, but I don’t know how widespread they really are.

  6. Here is one data point: I’ve been an expert witness often, and it is much easier to write a rebuttal than to write opening testimony. I think that does say something about the relative ease of doing original work versus critiquing it. There is one difference, however: testimony takes place in a format where cross-examination is expected and needs to be planned for. Publication does not necessarily follow this format – it is often difficult to get a critique acknowledged, let alone responded to. I think that shifts the balance somewhat.

  7. A different perspective combines both of these: a person who publishes original research, and then follows up with a replication study of the same question or hypothesis or idea. I’m currently doing this: some of the original results did not replicate, but others did. It has been a tremendous amount of work to figure out why in both cases. I’m looking forward to reviewer comments on the replication:

    Will they see me as a lazy parasite who accepted incorrect conclusions in the original research AND as a lazy parasite who is trying to get a cheap second publication out of the same original idea?

    Or will they see me as a hard-working original researcher who also bravely (or stupidly) committed a lot of time and effort to figure out what went wrong (and what went right) via a replication?

    Obviously I hope for the latter but it will be interesting to find out. In my area of research replication studies are still very rare so I don’t have much to base a prediction on. Maybe other commenters here have a prediction?

  8. How about an alternative formulation that I think makes the distinction less contentious: is it more important to identify puzzling things in the world or to explain them? Here’s an example – overall mortality in America increases in economic boom times and decreases in recessions. Obviously it is important to know that fact, and it is a real contribution to science to notice it, even if whatever ideas you throw out for why it is happening are misleading (e.g., mortality rates are driven by old people, and no amount of change in commuting patterns will sufficiently move the demographic needle to explain it). But it is also important to figure out the mechanisms behind such a relationship – why does that happen?

    Original research and criticism work similarly. The original research itself becomes a puzzle (here is something we estimated in the world) and that has value (and more value the more consistent that relationship turns out to be in the world). But unpacking the reasons for what generated that result, be that something real in the world or a statistical oddity, is also clearly important science.

    In general, I’m with Andrew’s overall point: It depends on the quality of the science and the quality of the criticism. Good, important science latches on to broad empirical regularities. Good, important criticism reveals to us how those regularities came into being (even if the answer is “statistical mirage” or “terrible data” or “improbable sample”).

  9. I’d draw a distinction between a critique and a replication–a critique can be low-effort, but a replication is a “walk four miles in the other person’s shoes” kind of thing. Even if you’re “merely” re-doing an analysis based on someone else’s data.
    So doing that work, and then maybe figuring out and explaining why the results differ is a worthwhile contribution to science.

    And isn’t that what matters?
    The difference between working on a production line and doing science is that it’s not the amount of work you do that’s important, it’s the quality: we’re not judging science by the effort that went into it, but by the insight it produced and the impact it had on the field. Failed replications or discovered errors in high-profile studies impact the science, and therefore should be published.
