Trash talkin’

Someone sent me an email:

I thought you would get a kick out of these figures from a new paper [appearing in a top economics journal].

I replied: Wow—this is like a parody of econ. But, then again, econ is a parody of econ, isn’t it?

[To clarify here, by “econ,” I don’t mean the entire field of economics, which is useful in so many ways. Rather, I’m referring to grabby papers in “top 5 journals” that attempt to explain vast areas of human experience with some combination of trivial math, regression analysis, and p less than 0.05.]

Anyway, my correspondent responded:

The only thing missing from that paper is a formal model.

To which I replied: the only thing missing is a proud declaration of how they are taboo-busting rogues allying themselves with Joe Sixpack in the eternal battle against the cultural elite of sociologists who pretty much run the world.

I’d link to the original article but what’s the point.

P.S. Econ is econ and is special in its own way, but Sturgeon’s law applies universally. Most published statistics articles are completely irrelevant to the world, even to whatever application area they are nominally targeting. Bad statistics articles are irritating in a different way than bad econ articles, which in turn are a different sort of irritating than bad poli sci or sociology articles. It’s an interesting thought: we tend to compare different fields based on the different characteristics of their best work, but another dimension is to compare the different characteristics of crappy but well-respected work in each field.

16 thoughts on “Trash talkin’”

  1. “crappy but well-respected”

    It seems to me that both of these conditions are most often met when researchers reach beyond their field of expertise for additional validation. Their peers within their field are not used to evaluating outside information so they tend to be unduly impressed. Do peer reviewers ever respond to a paper within their field with “sorry, this paper is so far beyond my expertise that I am unable to understand it?” That would be admitting ignorance. Instead, they write “clever use of a novel method.”

    As long as the Jesus’ Wife author was debating whether a word on the papyrus was a penurious jussive or a profligate jussive, how could we say whether her opinion was crappy? But when she started putting her own spin on the cold hard facts that emerged from the forensic analysis, things got crappy really fast.

  2. Mathijs, Clyde, Humbug:

    The point of the post is to convey some of what it feels like to see this sort of thing, over and over again, for years and years. I’m not trying to draw any conclusions or to convince anyone of anything; I’m just sharing what it feels like.

    I’ve written dozens of posts discussing particular articles, and I think that has value too, but discussion of these posts typically turns to specifics of the articles in question. This time I wanted to move away from that.

  3. I’m about 95% sure this is it, in large part based on the quality of the graphs (in rot13): Gvgyr: Sbyxyber. Wbheany: Dhnegreyl Wbheany bs Rpbabzvpf

    One of the editors of that journal is known to have said that he sometimes accepts articles that he thinks will get a ton of citations, even if he thinks the article isn’t very good. Introducing a dataset that lots of people will use is going to do that.

  4. Why doesn’t the field of statistics and the ASA call out such misuses of its methods? This blog helps, perhaps. I get it. And people write articles about misuses. But how long is this going to persist? In higher education, especially at institutions that aren’t flush with cash reserves, there is competition across disciplines for resources, quite possibly like never before. Should disciplines that churn out disproportionately more misuses than others keep getting research support? Faculty get release time to generate garbage “research,” and the institution has to hire someone to cover their classes. If it’s not release time, then the research is counted as part of their workload, and that too comes with a cost. Those are funds that, in the aggregate, could and should be put toward disciplines, or individuals within disciplines, whose research has disproportionately more credible standing, no? What is the return on investment for continuing to support flawed research that isn’t seen as flawed because it appears in a peer-reviewed journal?

  5. Andrew,

    Not directly related to the post, but still an issue that many people I’ve talked to are interested in:

    Do you think a journal is “obliged” to publish corrigenda for papers it has published?

    I submitted to ECMA a corrigendum to a well-cited theory paper it had published. Importantly, the mistake subsequently spread into follow-up papers. ECMA rejected it: “Not enough contribution, try lower-ranked journals.”

    This seems strange – to allow your paper to mislead the profession. But maybe I am missing something.

    Thank you
    DM

  6. While I can recognize a bad economics article, I am not sure what a bad statistics article published in, say, JASA, Annals, or JRSS-B looks like… I am not a statistician (though I have published a couple of papers in one of those journals); I am just curious about your opinion…

  7. Hilarious. I just looked at recent QJE articles and found one about identity that was making large claims in typical “econ pontificating” QJE style. I opened it and wow, those are bad graphs. But then there was a formal model, so I guess this wasn’t the article. Which just confirms Andrew’s point about not mentioning the paper. There is so much garbage. What is even the point?

  8. If this is the folklore paper in the QJE, the graph in question is probably the binned scatter plot with a quadratic fit (i.e., figure 3, panels B and C). There are previous posts on this blog where Andrew interpreted binned scatter plots as scatter plots with a very small number of points, so I suspect this is yet another example of the same.
