“The writer who confesses that he is ‘not good at attention to detail’ is like a pianist who admits to being tone deaf”

Edward Winter wrote:

It is extraordinary how the unschooled manage to reduce complex issues to facile certainties. The writer who confesses that he is ‘not good at attention to detail’ (see page 17 of the November 1990 CHESS for that stark, though redundant, admission by the Weekend Wordspinner) is like a pianist who admits to being tone deaf. Broad sweeps are valueless. Unless an author has explored his terrain thoroughly, how will he be reasonably sure that his central thesis cannot be overturned? Facts count. Tentative theorizing may have a minor role once research paths have been exhausted but, as a general principle, rumour and guesswork, those tawdry journalistic mainstays, have no place in historical writing of any kind. . . .

He’s talking about chess, but the principle applies more generally.

What’s interesting to me is how many people, including scientists and even mathematicians (sorry, Chrissy), don’t think that way.

We’ve discussed various examples over the years where scientists write things in published papers that are obviously false, reporting results that could not possibly have been in their data, which we know either from simple consistency checks (i.e., the numbers don’t add up) or because the results report information that was never actually gathered in the study in question.
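
To make the “numbers don’t add up” idea concrete, here’s a minimal sketch of one such consistency check, in the spirit of the GRIM granularity test of Brown and Heathers (the function name, tolerance, and example values are my own illustrations, not taken from any particular paper): a mean of n integer-valued responses must equal some integer divided by n, so many reported means are simply unattainable.

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Could a mean reported to `decimals` places have come from
    n integer-valued responses? (A GRIM-style granularity check.)"""
    # A mean of n integers is (integer total) / n, so try the integer
    # totals closest to reported_mean * n and see whether any of them
    # rounds back to the reported value.
    nearest = round(reported_mean * n)
    for total in (nearest - 1, nearest, nearest + 1):
        if abs(round(total / n, decimals) - reported_mean) < 10 ** -(decimals + 3):
            return True
    return False

# A mean of 3.47 from n = 25 integer responses is impossible: the
# achievable means nearest to it are 86/25 = 3.44 and 87/25 = 3.48.
print(grim_consistent(3.47, 25))   # False
print(grim_consistent(3.47, 100))  # True (integer total = 347)
```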

How do people do this? How can they possibly think this is a good idea?

Here are the explanations that have been proffered for this behavior of publishing claims that are contradicted by, or have zero support from, their data:

1. Simple careerism, or what’s called “incentives”: Make big claims and you can get published in PNAS, get a prestigious job, fame, fortune, etc.

2. The distinction between truth and evidence: Researchers think their hypotheses are true, so they get sloppy on the evidence. To them, it doesn’t really matter if their data support their theory because they believe the theory in any case—and the theory is vague enough to support just about any pattern in data.

And, sure, that explains a lot, but it doesn’t explain some of the examples that Winter has given (for example, a chess book stating that a game occurred 10 years after one of the players had died; although, to be fair, that player was said to be the loser of said game). Or, in the realm of scientific research, the papers of Brian Wansink, which contained numbers that were not consistent with any possible data.

One might ask: Why get such details wrong? Why not either look up the date, or, if you don’t want to bother, why give a date at all?

This leads us to a third explanation for error:

3. If an author makes zillions of statements, and truth just doesn’t matter for any particular one of the statements, then you’ll get errors.

I think this happens a lot. All of us make errors, and most of us understand that errors are inevitable. But there seems to be a divide between two sorts of people: (a) Edward Winter and, I expect, most of the people who read this blog, who feel personally responsible for our errors and try to check as much as possible and correct what mistakes arise, and (b) Brian Wansink, David Brooks and, it seems, lots of other writers, who are more interested in the flow, and who don’t want to be slowed down by fact checking.

Winter argues that if you get the details wrong, or if you don’t care about the details, you can get the big things wrong too. And I’m inclined to agree. But maybe we’re wrong. Maybe it’s better to just plow on ahead, mistakes be damned, always on to the next project. I dunno.

P.S. Regarding Winter’s quote above, I bet it is possible to be a good pianist even if tone-deaf, if you can really bang it out and you have a good sense of rhythm. Just as you can be a good basketball player even if you’re really short. But it’s a handicap, that’s for sure.

13 thoughts on “The writer who confesses that he is ‘not good at attention to detail’ is like a pianist who admits to being tone deaf”

  1. “lots of other writers, who are more interested in the flow, and who don’t want to be slowed down by fact checking”

    Bluto: Was it over when the Germans bombed Pearl Harbor?
    Otter: Germans?
    Boon: Forget it, he’s rolling.

  2. If there were an incentive to publish negative results, I would hope that *some* of this behaviour would go away. I think that NHST really makes some researchers feel trapped if they don’t get the outcome they were hoping for.

    • Good point — assuming I understand correctly what you mean by “negative results.” (I take it to mean “results that don’t support the theory that had been proposed.”)

  3. In my opinion, this problem is the natural consequence of the consumerist ideology that has invaded scientific research.
    People are celebrated for having thousands of papers (we see departments pushing for this kind of value). They are rarely celebrated for an individual breakthrough.
    So what’s the message slapped in our faces every day? Quantity keeps you alive; quality does not. And unfortunately, the two are anticorrelated because, as you said, to do good science you have to spend time thinking and then testing your ideas, and checking that everything is done properly.
    This just goes against the current ideology in science. Everyone cries for better science, but then no one wants to pay people to produce a few great results (which may never come, either). No one wants to invest and risk capital in brains trying to answer questions. They want a safe investment: I put in my money, but I want something back asap. And it must be something I can use to make our department’s name resonate so I can attract more money.

    So it’s easy to spot the issue, but no one really wants to address it. They have the money and don’t want you to use it for the pleasure of thinking. They hire you to produce material scientific objects (we call them papers).

    • “People are celebrated for having thousands of papers (we see departments pushing for this kind of value). They are rarely celebrated for an individual breakthrough.”

      I think the extent to which this is true varies from field to field, and institution to institution (and maybe department to department).

      • Unfortunately, as you go toward the top-ranked places, these numbers become more important.
        Obviously it varies by field and place. But don’t tell me that the general trend doesn’t favor quantity over quality. And impact is measured with proxy metrics, not by real effect on research.

  4. I tend to disagree with Winter on a couple of points.

    First, as you note, there’s nothing wrong with a pianist admitting to being tone deaf! What a strange example, considering there are famous pianists who were literally deaf. But even aside from those famous cases, I’ve known a lot of extremely successful pianists who would freely admit to not having a great ear for pitch. Now, a fully tone-deaf person is unlikely to BE a successful pianist, of course, but if someone is good at performing music in a way that listeners enjoy, it doesn’t matter how good their ear is.

    But on his more central point as well — “Tentative theorizing may have a minor role once research paths have been exhausted” — I think in the history of human discoveries, most “breakthroughs” come from people who started with some extreme “tentative theorizing,” with the details worked out later (sometimes but not always prior to publication, and sometimes but not always by the same person).

    The problem, as I see it, is that most researchers, writers, academics, and their ilk are unrealistic about their ability to provide great ideas. Everyone thinks they have something brilliant to reveal, whether through explicit theorizing or shoddy research in support of their implicit theorizing. Thus the market is flooded with the work of would-be geniuses. For the vast majority of people, doing some rigorous but much more limited analysis would be far more beneficial to their field, but few are willing to accept that.

    And indeed, for some class of individuals, their chances of making a major contribution may be greater by producing a lot of half-verified ideas and getting lucky than by doing rigorous work with lower variance and fewer repetitions.

    • Great comment.

      Incidentally, music, including perception of tone and pitch, can be learned. As you suggested, a pianist doesn’t need that much sense for tone or pitch anyway; s/he just needs a well-tuned instrument. Theory provides a fine framework for understanding and writing music. All people who play music, though, need excellent timing and manual dexterity.

  5. “For the vast majority of people, doing some rigorous but much more limited analysis would be far more beneficial to their field, but few are willing to accept that.”

    In my field of origin (math), the general ethos of the field does seem to be rigorous and usually limited (and not earth-shaking) work. In particular, the distinction between “conjecture” (or “claim”) and “proof” is standard, and often inculcated by the upper division undergraduate level. For example, we have the phrase “hand-waving” for something that is lacking the detail needed to be considered rigorous.

    • I’m envious of mathematicians. Though it’s a field that quickly exposes one’s lack of genius, and I don’t think that’s what most people want — or at the very least it quickly filters people out. I know personally I was pretty good at it, but it was also clear that if I went into it I wouldn’t be great.

      Anyway, seems like a very different world from even hard sciences much less medium or soft ones. I’d still be interested in like a comparative profile of the “breakthrough” mathematicians historically. Were they extremely rigorous and cautious but were just so good at it that they made great boundary-pushing discoveries anyway? Or were they more eccentric, risk-taking, or “hand-waving” types? And were their greatest achievements the product of tons of incremental improvements, or were they giant leaps which had their foundations built up afterwards?

      • I don’t recall hearing the phrase “‘breakthrough’ mathematicians” except in connection with the “Breakthrough Prize in Mathematics” started in 2015. Looking at the web page on that (https://en.wikipedia.org/wiki/Breakthrough_Prize_in_Mathematics), I find a lot of words that I don’t usually associate with mathematics, like “revolutionary”, “breakthrough”, “spectacular”, “transformative”, “transformational”.

        My own experience as part of the mathematical community is that these words are not typical; there is more of the “shoulders of giants,” “shoulders of dwarfs” attitude — although there is often an attitude of awe at someone like Gauss who did so many things showing so much insight.

        In general, I’d say that mathematics is to a great extent more a collective endeavor than one of breakthroughs. I don’t mean that mathematicians typically work in “collectives,” but that it often takes the contributions of more than one mathematician to make progress. For example, there are some mathematicians who have great insight and make conjectures that later prove to be correct — but more often than not the proof is not by the proposer. It might be by another individual, or by a group of individuals working together, or by the serial efforts of several individuals who work mostly individually.

  6. Let’s try “tone-deaf oboist.”

    Quarter tone: two oboes playing in unison.

    Perfect pitch: When you toss the banjo into the dumpster and it lands right on the accordion.
