18 thoughts on "Why Some Important Findings Remain Uncited"

  1. The first post is quite interesting, but I don’t understand the conclusion. The two potential reasons offered for the lack of citations are (1) that the results run counter to researchers’ priors, and (2) that people are simply unaware of the research. Isn’t (2) the very thing the paper is supposed to explain? We want insight into why people might not be aware of the research. If the answer is (1), that is an interesting finding. If it is (2), for some other reason, then what is that reason? The paper appears to conclude by restating the original problem.

    • Dale,

      Small clarification: The last line of your comments suggests you think there is a paper. There is no paper. It is just a blog.

      Re: theories. There are two high-level theories that I note, and you could drill down into each. For instance, we could theorize about why people didn’t see the result. I hint at one of the arguments, which is that papers become known for one result. (You could argue that papers are also framed around one key result, and that authors are generally reluctant to highlight “controversial” results.) That one result is how papers get cited and discussed, and that, to my mind, is a loss.

      • Thanks for the clarification – I missed that, since the paragraph I was reading came right after directly quoted sections of the paper. In the words of Emily Litella (not sure of the spelling and you need to be old to know the reference), “never mind.”

  2. > But a more accurate description of the paper is that the paper is best known for describing partisan prejudice but has powerful evidence on the lack of racial discrimination among white Americans–in fact, there is reasonable evidence of positive discrimination in one study.

    Hmmm.

    Seems to me that people will most likely cite a paper when its main thesis, title, and topic of discussion are relevant to what they are writing about.

    I’d guess that many citations go something like: (1) I want to make a statement in my paper, (2) I want a citation to back it up, and (3) I go looking for one.

    The title, thesis, and discussion of the paper in question are all directly about group polarization and party lines. It seems to me that people citing that paper would naturally be focused on that topic. Why would you expect much citation of the paper on the topic of racial bias, when it was only secondarily about quantifying levels of racial bias (or the lack thereof)?

    So this…

    > This likely leaves us with just two explanations: a) researchers hesitate to cite results that run counter to their priors or their results, b) people are simply unaware of these results.

    Seems to me to leave out the most likely explanation.

    Not to say that the “explanations” offered aren’t in play to some extent. But to say that they are the only possible explanations suggests the author has some strong and inflexible priors.

    • I have a question related to your point that researchers often focus on headlines. Does your point imply that researchers don’t always read the papers that they cite? And if that is the norm, is that acceptable or something that’s frowned upon? I would think that to cite a paper one should read and evaluate its utility first. And is that also the reason why bad papers or failed replications continue to get cited?

      • Anon –

        Maybe I should walk that comment back a bit.

        I wouldn’t know how to characterize the norm.

        I think that someone citing that paper would likely be someone writing about or studying partisan antipathy. That’s the main focus of the paper, so someone who has read it or is familiar with it is more likely to be interested in that topic, and someone writing about that topic is more likely to be familiar with or interested in the paper.

        So it seems only logical to me that that aspect of the paper would generate many more citations than the other parts.

        I would also imagine that it’s not unheard of for people to be writing about a topic with a viewpoint they’re trying to support, and then to go out in search of evidence in the literature for that view. In such a search, material directly on the topic would be more likely to get cited than material only indirectly focused on it. Sure, I’d imagine some amount of that search is influenced by the titles of papers or the primary focus of the abstract. So someone writing about racial bias would be more likely to cite articles that are directly on that topic.

        I think that kind of pattern should be counted among the likely pathways to citation.

      • In general it’s common for people not to read the papers they cite, even experts citing their adversaries’ work. Sometimes people just look at the figures; sometimes they think they know the paper because they saw the talk; sometimes they just cite a paper because everyone else cites it. I’ve found citations in which the cited paper claims exactly the opposite of what the citing author claims. It runs the gamut.

  3. Two of the best things I ever did, at the time – in the late 1970s – never got published.

    First, I showed that melanoma patients at an East Coast medical center were the same as melanoma patients at a West Coast medical center – biology dominated, in other words. This was unexpected by my MD colleagues, as each coast said: “Our patients are different.” I was just an ABD computer science lecturer, albeit in the #1 computer science department in the world, so there was no way I could get this published unilaterally, and said colleagues didn’t regard it as worth publishing. My opinion, then and today, is that it was an important scientific result.

    Second, I developed a matched-pair survival analysis for patients who did and did not receive a problematic intervention. Though the analysis was very tricky, the result was used in the clinic going forward to help patients understand their prospects. So here I made something useful, but, again, there was no interest in publishing it.
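
    (A minimal sketch of the idea with today’s off-the-shelf tools – made-up data and hypothetical column names, not my original 1970s method: pair each treated patient with a similar untreated patient, then stratify a Cox model on the pair so the effect is estimated within pairs only.)

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    rows = []
    for pair in range(100):
        frailty = rng.gamma(2.0, 1.0)  # shared within-pair risk factor
        for treated in (1, 0):
            # Toy assumption: the intervention raises the hazard 1.8x.
            hazard = 0.05 * frailty * (1.8 if treated else 1.0)
            t = rng.exponential(1.0 / hazard)  # time to event
            c = rng.uniform(5.0, 60.0)         # administrative censoring
            rows.append({"pair_id": pair, "treated": treated,
                         "months": min(t, c), "event": int(t <= c)})
    df = pd.DataFrame(rows)

    cph = CoxPHFitter()
    # strata=["pair_id"] gives every pair its own baseline hazard, so
    # the treatment effect is estimated from within-pair contrasts only.
    cph.fit(df, duration_col="months", event_col="event", strata=["pair_id"])
    cph.print_summary()  # hazard ratio for "treated" vs. matched control
    ```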

    In hindsight, although I had barely a clue at the time, these were profound experiences on my way to becoming, eventually, a data scientist.

    On the one hand, it was a privilege to work with the caregivers – saints all, and deeply sophisticated – but on the other hand, the non-publication rankles me to this day …

  4. Ugh, the browser ate my first, nicely written-up reply; here’s the abstract:

    “Unsighted” discusses “Fear and Loathing”. FaL collects 4 studies; the first 3 have a racial component.

    Study 1 (Figure 4) says “African Americans showed a preference for African Americans, whereas whites displayed a somewhat stronger ingroup preference”.

    Study 2 contradicts this (Table 3, Figure 7): “Ingroup selection on the basis of race was confined to African Americans (73.1% selecting the African American), with European Americans showing a small preference for the African American candidate (55.8% selecting the African American).”

    In Study 3, “the effects of racial similarity proved negligible and not significant”.

    Each of the three studies has a different result! If you wanted to cite this paper to support a particular point, you’d need either to a) discuss these contradictions, or b) omit at least one result to make the contradiction less apparent.
    “Unsighted” clearly went for option b), ignoring Study 1.

    Personally, my guess is that the Study 2 result is skewed by a confounding factor. As Table 2 shows, racial affiliation is signaled by a given “extracurricular activity”: either “President of the Future Investment Banker Club” (EA) or “President of the African American Student Association” (AA). If the participants are biased against investment bankers, that alone would skew the result toward selecting the AA candidate.
    Therefore, I think the racial part of the Study 2 result doesn’t hold up to scrutiny; the toy simulation below makes the concern concrete.
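
    Here every simulated rater has zero racial preference (my own illustrative numbers, not the study’s), yet a dislike of investment bankers alone produces an apparent preference for the AA candidate:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000

    # Every simulated rater has ZERO racial preference...
    racial_pref = np.zeros(n)
    # ...but 40% carry a (hypothetical) one-point penalty against
    # anyone presented as an investment banker.
    banker_penalty = rng.choice([0.0, 1.0], size=n, p=[0.6, 0.4])

    # Utility difference: AA candidate (student-association president)
    # minus EA candidate (investment-banker-club president), plus noise.
    utility_diff = racial_pref + banker_penalty + rng.logistic(0.0, 1.0, n)
    print(f"share selecting the AA candidate: {(utility_diff > 0).mean():.1%}")
    # Roughly 59% select the AA candidate – an apparent racial
    # 'preference' produced entirely by the activity confound.
    ```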

    Because of these difficulties, I would not want to cite this paper if I were writing about racial bias.
    Contrary to what “Unsighted” asserts, I don’t believe it contains “important findings” for that field; it would only be helpful if I were looking to cherry-pick misleading data to propagandize the notion that ‘African-Americans get a free ride’ or similar.

    P.S.: using the Barabasi metric, a paper that doesn’t get cited essentially costs nothing, right? ;-)

      • hey y’all,

        Re. Fig. 4, I write: “I exclude the IAT results, weaker than Banaji’s results, which show Cohen’s d ~ .22, because they don’t speak directly to discrimination.”

        “Study 2 … (Table 3, Figure 7): “Ingroup selection on the basis of race was confined to African Americans (73.1% selecting the African American), with European Americans showing a small preference for the African American candidate (55.8% selecting the African American).”

        In Study 3, “the effects of racial similarity proved negligible and not significant”.”

        I summarized the two as follows: “But a more accurate description of the paper is that the paper is best known for describing partisan prejudice but has powerful evidence on the lack of racial discrimination among white Americans–in fact, there is reasonable evidence of positive discrimination in **one** study.”

        Happy to learn where you think I may be missing something.

  5. The main point of the “Gaming Measurement” article is stated in the introduction:

    > Self-reports are unsatisfactory. Like talk, they are cheap and thus biased and noisy. Implicit measures don’t even pass the basic hurdle of measurement—reliability. Against this grim background, economic games as measures of prejudice seem promising—they are realistic and capture costly behavior.

    This obviously looks great to economists: we can put a price on prejudice with this approach!

    The drawback is that game strategy and other factors can easily distort the results. I’ve mentioned the “investment banker” skew in my previous comment; “Gaming Measurement” itself explains how the $0 strategy in the dictator game has to be excluded to arrive at a result that is consistent with the other game. The way people choose to play the game confounds things! How do we detect that reliably?
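
    As a toy illustration of that $0 problem (my numbers, not the paper’s): suppose some dictators play a pure “keep everything” strategy regardless of whom they face. The raw group gap is then diluted, and only excluding the $0 plays recovers the underlying difference:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 5_000

    # Latent generosity (dollars out of 10), slightly higher toward the
    # ingroup – the "true" group difference we would like to measure.
    ingroup = rng.normal(5.0, 1.5, n).clip(0.0, 10.0)
    outgroup = rng.normal(4.0, 1.5, n).clip(0.0, 10.0)

    # 30% of dictators play a pure "keep everything" strategy: $0 to
    # everyone, regardless of group. A game artifact, not prejudice.
    keeper = rng.random(n) < 0.30
    ingroup[keeper] = 0.0
    outgroup[keeper] = 0.0

    print(f"raw gap:           {ingroup.mean() - outgroup.mean():.2f}")
    print(f"gap excluding $0s: {ingroup[~keeper].mean() - outgroup[~keeper].mean():.2f}")
    # The $0 strategy shrinks the measured gap by ~30%; dropping those
    # plays restores the underlying ~$1.00 difference.
    ```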

    I’m skeptical of the claim that economic games are “realistic” — they’re about as realistic as well-defined closed hypothetical philosophical problems. Observational studies at least deal with reality: if you find a confounding factor that leads to bias, you have learned something about the real world; if you find a confounding factor in your economic game, all you’ve learned is that your game sucks at its job.

    • The passage you quote is not the main point of the “Gaming Measurement” article. I recommend reading the full article before coming to a judgment.

      Your point that economic games may not be that realistic (and some of this depends on how they are done) seems about right to me.

  6. I found these posts very interesting because I’ve been working in this area for some time. Regarding the point about the one-shot nature of the games, my coauthor and I have a finding in our 2013 paper (online in 2011) showing that there is no partisan discrimination in trust reciprocity in the US. We argue this occurs because subjects had a behavior (the amount of trust extended) to use as information, and not just an identity (partisanship). This fits with the idea that once a relationship based on behavior exists, discrimination on an attribute like partisanship is unlikely to happen (point 1 in the second blog post). We disagree that this means one-shot games are irrelevant: there are often initial interactions, or potential interactions, that people engage in knowing the other person’s partisanship (or race, gender, class, etc.); campaign season is full of yard signs, bumper stickers, and the like. And related to blog post 1, I’m not sure this finding has ever been cited.

    https://link.springer.com/article/10.1007%2Fs11109-011-9181-x

    As for IW’s finding of a lack of racial bias, we published a paper in BJPS at the same time as theirs, and it has similar results regarding a lack of racial discrimination in the US and South Africa. Additionally, we found no class bias in El Salvador (and there is a further SA-specific paper looking at the links between PID and race). Again, the racial/class findings in that paper are much less cited than the partisanship finding.

    doi:10.1017/S0007123415000526

    Finally, on the general question of the value of games as a measurement tool, Blum et al. have a new piece in Political Analysis that examines different approaches to measuring group bias. It’s a careful analysis that weighs games against other tools, and it is generally skeptical of games.

    http://www.doi.org/10.1017/pan.2020.37

    Additionally, my coauthors and I did a meta-analysis of anonymous trust games vs. standard GSS-type interpersonal trust questions. Overall, the differences between the measures are much smaller once you account for measurement noise (in both approaches). The results point to the cost-benefit advantage of using simple survey questions over complex games to measure social preferences. (A sketch of the noise correction follows the link below.)

    https://www.oecd-ilibrary.org/economics/measures-of-interpersonal-trust_333c8ed0-en
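
    For readers who want the mechanics, the usual way to “account for measurement noise” when comparing two measures is Spearman’s correction for attenuation: divide the observed correlation by the geometric mean of the two reliabilities. A sketch with made-up numbers:

    ```python
    # Spearman's correction for attenuation: the correlation between two
    # TRUE scores equals the observed correlation divided by the square
    # root of the product of the two measures' reliabilities.
    def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
        return r_observed / (rel_x * rel_y) ** 0.5

    # Made-up numbers: a modest observed correlation between a trust game
    # and a GSS-style survey item looks much stronger once both measures'
    # (low) reliabilities are taken into account.
    print(disattenuate(r_observed=0.20, rel_x=0.40, rel_y=0.55))  # ~0.43
    ```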

  7. The last time I saw an analysis, the argument was that people tend to cite what other people have cited (e.g., peer reviewers groping for something to say often try “why don’t you cite $the_only_other_article_I_remember_on_this_topic”). So to get a study read, it has to attract the attention of an influential researcher or be cited in an influential paper. The analysis looked at well-cited articles that started out obscure and found that most of them were cited in a famous paper just before their citation counts started to rise. This is one reason researchers often publish more or less the same study multiple times: if how much attention any one version gets is a roll of the dice, republishing lets them roll the dice again.
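
    That “cite what others cite” dynamic is essentially preferential attachment (the Barabasi mechanism mentioned above). A toy simulation with made-up parameters shows how it concentrates citations on a few early winners and leaves many papers uncited:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_papers, cites_per_paper = 5_000, 5
    citations = np.zeros(n_papers)

    # Papers arrive one at a time; each cites 5 earlier papers chosen
    # with probability proportional to (citations so far + 1), so
    # already-cited papers attract still more citations.
    for new_paper in range(1, n_papers):
        weights = citations[:new_paper] + 1.0
        k = min(cites_per_paper, new_paper)
        cited = rng.choice(new_paper, size=k, replace=False,
                           p=weights / weights.sum())
        citations[cited] += 1

    top = np.sort(citations)[-n_papers // 100:]  # top 1% of papers
    print(f"top 1% of papers hold {top.sum() / citations.sum():.0%} of citations")
    print(f"{(citations == 0).mean():.0%} of papers are never cited")
    ```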

    • Sean:

      When I was a young professor preparing my promotion review, one of the senior faculty asked me why, if a certain result of mine was so important, I’d published only one paper on it. I told him this was because I got it right the first time.

      This is a great story, but as the years went by, I realized I hadn’t quite got it right the first time, so indeed I’ve published a couple more papers on the topic, as the method has evolved.
