Archive of posts filed under the Sociology category.

Basbøll’s Audenesque paragraph on science writing, followed by a resurrection of a 10-year-old debate on Gladwell

I pointed Thomas Basbøll to my recent post, “Science is science writing; science writing is science,” and he in turn pointed me to his post from a few years ago, “Scientific Writing and ‘Science Writing,’” which stirringly begins: For me, 2015 will be the year that I [Basbøll] finally lost all respect for “science writing”. […]

The rise and fall and rise of randomized controlled trials (RCTs) in international development

Gil Eyal sends along this fascinating paper coauthored with Luciana de Souza Leão, “The rise of randomized controlled trials (RCTs) in international development in historical perspective.” Here’s the story: Although the buzz around RCT evaluations dates from the 2000s, we show that what we are witnessing now is a second wave of RCTs, while a […]

How science and science communication really work: coronavirus edition

Now that the election’s over, we can return to our regular coronavirus coverage. Nothing new since last night, so I wanted to share a couple of posts from a few months ago that I think remain relevant: No, there is no “tension between getting it fast and getting it right”: On first hearing, this statement […]

“In the world of educational technology, the future actually is what it used to be”

Following up on this post from Audrey Watters, Mark Palko writes: I [Palko] have been arguing for a while that the broad outlines of our concept of the future were mostly established in the late 19th/early 20th Centuries and put in its current form in the Postwar Period. Here are a few more data points […]

Stop-and-frisk data

People sometimes ask us for the data from our article on stop-and-frisk policing, but for legal reasons these data cannot be shared. Other data are available, though. Sharad Goel writes: You might also check out stop-and-frisk data from Chicago and Seattle. And, if you’re interested in traffic stop data as well, see our Open Policing […]

As a forecaster, how important is it to “have a few elections under your belt”?

Kevin Lewis pointed me to this comment from Nate Silver on a recent post: Having a few elections under your belt helps a *lot*. No matter how much you test things in the lab, there are some things you’re going to learn only by seeing how your forecast reacts to real data in real time. […]

“Fake Facts in Covid-19 Science: Kentucky vs. Tennessee.”

I’m writing this on 24 Apr 2020. I’ve been posting coronavirus items immediately and pushing previously scheduled material to the end of the queue (currently Oct and Nov). But this one is already forgotten so I might as well put it in lag. When it appears, you can read it and put yourself in the […]

My proposal is to place criticism within the scientific, or social-scientific, enterprise, rather than thinking about it as something coming from outside, or as something that is tacked on at the end.

I happened to come across this discussion in the comments of another blog a few years ago and it seemed worth repeating. The background is that sociologist Fabio Rojas explained why he didn’t find it useful to teach “critical thinking”; instead he teaches whatever aspects of criticism are relevant for the subject (in his case, […]

My theory of why TV sports have become less popular

There’s been a lot of discussion recently about declining viewership for TV sports. Below I’ll link to a news article discussing various possible explanations, but first I want to share my theory, which is that we’re watching less sports because we’re talking about sports less, and we’re talking about sports less because we’re mixing with […]

Don’t Hate Undecided Voters

This post is by Clay Campaigne, not Andrew. (It says ‘posted by Phil’, and that’s technically true, but I’m just a conduit for Clay here).  This is copied from Clay’s blog, which may have comments of its own so you might want to read it there too. Politics has taken on particular vitriol in recent […]

She’s wary of the consensus-based transparency checklist, and here’s a paragraph we should’ve added to that zillion-authored paper

Megan Higgs writes: A large collection of authors describes a “consensus-based transparency checklist” in the Dec 2, 2019 Comment in Nature Human Behaviour. Hey—I’m one of those 80 authors! Let’s see what Higgs has to say: I [Higgs] have mixed emotions about it — the positive aspects are easy to see, but I also have […]

“Everybody wants to be Jared Diamond”

As the saying goes, “Everybody wants to be Jared Diamond, that’s the problem.” (See also here and here.) The funny thing is, this principle also applies to . . . Jared Diamond himself! See this review by Anand Giridharadas, sent to me by Mark Palko.

Response to a question about a reference in one of our papers

Tushar Sunkum writes: I like this particular study that you did [with Jeff Fagan and Alex Kiss] on racial profiling. However, I believe that you misrepresented one of the sources on the paper. You state, “For example, two surveys with nationwide probability samples, completed in 1999 and in 2002, showed that African-Americans were far more […]

Social science and the replication crisis (my talk this Thurs 8 Oct)

My talk at the WZB Berlin Social Science Center 3pm (Central European Time): Social science and the replication crisis The replication crisis is typically discussed in the context of particular silly claims, or in terms of the sociology of science, or with regard to controversies in statistical practice. Here we discuss the content of unreplicated […]

The view that the scientific process is “red tape,” just a bunch of hoops you need to jump through so you can move on with your life

Summary A while ago I hypothesized that many researchers “think they already know the truth, and they think of discussions of evidence, data quality, statistics, etc., as a sort of ‘red tape’ or distraction from the larger issues.” But now I’m thinking that it’s not just statistics but really the entire scientific process that they view […]

Uri Simonsohn’s Small Telescopes

I just happened to come across this paper from 2015 that makes an important point very clearly: It is generally very difficult to prove that something does not exist; it is considerably easier to show that a tool is inadequate for studying that something. With a small-telescopes approach, instead of arriving at the conclusion that […]
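The excerpt’s logic can be sketched numerically. On a hedged reading of the small-telescopes approach, a replication asks not “is the effect zero?” but “is the replication estimate significantly smaller than an effect the original study could plausibly have detected?” — conventionally, the effect size that would have given the original study 33% power. The sample sizes, the 33% benchmark, and the normal approximation below are illustrative assumptions, not numbers from Simonsohn’s paper:

```python
from statistics import NormalDist

def d_33(n_per_group, alpha=0.05, power=0.33):
    """Effect size (Cohen's d) at which a two-sample study with
    n_per_group per arm would have the stated power -- the 'small
    telescope' benchmark. Normal approximation to the t-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)   # negative when power < 0.5
    return (z_alpha + z_power) / (n_per_group / 2) ** 0.5

def telescope_test(d_rep, n_rep_per_group, n_orig_per_group):
    """One-sided test of whether the replication estimate d_rep is
    significantly SMALLER than the original study's d_33 benchmark."""
    benchmark = d_33(n_orig_per_group)
    se = (2 / n_rep_per_group) ** 0.5       # approx. standard error of d
    z = (d_rep - benchmark) / se
    return benchmark, NormalDist().cdf(z)   # small p => telescope too small

# A replication of d = 0.1 (n = 80/group) vs. an original with n = 20/group:
bench, p = telescope_test(d_rep=0.1, n_rep_per_group=80, n_orig_per_group=20)
```

Here a small p-value says the original design was too weak an instrument to have detected an effect this size, without having to prove the effect is exactly zero.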

It’s kinda like phrenology but worse. Not so good for the “Nature” brand name, huh? Measurement, baby, measurement.

Federico Mattiello writes: I thought you might find this thread interesting, it’s about a machine learning paper building a “trustworthiness score” from faces databases and historical (mainly British) portraits. It checks many bias boxes I believe, but my biggest complaint (I know it shouldn’t be) is the linear regression of basically spherical clouds of points: […]

“Postmortem of a replication drama in computer science”

Rik de Kort writes: This morning I stumbled across a very interesting blog post, dissecting some drama related to a non-replicating paper in computer science land. The question the paper tries to answer is whether some programming languages are more error prone than others. For a paper in computer science I would expect all their […]

His data came out in the opposite direction of his hypothesis. How to report this in the publication?

Fabio Martinenghi writes: I am a PhD candidate in Economics and I would love to have guidance from you on this issue of scientific communication. I did an empirical study on the effect of a policy. I had a hypothesis, which turned out to be wrong, in the sense that the expected signs of the […]

Taking the bus

Bert Gunter writes: This article on bus ridership is right up your alley [it’s a news article with interactive graphics and lots of social science content]. The problem is that they’re graphing the wrong statistic. Raw ridership is of course sensitive to total population. So what they should have been graphing is rates per person, not […]
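The normalization point can be made concrete. With made-up numbers (the cities and counts below are hypothetical, not from the article), raw ridership and rides per person can rank the same cities in opposite orders:

```python
# Hypothetical (annual bus rides, population) pairs -- not the article's data.
cities = {
    "City A": (12_000_000, 800_000),
    "City B": (9_000_000, 450_000),
}

# Raw counts put City A first ...
by_raw = max(cities, key=lambda c: cities[c][0])

# ... but rides per person put City B first.
per_person = {c: rides / pop for c, (rides, pop) in cities.items()}
by_rate = max(per_person, key=per_person.get)
```

City A has more total rides (12M vs. 9M) but fewer rides per resident (15 vs. 20), which is Gunter’s complaint in miniature.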