“The most discriminatory federal judges” update

Christian Smith writes:

Thanks for your commentary on the white paper about judges, which I wrote earlier this year with some colleagues [Nicholas Goldrosen, Maria-Veronica Ciocanel, Rebecca Santorella, Chad Topaz, and Shilad Sen]. I just wanted to let you know that we substantially altered that paper in light of reasonable feedback, replacing the original OSF entry with a much humbler one. This Twitter thread explains further. If you’re up for clarifying in your blog post that the quoted excerpt is no longer in the white paper, I would appreciate that a lot.

Here’s the new version, which has the following note on page 1:

A previous version of this work included estimates on individually identified judges. Thanks to helpful feedback, we no longer place enough credence in judge-specific estimates to make sufficiently confident statements on any individual judge. We encourage others not to rely upon results from earlier versions of this work.

As I wrote in my earlier post, I don't have anything to say about the substance of this work, but I'll again share my methodological comments, which are the sort of generic advice I've given many times before, involving workflow, or the trail of breadcrumbs:

What I want to see is graphs of the data and fitted model. For each judge, make a scatterplot with a dot for each defendant they sentenced. Y-axis is the length of sentence they gave, x-axis is the predicted length based on the regression model excluding judge-specific factors. Use four colors of dots for white, Black, Hispanic, and other defendants, and then also plot the fitted line. You can make each of these graphs pretty small and still see the details, which allows a single display showing lots of judges. Order the judges by decreasing estimated sentencing bias.
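Here's a minimal sketch of that kind of small-multiples display in Python with matplotlib. The data frame `sentences`, its column names, the `bias_estimate` series, and the simple per-judge least-squares line are all placeholders I made up for illustration; none of them come from the white paper or its replication files.

```python
# A minimal sketch of the suggested display, not the paper's actual analysis.
# It assumes a hypothetical data frame `sentences` with columns: judge,
# sentence_months (actual sentence), predicted_months (prediction from the
# regression excluding judge-specific factors), and race; plus a hypothetical
# series `bias_estimate` indexed by judge. All of these names are made up.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd


def plot_judges(sentences: pd.DataFrame, bias_estimate: pd.Series, ncols: int = 8):
    colors = {"white": "tab:blue", "black": "tab:orange",
              "hispanic": "tab:green", "other": "tab:gray"}
    # Order the panels by decreasing estimated sentencing bias.
    judges = bias_estimate.sort_values(ascending=False).index
    nrows = -(-len(judges) // ncols)  # ceiling division
    fig, axes = plt.subplots(nrows, ncols, figsize=(2 * ncols, 2 * nrows),
                             sharex=True, sharey=True, squeeze=False)
    for ax, judge in zip(axes.flat, judges):
        d = sentences[sentences["judge"] == judge]
        # One small panel per judge: one dot per defendant, colored by race.
        for race, color in colors.items():
            dr = d[d["race"] == race]
            ax.scatter(dr["predicted_months"], dr["sentence_months"],
                       s=5, color=color, alpha=0.6)
        # A per-judge least-squares line of actual on predicted, as a
        # stand-in for "the fitted line"; the paper's model may differ.
        if len(d) > 1:
            b1, b0 = np.polyfit(d["predicted_months"], d["sentence_months"], 1)
            xs = np.array([d["predicted_months"].min(), d["predicted_months"].max()])
            ax.plot(xs, b0 + b1 * xs, color="black", linewidth=0.8)
        ax.set_title(str(judge), fontsize=7)
    for ax in axes.flat[len(judges):]:
        ax.axis("off")  # hide unused panels
    fig.supxlabel("Predicted sentence (model without judge effects)")
    fig.supylabel("Actual sentence")
    return fig
```

The point of keeping the panels small and sharing the axes is that you can scan dozens of judges at once and see both the raw data and the fitted relationship, rather than staring at a ranked table of coefficients.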

2 thoughts on "“The most discriminatory federal judges” update"

  1. > Y-axis is the length of sentence they gave, x-axis is the predicted length based on the regression model excluding judge-specific factors.

    What’s the logic for prediction on the x-axis and actual measurement on the y-axis again? If I wanted to put some sort of uncertainty bar on the prediction, it’s now an x-axis thing, which seems mildly inconvenient.

    • Ben:

      At a practical level, you can graph the uncertainty bars horizontally. Regarding the question of why to graph outcome as a function of prediction rather than graphing prediction as a function of outcome, see Section 11.3 of Regression and Other Stories.
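      For what it's worth, here's a tiny matplotlib sketch of what that looks like; the arrays `pred`, `pred_se`, and `actual` are made-up numbers, and `pred_se` stands in for whatever uncertainty summary you have for the predictions.

      ```python
      # Horizontal uncertainty bars on the predictions (x-axis); made-up numbers.
      import matplotlib.pyplot as plt
      import numpy as np

      pred = np.array([24.0, 36.0, 60.0])    # predicted sentence length (months)
      pred_se = np.array([4.0, 6.0, 10.0])   # uncertainty in each prediction
      actual = np.array([30.0, 33.0, 72.0])  # actual sentence length (months)

      fig, ax = plt.subplots()
      ax.errorbar(pred, actual, xerr=pred_se, fmt="o", capsize=3)  # xerr draws the bars horizontally
      ax.set_xlabel("Predicted sentence (months)")
      ax.set_ylabel("Actual sentence (months)")
      plt.show()
      ```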
