Visual diagnostics for discrete-data regressions

Jeff asked me what I thought of this recent AJPS article by Brian Greenhill, Michael Ward, and Audrey Sacks, “The Separation Plot: A New Visual Method for Evaluating the Fit of Binary Models.” It’s similar to a graph of observed vs. predicted values, but it uses color rather than the y-axis to display the observed values. It seems like it could be useful, and it could also be applied more generally to discrete-data regressions with more than two categories.
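
For readers who haven’t seen the paper: the separation plot sorts the observations by their fitted probabilities and draws each one as a thin vertical stripe colored by its observed outcome, with the fitted probabilities overlaid as a line. Here’s a minimal matplotlib sketch on simulated data (the variable names and fake data are placeholders for illustration, not the authors’ code):

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated stand-ins: y is the observed 0/1 outcome, p_hat the fitted
# probability from some binary model (both hypothetical).
rng = np.random.default_rng(0)
n = 200
p_true = rng.uniform(0.05, 0.95, n)
y = rng.binomial(1, p_true)
p_hat = np.clip(p_true + rng.normal(0, 0.1, n), 0.01, 0.99)

order = np.argsort(p_hat)  # sort observations by fitted probability
fig, ax = plt.subplots(figsize=(8, 1.5))
# one full-height stripe per observation, colored by the observed outcome
ax.bar(np.arange(n), np.ones(n), width=1.0,
       color=np.where(y[order] == 1, "firebrick", "wheat"))
ax.plot(np.arange(n), p_hat[order], color="black")  # overlaid fitted probabilities
ax.set_ylim(0, 1)
ax.set_xticks([])
ax.set_yticks([])
plt.show()
```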

When it comes to checking the model fit, I recommend binned residual plots, as discussed in this 2000 article with Yuri Goegebeur, Francis Tuerlinckx, and Iven Van Mechelen.
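
For concreteness, here’s a minimal sketch of a binned residual plot, assuming a vector y of 0/1 outcomes and a vector p_hat of fitted probabilities (simulated here as placeholders): sort the observations by p_hat, group them into bins, and plot each bin’s average residual against its average fitted value, with rough ±2 standard-error bounds.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated stand-ins for y (0/1 outcomes) and p_hat (fitted probabilities);
# the data are well calibrated by construction, so points should fall
# mostly inside the bounds.
rng = np.random.default_rng(1)
n = 1000
p_hat = rng.uniform(0.05, 0.95, n)
y = rng.binomial(1, p_hat)

n_bins = 20
order = np.argsort(p_hat)
bins = np.array_split(order, n_bins)  # equal-size bins of sorted fitted values
bin_p = np.array([p_hat[b].mean() for b in bins])
bin_resid = np.array([(y[b] - p_hat[b]).mean() for b in bins])
# rough +/- 2 standard-error bounds for each bin's average residual
se = np.array([np.sqrt((p_hat[b] * (1 - p_hat[b])).mean() / len(b)) for b in bins])

plt.scatter(bin_p, bin_resid)
plt.plot(bin_p, 2 * se, "--", color="gray")
plt.plot(bin_p, -2 * se, "--", color="gray")
plt.axhline(0, color="black", linewidth=0.5)
plt.xlabel("average fitted probability in bin")
plt.ylabel("average residual (y - p_hat)")
plt.show()
```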

6 thoughts on “Visual diagnostics for discrete-data regressions”

  1. This gives you a visualization of the 0/1 loss of a classifier based on a model. Why not plot the residual with graded colors in a heat map? You could further sort the 0/1 true outcomes y and, within the two bins, sort by the predicted y_hat. But that’d lead to different orderings for different estimates of y_hat.

    You see this kind of plot in natural language processing, where it is common to plot the predicted probability of each word in a sequence on a negative log scale (poorer predictions are thus taller). It can give you a feeling for the variation in predictions beyond what you get with summary stats like average log loss per word (or character). You could apply the same kind of plot here, plotting -log(z), where z is the probability assigned by the model, that is, y_hat if y is 1 and (1 - y_hat) if y is 0. (A minimal sketch follows.)
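
    Something like the following, assuming hypothetical vectors y and p_hat from a fitted binary model (the names and fake data are placeholders):

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical stand-ins for observed 0/1 outcomes and a model's
    # fitted probabilities.
    rng = np.random.default_rng(2)
    n = 100
    p_hat = rng.uniform(0.05, 0.95, n)
    y = rng.binomial(1, p_hat)

    z = np.where(y == 1, p_hat, 1 - p_hat)  # probability assigned to what happened
    loss = -np.log(z)                       # taller bars = poorer predictions

    plt.bar(np.arange(n), loss, width=1.0)
    plt.xlabel("observation")
    plt.ylabel("-log(z)")
    plt.show()
    ```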

  2. Andrew Pierce and I also have a diagnostic for checking model fit that’s related to the binned residual idea you offered. Instead of grouping observations into discrete bins, it nonparametrically determines an empirical comparison for the fitted p-hat using nearby observations with similar p-hats. The bonus is that we have a way to assess whether deviations from a “perfect” fit (p-hat matching the true p exactly) are ascribable to sampling variation or to something else (e.g., misspecification). (A loose sketch of the general idea appears at the end of this comment.)

    http://userwww.service.emory.edu/~jesarey/modelfit.pdf

    If you have any comments or suggestions, I’d be greatly indebted to you!
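
    As a rough sketch of the general idea only (this is not Esarey and Pierce’s actual procedure; see the linked paper for that), one could smooth the 0/1 outcomes against the fitted p-hats and compare the smooth to the 45-degree line that a perfectly calibrated model implies:

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from statsmodels.nonparametric.smoothers_lowess import lowess

    # Hypothetical stand-ins for fitted probabilities and 0/1 outcomes.
    rng = np.random.default_rng(3)
    n = 500
    p_hat = rng.uniform(0.05, 0.95, n)
    y = rng.binomial(1, p_hat)

    smooth = lowess(y, p_hat, frac=0.3)  # (p_hat, smoothed frequency) pairs, sorted
    plt.plot(smooth[:, 0], smooth[:, 1], label="lowess of y on p_hat")
    plt.plot([0, 1], [0, 1], "--", color="gray", label="perfect calibration")
    plt.xlabel("fitted probability p_hat")
    plt.ylabel("smoothed empirical frequency")
    plt.legend()
    plt.show()
    ```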

  3. Glancing at the 2000 paper you referenced, I note that residuals related to the latent variables did poorly. Would you care to put this in the larger context of the question of whether latent variables are actually a representation of reality or just a convenient mathematical abstraction for the convergence of EM-type models?
