Skyler Johnson writes:
You should definitely weigh in on this…
Pro Publica created “Surgeon Scorecards” based on risk-adjusted surgical complication rates. They used hierarchical modeling via the lmer function (lme4 package) in R.
For the detailed methodology, click the “how we calculated complications” link; then, at the top of that next page, click the detailed-methodology link to download a publication-quality PDF.
At least three doctors have raised objections.
Curious as to your critique of Pro Publica’s methodology and results.
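For readers who haven't seen this kind of model, the partial pooling the email refers to can be sketched in a few lines. Pro Publica fit their models with lmer in R; the snippet below is a much-simplified empirical-Bayes analogue on simulated data (the beta-binomial setup and every number in it are illustrative, not Pro Publica's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated surgeons: true complication rates drawn from a common population.
n_surgeons = 200
true_rates = rng.beta(2, 38, size=n_surgeons)       # population mean ~5%
volumes = rng.integers(10, 500, size=n_surgeons)    # case counts vary widely
complications = rng.binomial(volumes, true_rates)
raw = complications / volumes                       # unpooled per-surgeon rates

# Empirical Bayes: fit a Beta(a, b) population prior by method of moments,
# then compute each surgeon's posterior-mean ("shrunk") rate.
m, v = raw.mean(), raw.var()
k = m * (1 - m) / v - 1                             # prior pseudo-count total
a, b = m * k, (1 - m) * k
shrunk = (a + complications) / (a + b + volumes)

# Each estimate is a weighted average of the surgeon's raw rate and the
# overall mean m: shrunk = (n/(n+k)) * raw + (k/(n+k)) * m, for n cases.
```

That last comment is the whole story behind "shrinkage": a surgeon with few cases contributes little information, so the estimate leans on the population, and low-volume surgeons are pulled hardest toward the overall mean.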
Next time this sort of thing is done, maybe they’ll use Stan. But that’s not really the point. The real point is that, yes, I probably should weigh in on this, but it would take a bit of work! This is not your run-of-the-mill “p less than .05” paper in PPNAS, it’s a serious project.
I quickly read through the online critiques and I saw some good points and some bad points. The bad points were some generic ranting against “shrinkage”; the blogger in question didn’t seem to realize that these issues arise in any prediction problem and represent inferential uncertainty that is the inevitable consequence of variation. Another blogger complained about wide uncertainty intervals but, again, that’s just life. The more important criticisms involved data quality, and that’s something I can’t really comment on, at least without reading the report in more detail.
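On the interval-width complaint in particular: at the case volumes a typical surgeon accumulates, wide intervals are arithmetic, not a modeling artifact. A back-of-the-envelope check (the counts here are hypothetical):

```python
import math

def wald_interval(complications, n, z=1.96):
    """Approximate 95% interval for a raw complication rate (Wald)."""
    p = complications / n
    se = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# A surgeon with 2 complications in 50 cases has a raw rate of 4%,
# but the interval runs from 0% up to nearly 10% -- consistent with
# anything from excellent to roughly twice a typical rate.
lo, hi = wald_interval(2, 50)
```

A hierarchical model's posterior intervals behave the same way in this respect: more cases, tighter intervals; there is no way to report a precise rate that the data cannot support.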
It’s too bad. Something dumb like himmicanes and hurricanes is easy to criticize, easy for me to post on. But an important topic like rating doctors, that would require a lot more work for me to say anything definitive.
I will say, though, that I like what Pro Publica is doing. No model is perfect, but I think this is the way to start: You fit a model, do the best you can, be open about your methods, then invite criticism. You can then take account of the criticisms, include more information, and do better.
So go for it, Pro Publica. Don’t stop now! Consider your published estimates as a first step in a process of continual quality improvement.