Joan Nix writes:
Your comments on this paper by Scott Carrell and James West would be most appreciated. I’m afraid the conclusions of this paper are too strong given the data set and other plausible explanations. But given where it is published, this paper is receiving and will continue to receive lots of attention. It will be used to draw deeper conclusions regarding effective teaching and experience.
Nix also links to this discussion by Jeff Ely.
I don’t completely follow Ely’s criticism, which seems to me to be too clever by half, but I agree with Nix that the findings in the research article don’t seem to fit together very well. For example, Carrell and West estimate that the effects of instructors on performance in the follow-on class are as large as the effects on the class they’re teaching. This seems hard to believe, and it seems central enough to their story that I don’t know what to think about everything else in the paper.
My other thought about teaching evaluations is from my personal experience. When I feel I’ve taught well (that is, in semesters when it seems that students have really learned something), I tend to get good evaluations. When I don’t think I’ve taught well, my evaluations aren’t so good. And, even when I think my course has gone wonderfully, my evaluations are usually far from perfect. This has been helpful information for me.
That said, I’d prefer to have objective measures of my teaching effectiveness. Perhaps surprisingly, statisticians aren’t so good about measurement and estimation when applied to their own teaching. (I think I’ve blogged on this on occasion.) The trouble is that measurement and evaluation take work! When we’re giving advice to scientists, we’re always yammering on about experimentation and measurement. But in our own professional lives, we pretty much throw all our statistical principles out the window.
P.S. What’s this paper doing in the Journal of Political Economy? It has little or nothing to do with politics or economics!
P.P.S. I continue to be stunned by the way in which tables of numbers are presented in social science research papers with no thought of communication: for example, tables with interval estimates such as “(.0159, .0408).” (What were all those digits for? And what do these numbers have to do with anything at all?) If the words, sentences, and paragraphs of an article were put together in such a stylized, unthinking way, the article would be completely unreadable. Formal structures with almost no connection to communication or content . . . it would be like writing the entire research article in iambic pentameter with an a,b,c,b rhyme scheme, or somesuch. I’m not trying to pick on Carrell and West here; this sort of presentation is nearly universal in social science journals.
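As an aside, trimming those extra digits is trivial to automate. Here’s a minimal sketch (the `round_sig` helper and the two-significant-digit convention are my own illustrative choices, not anything from the paper) that rounds an interval like the one above to a readable precision:

```python
import math

def round_sig(x, sig=2):
    """Round x to `sig` significant digits."""
    if x == 0:
        return 0.0
    return round(x, -int(math.floor(math.log10(abs(x)))) + (sig - 1))

# The interval quoted above, rounded to two significant digits:
lo, hi = 0.0159, 0.0408
print((round_sig(lo), round_sig(hi)))  # (0.016, 0.041)
```

Two significant digits is almost always enough for a reader to judge the size and uncertainty of an effect; anything beyond that is noise on the page.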