Howard Wainer writes:

On September 22, 2008, the New York Times carried the first of three articles about a report, commissioned by the National Association for College Admission Counseling, that was critical of the current college admission exams, the SAT and the ACT. The commission was chaired by William R. Fitzsimmons, the dean of admissions and financial aid at Harvard.

The report was reasonably wide-ranging and drew many conclusions while offering alternatives. Although well-meaning, many of the suggestions only make sense if you say them fast.

Among their conclusions was that schools should consider making their admissions “SAT optional,” that is, allowing applicants to submit their SAT/ACT scores if they wish, but not requiring them. The commission cites the success that pioneering schools with this policy have had in the past as proof of concept.

Howard continues:

Has the admissions process been hampered in schools that have instituted an SAT optional policy?

The first reasonably competitive school to institute such a policy was Bowdoin College, in 1969. Bowdoin is a small, highly competitive liberal arts college in Brunswick, Maine. A shade under 400 students a year elect to matriculate at Bowdoin, and roughly a quarter of them choose not to submit their SAT scores. . . .

As it turns out, the SAT scores for the students who did not submit them would have accurately predicted their lower performance at Bowdoin. In fact, the correlation between grades and SAT scores was 12% higher for those who didn’t submit them than for those who did.

So not having this information does not improve the academic performance of Bowdoin’s entering class — on the contrary it diminishes it. Why would a school opt for such a policy? Why is less information preferred to more? . . .

We see that if all of the students in Bowdoin’s entering class had their SAT scores included, the average SAT at Bowdoin would sink from 1323 to 1288, and instead of being second among these six schools they would have been tied for next to last. Since mean SAT scores are a key component in school rankings, a school can game those rankings by allowing its lowest-scoring students to be excluded from the average. I believe that Bowdoin’s adoption of this policy pre-dates US News and World Report’s rankings, so that was unlikely to have been their motivation, but I cannot say the same for schools that have chosen such a policy more recently.
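The arithmetic of that gaming incentive is easy to sketch. Here is a minimal simulation, with invented numbers (not Bowdoin's actual data), of how letting the lowest-scoring quarter of a class withhold their scores inflates the published average:

```python
import random

random.seed(1)

# Hypothetical illustration: simulate an entering class's SAT scores,
# then let the lowest-scoring quarter withhold them, as an SAT-optional
# policy permits. All numbers here are made up.
scores = sorted(random.gauss(1290, 100) for _ in range(400))
n_withheld = len(scores) // 4        # bottom quarter opts out
submitted = scores[n_withheld:]      # only these enter the published average

full_mean = sum(scores) / len(scores)
reported_mean = sum(submitted) / len(submitted)
print(round(full_mean), round(reported_mean))  # reported mean is higher
```

The gap between the two averages is the ranking boost a school gets for free, without admitting a single stronger student.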

Interesting. Howard has some data showing that, unsurprisingly, the students who don’t supply their SAT scores are mostly (but, interestingly, not always) those scoring lower:

(I don’t find the y-axis on this graph very helpful, but that’s another story.)

So what’s the deal? Who are those kids with 1450 SAT’s who aren’t submitting their scores?

**This reminds me . . .**

Sound psychometric (i.e., statistical) principles tell us that, if an applicant has taken a test multiple times, we should use his or her average score. But for our PhD admissions, we generally take the higher score. I understand our psychological reasons for doing this (we want to think the best of a person) but, statistically, it seems like a bad idea.
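The statistical point can be checked with a quick simulation. Assuming a simple true-score-plus-noise model (all numbers made up), the maximum of repeated attempts is biased upward, while the average is essentially unbiased:

```python
import random

random.seed(0)

# Toy model: observed score = true ability + noise, two attempts each.
# Compare the bias of taking the max vs. the mean of the attempts.
n_applicants = 100_000
bias_of_max = 0.0
bias_of_mean = 0.0
for _ in range(n_applicants):
    ability = random.gauss(600, 80)   # true score on a hypothetical scale
    attempts = [ability + random.gauss(0, 40) for _ in range(2)]
    bias_of_max += max(attempts) - ability
    bias_of_mean += sum(attempts) / 2 - ability

print(bias_of_max / n_applicants)   # noticeably positive
print(bias_of_mean / n_applicants)  # near zero
```

With two attempts and noise of standard deviation 40, the max overestimates true ability by roughly 40/√π points on average, which is why admissions committees that take the higher score are systematically flattering their applicants.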

The graph looks too regular to be real.

If, for a moment, we assume that what's graphed is not the actual data but normal curves based on mean and standard deviation, we need not assume there are ANY students with 1450 who aren't submitting. We could instead assume the person who did the graph is ignoring a skew.

If that's the case, send this over to "Junk Charts".

I'll ask this question over at Howard's site. Thanks for the link.

Yah, good point.

Actually, *any* plot that uses Comic Sans should go immediately in the Junk Chart bin…

Howard is a great advocate of graphing and has several highly recommended books on the topic. The chart does carry off the message well, and the larger statistical point about self-selection in observational studies is well worth noting.

I'm also perplexed by the vertical axis, and I'd like to see the right side of the distribution. I think the cutoff at less than 1600 (the max) dramatized the point Andrew made about high-scorers. As for the empirical distribution, the data may be too sparse for that.

I posted my comment above on Howard's site and got the following response from him:

"Yes these curves are normal approximations – done to ease viewing. It is not a bad approximation. The actual dist'ns of SAT scores for non SAT-submitters have a thinner upper tail (as you saw) and a lump of a half dozen or so at around 950. In the full version of this I include the actual dist'ns so you can see that the normal approximation is OK for the purposes at hand. I can send you the full report if you are interested — send me your name and email address."
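To see how a normal approximation can imply nonexistent high scorers, here is a sketch with an invented skewed sample (a capped upper tail plus a small lump near 950, loosely echoing Howard's description): the fitted normal still puts some mass above 1450 even though no one in the sample scores anywhere near that high.

```python
import math
import random

random.seed(3)

# Invented sample with a thin (here, capped) upper tail and a lump near 950.
sample = [min(random.gauss(1150, 80), 1380) for _ in range(300)]
sample += [random.gauss(950, 20) for _ in range(6)]

# Fit a normal by mean and standard deviation, as the graph apparently did.
mu = sum(sample) / len(sample)
sd = (sum((s - mu) ** 2 for s in sample) / len(sample)) ** 0.5

# Tail probability above 1450 under the fitted normal curve.
z = (1450 - mu) / sd
tail = 0.5 * math.erfc(z / math.sqrt(2))
print(max(sample), tail)  # no sample score exceeds 1380, yet the fitted tail is positive
```

So the apparent non-submitters at 1450 can be pure artifacts of the smoothing, exactly as the exchange above suggests.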

So, we need not assume there are actual individuals who scored 1450 but are so high-minded that they don't report these scores.

Kaiser, I cheerfully concede that if I knew half of what Howard Wainer knows about graphing I'd be a lot better at it. I like the idea of showing the actual frequency distribution, then showing the graph above with the (easier to understand) smooth curves. I'm guessing that's what occurs in the full report.

But it's a blog post extracting from a full report, and things that may be perfectly clear in the report can get muddled a bit.

This ignores unobserved heterogeneity. Suppose there are some "brilliant" types out there (not just very smart, but brilliant, a very rare attribute) who are nonetheless bad at SAT-type tests (they find them really boring).

Schools seeking these candidates must be willing to forgo SAT requirements, but that in turn opens the door to a whole bunch of mediocre candidates.

Is this a lottery worth buying? Apparently.

The issue is the variance of this pool, the range, and the fatness of the tails.
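That trade-off can be put in expected-value terms. Here is a toy calculation of the lottery framing, with every number invented (the brilliant-type frequency and the relative payoffs are pure assumptions):

```python
import random

random.seed(2)

# Toy model: a non-submitter pool is mostly mediocre, with a rare
# "brilliant" type. Whether the lottery is worth buying depends on the
# brilliant share and how much a brilliant admit is worth relative to
# a mediocre one. All numbers are hypothetical.
p_brilliant = 0.02
value_brilliant, value_mediocre = 50.0, 1.0

draws = [random.random() < p_brilliant for _ in range(100_000)]
payoffs = [value_brilliant if b else value_mediocre for b in draws]
expected_value = sum(payoffs) / len(payoffs)
print(expected_value)  # close to 0.02 * 50 + 0.98 * 1 = 1.98
```

Under these made-up parameters the expected payoff per admit is nearly double that of a sure-thing mediocre candidate, so the lottery is worth buying; shrink `p_brilliant` or `value_brilliant` and it quickly stops being worthwhile, which is the variance-and-tails point.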