Schools’ report cards anger NYC parents

Kenny passed on this link, which is related to a project that Jennifer and I are involved in, on comparing New York City public schools:

Thanks to heavy parent involvement and high test scores, Public School 321 in Park Slope, a yuppie neighborhood in Brooklyn, is considered a gem of New York City’s public school system. In the eyes of New York’s Department of Education, however, P.S. 321 deserved just a B in the city’s first-ever school report cards, which are based largely on how students score on standardized tests. Such accountability efforts — widespread since the advent of the federal No Child Left Behind Act — have raised the hackles of parents and educators across the country. . . .

James Liebman, chief accountability officer for New York City schools, devised the grading system for the city’s 1.1 million-pupil school system. Liebman said standardized tests are a good measure of whether students have learned what they should know. “If children can’t read and they can’t do math, then the educational system and their school have failed them,” he said. . . . Liebman pointed to a Quinnipiac University poll in which voters said the grades were fair by a margin of 61 to 27 percent. “It’s a system to provide information to parents to make their judgments,” he said.

I’ve talked with Jim about the school evaluations but I don’t know exactly how they finally decided to do it. One of the challenges in doing this sort of rating is that the evidence seems to show that teachers, rather than schools, have the biggest effects on test scores. To a first approximation, the effect of the school seems to be pretty much the average of its teacher effects.

Regarding criticisms of the evaluations: one way the evaluations themselves can be evaluated is to apply them retroactively and see how well they predict future performance. That is, estimate the answer to the following question: if you were to send your kid to a highly-graded rather than a poorly-graded school, how much different would you expect his or her test scores to be in a year, or two years, or whatever?
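As a rough sketch of that retrospective check (hypothetical data and an assumed noise model, not the Department's actual method or numbers), one could simulate letter grades that carry some signal about underlying school quality and then ask how well this year's grades predict next year's score gains:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 200 schools, each with an unobserved "true" quality.
n_schools = 200
true_quality = rng.normal(0, 1, n_schools)

# The grade is a noisy measurement of quality (critics claim mostly noise;
# here signal and noise get equal variance, an arbitrary assumption).
grade_score = true_quality + rng.normal(0, 1, n_schools)

# Next year's observed test-score gain also reflects quality plus noise.
future_gain = true_quality + rng.normal(0, 1, n_schools)

# The predictive check: how well do current grades predict future gains?
r = np.corrcoef(grade_score, future_gain)[0, 1]
print(f"correlation between grade and next-year gain: {r:.2f}")
```

If the grades were pure noise this correlation would hover near zero; estimating the analogous quantity on the city's real data is exactly what the retrospective check would do.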

Beyond this, I think one of the motivations for getting these evaluations out there is to put some pressure on the schools. I have to say that I think our teaching at the university level would be improved if our students had to take standardized tests after each of our courses and we were confronted with evidence on how much (or how little) they learned.


  1. skoolboy says:

    In my opinion, Liebman is grossly distorting what the poll said. He quotes the Quinnipiac poll as concluding that the school grades are fair by a margin of 61 to 27 percent. The question asked in the poll was, "In general, do you think that a report card system is a fair or unfair way of grading NYC public schools?" The question asked about a report card system in the abstract, not the particular report card system that was released in October. And, given that the survey also found that only 33% of respondents knew the results for the schools in their neighborhood, it's hard to conclude that they were making a judgment about the NYC report cards.

    But a more important question I'd like you to address has to do with the uncertainty around estimated test score gains, which comprise the lion's share (55%) of the overall school letter grade. As I understand the calculation of the letter grades, schools are compared to 40 peer schools, and a school is located within the distribution of gains for these 40 schools. But there is no consideration of the uncertainty in these estimated gains, which results in differences among schools being treated as real, when they might be statistically indistinguishable, based on the standard errors of the estimated gains. Did the issue of uncertainty come up in your conversations with Liebman?

  2. jb says:

    Very interesting. Is there adequate evidence supporting the validity of this use/interpretation of student test scores? The parents and teachers quoted in the article seem to be voicing validity concerns, and I think they deserve more than:

    "Standardized tests are a good measure of whether students have learned what they should know."

    The predictive study you propose would be one source of validity-supporting evidence, but such evidence must be established before the test is used in this way.

    Hopefully, your project addresses these concerns (see Chapter 2 of The Standards for Educational and Psychological Testing, AERA, APA, NCME, 1999), and it was the reporter who chose Jim's soundbite rather than presenting real evidence.

  3. Alfred says:

    I think there are way too many variables here. Frankly, I think the main problem with NYC's public schools is that anyone with money automatically throws their kid into a private school. Thus, the public schools just become rejects in the eyes of everyone. This damages the motivation of both the students and the teachers to excel.
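Skoolboy's concern about uncertainty in the estimated gains can be illustrated with a quick simulation (all numbers here are hypothetical; I don't know the actual standard errors in the Department's calculations). Take 40 peer schools whose true gains are identical and rank them on noisy estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers: 40 peer schools whose true score gains are identical,
# but each gain is estimated from a finite sample of students, so every
# estimate carries sampling error (assumed standard error of 0.5 here).
n_peers = 40
se = 0.5
true_gain = np.zeros(n_peers)              # all schools truly equal
est_gain = true_gain + rng.normal(0, se, n_peers)

# Ranking on the point estimates alone spreads identical schools across the
# whole peer distribution: the apparent "best minus worst" gap is pure noise.
spread = est_gain.max() - est_gain.min()
print(f"apparent spread among truly identical schools: {spread:.2f}")

# Compare that gap to the uncertainty in a difference of two estimates
# (+/- 1.96 * sqrt(2) * se for a 95% interval): chance alone can produce
# gaps that look like real quality differences.
ci_halfwidth = 1.96 * np.sqrt(2) * se
print(f"95% half-width for a pairwise difference: {ci_halfwidth:.2f}")
```

If letter-grade cutoffs were applied to these percentile positions, identical schools would land in different grade bands purely by chance, which is exactly the worry about treating differences as real when they are statistically indistinguishable.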