Yesterday, I wrote about the problems surrounding New York City’s release of teacher scores (more information here). I briefly looked at the NY Times website, which to its credit included error terms (not that most people will know what a standard deviation is…), and I think most people are going to be confused. I randomly chose a school, J.H.S. 143 Eleanor Roosevelt, and looked at some of the teacher data. Several teachers teach multiple grade levels, so it was possible to get more than one estimate of each teacher’s performance. Here’s one teacher, who teaches sixth and eighth grade math:
The number in the graphic is the teacher’s percentile ranking, based on how much his or her students improved over the year. The three numbers below the number of students are expressed as standard deviations above or below the citywide mean. Pretty darn good, especially this! Then we look at this eighth grade class:
Uh-oh. We see the same pattern with another teacher, who also appears to be performing worse this year than in previous years. Sixth grade:
The point isn’t to call these teachers out, but to highlight just how variable these scores can be from year to year and from cohort to cohort: it appears that most teachers at this school did worse this year than in previous years. In other words, not all entering classes are alike (and they probably differ among schools too).
And how is a parent supposed to figure out if a teacher is a ‘good teacher’ with this kind of variability?
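To see why single-year percentile rankings bounce around so much, here’s a minimal simulation sketch. All the numbers in it are assumptions I made up for illustration (teacher "true effect" SD of 0.1, student-level noise SD of 0.5, classes of 30 students), not anything from the NYC data, but the mechanism is the one at issue: each year's score is an average over one small cohort, so the noise swamps the signal and the same teacher's percentile can swing wildly between years.

```python
import random
import statistics

random.seed(1)

N_TEACHERS = 1000
CLASS_SIZE = 30  # assumed cohort size; purely illustrative

# Hypothetical setup: each teacher has a small, stable "true" effect,
# but each year's observed score is that effect plus the average of
# one cohort's worth of noisy student gains.
true_effect = [random.gauss(0, 0.1) for _ in range(N_TEACHERS)]

def yearly_score(effect):
    # Average gain over CLASS_SIZE students (student-level SD = 0.5).
    gains = [effect + random.gauss(0, 0.5) for _ in range(CLASS_SIZE)]
    return statistics.mean(gains)

def percentile_ranks(scores):
    # Rank each teacher 0-100 by sorting the observed scores.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    for position, i in enumerate(order):
        ranks[i] = 100 * position / (len(scores) - 1)
    return ranks

# Two "years" of rankings for the same teachers with the same true effects.
year1 = percentile_ranks([yearly_score(e) for e in true_effect])
year2 = percentile_ranks([yearly_score(e) for e in true_effect])

# How far does a typical teacher's percentile move between years?
swings = [abs(a - b) for a, b in zip(year1, year2)]
print(f"median year-to-year percentile swing: {statistics.median(swings):.0f} points")
```

Under these made-up parameters, the cohort noise per year (0.5/√30 ≈ 0.09 SD) is about as large as the spread in true teacher effects, so a teacher's ranking in any one year tells you roughly as much about that year's class as about the teacher.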
As Bill Gates pointed out, only an idiot would use personnel evaluations this way.
This is not going to improve the quality of teaching at all.
An aside: I’ll go out on a limb and speculate that if the yearly data were released, then we would see just how variable these scores really are.