How to Game Educational Metrics

Before I get to an interesting Boston Globe op-ed, it's worth noting that how 'good schools' are defined depends on the details. One definition simply measures how well students perform and uses that to decide whether a school is 'good' (i.e., a score, grade, or ranking). Another uses some method of estimating how much progress a student has made, the idea being that not all students start in the same place, so schools shouldn't be penalized or rewarded for drawing from low- or high-performing student populations. There are statistical and methodological issues with these approaches, but at least they're a step, or a drunken lurch, in the right direction.
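To make the distinction concrete, here's a minimal sketch with made-up numbers (the school names, scores, and function names are all hypothetical, not anything from MCAS): an absolute-score measure can rank a school with high-performing students above one whose students are actually gaining more.

```python
# Hypothetical data: School A draws high-performing students,
# School B draws low-performing students who are improving quickly.
prior = {"School A": [85, 90, 88], "School B": [55, 60, 58]}
current = {"School A": [86, 91, 88], "School B": [65, 72, 70]}

def absolute_score(school):
    """Mean of current scores: the 'how well do students perform' definition."""
    scores = current[school]
    return sum(scores) / len(scores)

def mean_growth(school):
    """Mean gain over each student's prior score: the 'how much progress' definition."""
    gains = [c - p for p, c in zip(prior[school], current[school])]
    return sum(gains) / len(gains)

# By absolute scores, School A looks 'better' (~88.3 vs. 69.0);
# by growth, School B does (~0.7 points gained vs. ~11.3).
print(absolute_score("School A"), absolute_score("School B"))
print(mean_growth("School A"), mean_growth("School B"))
```

Real growth models (like the Student Growth Percentile discussed below) are far more elaborate, comparing each student against academic peers with similar score histories, but the basic inversion this toy example shows is the point at issue.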

That said, we turn our attention to the Commonwealth of Massachusetts (God save it!); boldface mine:

MCAS growth scores — Student Growth Percentile — show how much progress students are making in raising their scores. Based on standardized test scores, they do remain a limited measure of school quality. But growth calculations are far preferable to absolute scores because they better reflect the contribution of the school, and not just a child’s socioeconomic status.

In the past few months, the state board of education moved to include growth scores as one fifth of the calculation that determines the charter cap — bringing it in line with the calculation used for other state accountability purposes. And there was even discussion of moving further in the direction of growth scores. Yet at last month’s board meeting, the board postponed a decision after resistance from charter school advocates, who fear losing the access they’ve long enjoyed. If growth scores are more heavily weighted in the calculation, charter expansion will be directed toward smaller, higher-income districts where parents often assume that traditional public schools are of adequate quality, and where they may not be attracted to alternatives. Their resistance, then, is more about self-preservation than it is about serving students in the weakest schools. But given their political clout, we are now regressing toward a measure that inaccurately and unfairly identifies ineffective schools.

Neither of us has much confidence in the exclusive or near-exclusive reliance on test scores in any permutation to measure something as complex as school quality. In fact, what we favor is a multi-dimensional model that goes far beyond such narrow measures.

Still, if we are going to rely on scores for fundamental decisions that impact so many lives, we have to do so in the fairest way possible. And that means adopting a measure that won’t so blatantly disadvantage schools and districts working with high-needs students.

It’s hard to take education reformers seriously when they are actively gaming the system to suit their own needs, not children’s.


1 Response to How to Game Educational Metrics

  1. Andrew D says:

    This is just another example of Pournelle's Iron Law of Bureaucracy:

    “In any bureaucracy, the people devoted to the benefit of the bureaucracy itself always get in control and those dedicated to the goals the bureaucracy is supposed to accomplish have less and less influence, and sometimes are eliminated entirely.”
