And you can blame the value-added approach. Since 2007, New York City has spent $75 million, in both public and private funds, to examine whether teacher incentive pay, often referred to as merit pay, could increase student achievement. Merit pay, by the way, is a big favorite of education ‘reformers’, and Obama is a big fan who has supported the Teacher Incentive Fund. The result? Epic fail (boldface mine):
New York City’s heralded $75 million experiment in teacher incentive pay — deemed “transcendent” when it was announced in 2007 — did not increase student achievement at all, a new study by the Harvard economist Roland Fryer concludes.
“If anything,” Fryer writes of schools that participated in the program, “student achievement declined.” Fryer and his team used state math and English test scores as the main indicator of academic achievement….
The program…also had no impact on teacher behaviors that researchers measured. These included whether teachers stayed at their schools or in the city school district and how teachers described their job satisfaction and school quality in a survey.
The program had only a “negligible” effect on a list of other measures that includes student attendance, behavioral problems, Regents exam scores, and high school graduation rates, the study found.
So why the failure? One reason is the way success was measured–the reformers’ favorite method, value-added testing (boldface mine):
Fryer rejects several explanations. He argues that the $3,000 bonus (just 4 percent of the average annual teacher salary in the program) was not too small to make a difference, citing examples of effective programs in India and Kenya that gave out bonuses that were an even smaller proportion of teachers’ salaries. He also rejects the possibility that schools’ decision to use group, rather than individual, incentives was the problem, citing a 2002 study of a program in Israel that used group incentives.
Instead, he says the challenge is that American plans aren’t clear about what teachers can do to receive the reward. In New York City, the bonuses didn’t come simply if students’ test scores rose; the test scores had to rise in comparison to a group of similar schools. So did other measures considered by the city report card, including the surveys that ask students, teachers, and parents for subjective opinions about schools.
Fryer argues that the complexity made it “difficult, if not impossible, for teachers to know how much effort they should exert or how that effort influences student achievement.”
It’s not just the complexity, although that certainly didn’t help. It’s the basic inaccuracy of an analytical method with a high level of variability: even if a school did dramatically improve, the scores might not reflect that. Worse, the assessment is obviously zero-sum: if your students improved, but the comparison schools’ students did too (and assuming these are genuine improvements), then no merit pay for you.
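The zero-sum point is easy to see in a toy simulation. A minimal sketch, with entirely made-up numbers: suppose every school genuinely improves by the same amount, but the measured value-added score is noisy, and bonuses go only to schools that beat the median of their comparison group. Even with universal real improvement, roughly half the schools get nothing.

```python
import random

random.seed(0)

N_SCHOOLS = 100
TRUE_GAIN = 5.0   # hypothetical: every school genuinely gains 5 points
NOISE_SD = 8.0    # hypothetical: year-to-year noise in the value-added estimate

# Measured value-added differs across schools only because of noise.
measured = [TRUE_GAIN + random.gauss(0, NOISE_SD) for _ in range(N_SCHOOLS)]

# Relative (zero-sum) rule: a bonus only if you beat the median comparison school.
cutoff = sorted(measured)[N_SCHOOLS // 2]
winners = sum(1 for m in measured if m > cutoff)

print(f"Schools with genuine improvement: {N_SCHOOLS}")
print(f"Schools receiving a bonus: {winners}")
```

No matter how large the true gain is, the relative cutoff guarantees that about half the schools lose, and the noise decides which half.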
Someday, ‘reformers’ will realize that, unlike Wall Street, which has a sociopathic tinge to it, teachers, like many people, are motivated by things other than money. After all, if your ‘successful’ bankster [Romney link] were in such a system, he probably would have found a way to lower the scores of other schools, consequences to the students be damned.
We are governed by fools and sociopaths.