File this under ‘why value-added measurement (aka “VAM”) doesn’t work in practice’ (boldface mine):
Two years ago, I was lauded for my students receiving extremely high state test scores….
Last year, many of my students had earned the highest possible score on the state tests the year prior—a 5 out of 5. That’s how they get into my class of gifted and high-achieving students. Except, last year, the state raised the bar, so the same 5th graders who scored 5s in 4th grade were much less likely to earn 5s in math and reading in 5th grade. Some still DID score 5s in math AND reading, yet were deemed not to have made sufficient progress because they did not score as high within the 5 category as they had the year before.
It’s like expecting every member of an Olympic pole-vaulting team to individually earn a gold medal every time the Olympics come around, regardless of any other factors affecting their lives, with the bar raised another five inches each go-around. In a state where 40% of students pass the 5th grade science test, 100% of my students passed; but no one (at the state level) cares about science scores.
Therefore, I suck.
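The ceiling effect the teacher describes can be sketched numerically. This is a minimal toy illustration, not the state’s actual value-added model: the function name and the scoring scale here are assumptions made up for the example.

```python
# Hypothetical sketch of the ceiling effect described above. The function
# name and the score values are illustrative, not the state's actual model.

def naive_value_added(prior_scores, current_scores):
    """Mean year-over-year change in scores: a toy growth measure."""
    gains = [cur - pri for pri, cur in zip(prior_scores, current_scores)]
    return sum(gains) / len(gains)

# Every student entered at the top band (5) and stayed there, yet the
# growth measure reads zero: there is no room left to "grow".
print(naive_value_added([5, 5, 5, 5], [5, 5, 5, 5]))  # 0.0

# With within-band scaled scores and a raised cut score, students can
# keep their 5s and still register negative "progress":
prior_scaled = [540, 545, 550, 555]    # all comfortably in band 5
current_scaled = [532, 538, 541, 547]  # still band 5, but lower within it
print(naive_value_added(prior_scaled, current_scaled))  # -8.0
```

Students already at the maximum score can only hold steady or fall, so a growth measure applied to a gifted class is biased downward by construction.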
You must understand the limitations of your data.
Maybe those assessments aren’t very robust after all…