The NAEP (National Assessment of Educational Progress) released the 2015 scores, and, surprisingly, the ensuing discussion was calm and supported by rigorous data analysis.
HA HA! I make the funny! Not so much, but that’s not what this post is about. Instead, it’s worth noting that the scores, as I predicted, changed very little. I haven’t gone through state by state, but I’ve looked at the national data, large cities, Alabama, Connecticut, District of Columbia, Florida, Maryland, Massachusetts, Mississippi, New Jersey, and Texas scores broken down by ethnicity and low-income status (as I’ve discussed elsewhere, how poor students are categorized can have very significant effects, but you work with the data you have).
Basically, when you compare the 2013 and 2015 scores, these various demographic and geographic groups decreased, in most cases, by a few points (2–4), though a few groups did worse than that. So this looks like a small, but 'real' decrease. To put this in perspective, a student at the 50th percentile in 2015 would have scored somewhere around the 46th–48th percentile in 2013.
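As a back-of-envelope check on that percentile translation, here's a minimal sketch assuming NAEP scale scores are roughly normally distributed; the ~37-point standard deviation is my assumption for illustration, not an NAEP-published figure:

```python
from statistics import NormalDist

# Assumption: NAEP scale scores are roughly normal with SD ~37 points
# (illustrative value, not an official NAEP statistic).
sd = 37
for drop in (2, 3, 4):
    pct = NormalDist(0, sd).cdf(-drop) * 100
    print(f"{drop}-point drop -> roughly {pct:.0f}th percentile of the 2013 distribution")
```

A 2–4 point drop on a scale with an SD in the mid-30s lands in the 46th–48th percentile range, consistent with the figure above.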
While it’s not worth panicking at this point (not by a long shot), it is worth considering a few hypotheses:
1) The test questions were unintentionally harder this year. The NAEP uses a sampling procedure and students do not face the whole battery of questions. This might lead to a minor, across-the-board decrease.
2) Students are tired of all of the damn tests. Over the last few years, students have been taking a lot more of these kinds of tests, and, unlike the NAEP, those tests matter (in some cases, students can be held back). They might not be motivated to do well on tests that really don't matter, and that their teachers and principals don't care about.
3) Teacher retention issues have lowered scores. Schools across the country are losing experienced teachers. Maybe this costs a point or two? (For the record, I’ve never denied that teachers have a small effect on outcomes).
Unfortunately, all of the discussion about whether we are doing better obscures what is, to me, a neglected issue: the massive state-to-state differences. If we look at white students who are not low income and whose parents graduated from college, in eighth grade math Massachusetts students score 315, while the same students in Alabama score 289 and in Mississippi 295. To put this in context, within this socioeconomic group, the average Alabama student would place at the 19th percentile in Massachusetts while the average Mississippi student would place at the 25th percentile. Put another way, the state-to-state gap for these states is comparable to the economic gap.
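Those percentile placements can be reproduced under the same rough normality assumption; the ~30-point within-group standard deviation below is inferred to make the arithmetic work, not an NAEP-published figure:

```python
from statistics import NormalDist

# Quoted eighth-grade math averages for the non-low-income,
# college-educated-parent subgroup.
ma, al, ms = 315, 289, 295

# Assumption: within-subgroup scores are roughly normal with SD ~30
# points (inferred for illustration, not an official NAEP statistic).
ma_dist = NormalDist(ma, 30)
print(f"Average AL student in MA distribution: {ma_dist.cdf(al)*100:.0f}th percentile")
print(f"Average MS student in MA distribution: {ma_dist.cdf(ms)*100:.0f}th percentile")
```

With an SD around 30, the 26-point Alabama gap and 20-point Mississippi gap land at roughly the 19th and 25th percentiles of the Massachusetts distribution, matching the figures above.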
Yet, once again, this will not be discussed, even though there are probably things we could learn from this.