So Monday’s post seemed to set off the charter school advocates. I’ll blame instructor error, but the little Twitter storm that erupted didn’t seem to deal with the larger–and factually accurate–claim that Massachusetts built one of the best educational systems in the world and in the U.S. and that it might be worth emulating as opposed to changing it. Not only could most states do far worse than Massachusetts, but we really wouldn’t have to reinvent the wheel. There was also no discussion about changing from the MCAS to the PARCC, which I think will also have bad results.
But fucking charters it is. For long-time readers, I’m going to try (and probably fail) to be polite–just this once, even though I was somehow supposed to craft a coherent response on Twitter after carefully reading technical articles while at work (by the way, never assume someone is unfamiliar with the literature. But I’m getting ahead of myself). The charge, led by Adam Ozimek, focused on a couple of papers by Angrist et al.–though not all of the papers (again, getting ahead of things…). They’re good papers*, and essentially compare students who won charter admission lotteries to students who wanted to attend but lost the lottery and wound up in regular public schools (note that ‘unpopular’ charters can’t be assessed with this method). There is one methodological issue**, which is the conflation of free-lunch and reduced-lunch students–in areas with lots of low-income students, failing to separate these two groups can lead to real problems (in poor communities, reduced-price lunch is a sign of relative advantage). But, for the sake of argument, let’s not worry about this (still trying to be polite…).
What Angrist et alia find in the two papers I’ve linked above is that the students in the charters do much better: typically between 0.2 and 0.3 standard deviations per year. That is a really large difference. To put this in context, an increase of that size would be comparable to a gain of six to nine points on the NAEP. While this massive gain makes me think Campbell’s Law might be partially operative here (or something else; see below), we’ll take the data at face value.
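For the arithmetic here: assuming a NAEP score-scale standard deviation of roughly 30 points (my ballpark figure for illustration, not a number taken from the papers), the conversion is just multiplication:

```python
# Back-of-envelope conversion of effect sizes to NAEP points.
# NAEP_SD = 30 is an assumed standard deviation for the NAEP score
# scale, used here for illustration; it is not from the Angrist papers.
NAEP_SD = 30

for effect in (0.2, 0.3):
    print(f"{effect} SD/year is roughly {effect * NAEP_SD:.0f} NAEP points")
```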
Here’s the interesting thing. There’s another study by many of the same people that looked, not at some Boston schools and New Orleans schools, but at charters across the state of Massachusetts. As before, using the same lottery methodology, students in Boston urban charter schools do significantly better (Table 8) than students in regular Boston public schools. However, students in non-urban charter schools do worse than the ‘lottery losers’ in regular non-urban public schools–and the effect is comparable in magnitude to the gains in the urban schools.
Well, damn. Maybe those suburban mothers Arne Duncan despises are on to something after all.
While we’re on Table 8, you’ll notice that an observational comparison of charters to public schools shows much weaker or non-existent gains for the schools that aren’t included in the lottery study. Make of that what you will, but it makes me think the demographic and classroom effects are harder to disentangle than it would appear (note: I’m not accusing anyone of intellectual dishonesty–such dishonesty would have led to not reporting the discrepancy). Given the urban/non-urban and lottery/non-lottery school differences, I have serious doubts about how broadly these findings can or should be applied and whether charters can scale up. It doesn’t seem that charters, even in Massachusetts, are a tried and tested method that would work broadly at a state level–the non-urban results make that pretty clear. Frankly, I’ve never been certain how to translate what is interesting science (these studies) into state-wide policy.
Ok, onto the CREDO reports. There’s a problem with the report strategy, which pairs charter students with public school students typically using these criteria: grade level, gender, race/ethnicity, free or reduced-price lunch status, English language learner status, special education status, and prior score on state achievement tests (e.g., the Ohio report). Pretty good, but we still have a problem. Fundamentally, it is unclear what effects classroom and school population composition have on test scores in this design. In other words, a low-income male African-American child in a low-poverty school could perform much better than one who is demographically identical but in a high-poverty school. This isn’t supposition: it has been observed. This is not an insignificant problem. A smaller problem (perhaps) is that CREDO lumps together variables that need to be differentiated (e.g., reduced- versus free-lunch status, or different kinds of disabilities).
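To make the pairing strategy concrete, here’s a toy sketch of CREDO-style ‘virtual twin’ matching; the field names and the exact-match-plus-nearest-prior-score rule are my illustration of the general idea, not CREDO’s actual algorithm:

```python
# Toy sketch of "virtual twin" matching: pair each charter student with
# a public-school student who matches exactly on the categorical
# covariates and is closest on prior test score. Field names are
# illustrative, not CREDO's actual variables.

def match_key(s):
    return (s["grade"], s["gender"], s["race"], s["lunch"], s["ell"], s["sped"])

def pair_students(charter, public):
    pairs = []
    for c in charter:
        candidates = [p for p in public if match_key(p) == match_key(c)]
        if candidates:
            twin = min(candidates,
                       key=lambda p: abs(p["prior_score"] - c["prior_score"]))
            pairs.append((c, twin))
    return pairs
```

Note what’s absent: nothing in the pairing records the poverty level of the school each student attends, which is exactly the compositional blind spot described above.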
Nonetheless, let’s continue. You’ll note I wrote CREDO reports, plural. Once you start looking at the state-level reports, it’s pretty obvious that within states, charters do better in some school systems, worse in others, and in still others have student performance similar to regular public schools. In Ohio, of the five districts examined, charters in one district perform better, but in three districts they perform worse. In New Jersey (p. 16), the only district where charters performed better is Newark, where, coincidentally, some guy named Zuckerberg spent a lot of money, while in every other district charters performed worse (differences in spending aren’t accounted for, and they should be***). Even at the state level, it’s a mixed bag.
So onto some normative things. One could argue that the differences among charter schools have to do with different educational philosophies and styles. ‘Boot Up the Ass’ charters seem to do better in urban areas (though this is by no means a settled issue). It’s not clear how well subject matter test scores lead to improved long-term life outcomes–and if the argument is that education should integrate children into mainstream society, then that certainly matters. I’m sure Eva Moskowitz’s Success Academies will get higher scores (selective attrition and other issues notwithstanding). After all, Moskowitz proudly boasts of turning small children into “test-taking machines” (her phrase, not mine). That doesn’t sound like education to me. Might even border on abuse. No upper-middle class or wealthy white family would subject their kids to that sort of education.
One could make the argument that the Boot Up the Ass charter philosophy should be applied only to poor children, and serve as the general (that is reading and mathematics) equivalent of language immersion. But it’s not clear that’s what parents want. Yes, in Boston, given the choice of sending their kids to Madison Park, a public school, or to a charter, many parents would choose the charter. Why? Because Madison Park, at last count, didn’t have enough books. Books. I suspect, given their druthers (and I wish someone would take on the druther crisis), these parents would rather send their kids to the Newton public schools. Or the Brookline public schools. Or Lexington’s.
Those schools don’t use a Boot Up the Ass philosophy, but they do have more resources–which matters: well-to-do white people, even if they preach hard-assed charters, don’t send their children to them. That seems noteworthy.
So to return to the original point I was trying to make in a previous post: Massachusetts, compared to other states, has done very well–it performs better than Blessed Finland! It’s not perfect, but it’s a system that could be widely adopted (and some states are adopting it). At the same time, it’s not clear that charters scale, and their effects, even within a state, are all over the place.
Don’t reinvent a square wheel, go with what works.
*One can have arguments about the appropriateness of linear regression, but I’ve learned in my years in science that arguing about these sorts of things is never productive and only makes people very angry.
**Another issue is classroom composition. Typically, principals assign students to classes based on academic performance, and mostly ignore the socioeconomic variables that pundits on all sides, including me, fixate upon. Since teaching occurs at the classroom level, not the school level, this seems important (and for the statistically inclined, this is probably best represented by a nested model to boot). One good point about the Angrist studies is that, unlike earlier studies, students transferred out of charters aren’t ‘charged to’ public schools.
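For the statistically inclined, a minimal sketch of what such a nested model might look like (my notation, purely illustrative; the papers discussed here do not fit this particular specification):

```latex
% Students i nested in classrooms j nested in schools k.
y_{ijk} = \beta_0 + \beta_1\,\mathrm{charter}_k + u_k + v_{jk} + \varepsilon_{ijk}
% u_k \sim N(0, \sigma^2_{\mathrm{school}}) : school-level random effect
% v_{jk} \sim N(0, \sigma^2_{\mathrm{class}}) : classroom-within-school random effect
% \varepsilon_{ijk} \sim N(0, \sigma^2) : student-level residual
```

The classroom-within-school term is the one that captures the composition effects the footnote is worried about.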
***More frustrating is the failure to account for how equal sums (or for that matter, unequal sums) of money are spent on different students. For example, having fewer special-needs students, and students with less severe needs, frees up more money for ‘regular’ students. Where it has been examined, charters have fewer special-needs students and students with less severe needs (first figure).
From the New York Times: “Report Faults Charter School Rules on Discipline of Students,” by Elizabeth A. Harris, Feb. 11, 2015.
It seems that some charters may be expelling students in situations where the law would prohibit a public school from doing so. The charters’ apparent defense: we are not really public schools, we just take public money.
“One good point about the Angrist studies is that, unlike earlier studies, students transferred out of charters aren’t ‘charged to’ public schools.”
But according to the Angrist paper I looked at
“The effects of charter attendance are modeled as a function of years spent attending a charter school.”
Given the aggressive way many no-excuse charters force out kids who look like bad fits (http://observationalepidemiology.blogspot.com/2015/02/repost-selection-on-spinach.html), lots of these kids will have very short tenures. This probably means that a lot of students who would have been low-score/high-charter-years had they stayed where they were assigned by the lottery have been shifted to the low-score/low-charter-years category. Isn’t this, in effect, charging them to public schools?