Recently, there has been a slew of TV reports, tweets, and the like with statistics like "[some really high percentage] of hospitalized patients are unvaccinated," arguing that vaccination is a very good thing. To preempt any anti-vax assholes–seriously, just go fuck yourselves–the vaccines are really effective, especially at stopping serious disease. But I haven't liked those 'popular' comparisons because they don't account for time and they conflate geography.
The time issue can be illustrated using, as an example… me. I became fully vaccinated on May 20. If we assume I was at risk for COVID-19 infection beginning March 1, 2020, I had nearly fifteen months of unvaccinated exposure, but only a bit under three months of vaccinated exposure. Clearly, these two lengths of time are not equal. Even if the vaccine had no effect at all, any infection would more likely have occurred during my pre-vaccine phase, simply because it was a much longer period of time. Moreover, the risk of exposure has varied: it was relatively low during June 2020, but much higher during December 2020. So not accounting for these time issues runs straight into the 'apples versus oranges' problem.
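If you want to see the arithmetic, here's a back-of-the-envelope sketch in Python–the exposure windows are mine, everything else is made up purely for illustration:

```python
# Toy illustration (made-up numbers): why raw counts of vaccinated vs.
# unvaccinated cases mislead if you ignore person-time.

# My own exposure windows, in weeks (March 1, 2020 to May 20, 2021 is roughly
# 64 weeks; May 20 to mid-August 2021 is roughly 12 weeks).
unvaccinated_weeks = 64
vaccinated_weeks = 12

# Suppose the vaccine did absolutely nothing and the weekly infection hazard
# were constant. The chance that a single infection lands in the pre-vaccine
# window is just that window's share of total person-time:
p_prevaccine = unvaccinated_weeks / (unvaccinated_weeks + vaccinated_weeks)
print(f"P(infection lands pre-vaccine | vaccine useless): {p_prevaccine:.0%}")  # ~84%

# So "most of my risk was while unvaccinated" says very little on its own:
# you have to compare rates (cases per person-week), not raw counts.
```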
Lumping together different locations also screws things up ("screws things up" is a highly technical statistical term). Vermont and South Dakota simply have had different COVID-19 risks throughout the pandemic. And, of course, those risks often haven't been proportional through time either: sometimes the Vermont-South Dakota gap has been relatively small, and sometimes it has been very BIGLY. So, again, while it's obvious vaccination decreases the risk of hospitalization (and death), many of these 'common sense' statistics aren't really good estimates.
But a recent manuscript does describe these risks very well and gets around some of these problems (pdf). By its own admission, it’s a descriptive paper, but that’s what we need. The important methodological tool is this (boldface mine):
[T]he risk ratios shown were calculated as ratios of incidence rates per 100,000 person-weeks using person-time calculated from the daily counts of people fully vaccinated from reference 5, not the cumulative Fully Vaccinated (N) total at the end of the data period…
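To spell out what that buys you, here's a rough sketch of the calculation as I read it. The numbers and variable names are mine, not the paper's, and the efficacy formula at the end (one minus the vaccinated-to-unvaccinated rate ratio) is the standard one, which I'm assuming is roughly what they did:

```python
# Rough sketch of a per-state calculation using person-time from daily
# vaccination counts (all numbers below are fabricated for illustration).

# Daily counts of fully vaccinated people in a state over the study window
# (here just a tiny fake series), plus the state population.
daily_fully_vaccinated = [100_000, 102_000, 105_000, 109_000]  # people per day
state_population = 1_000_000

# Person-time: sum the daily headcounts and convert days to weeks, rather than
# multiplying the end-of-period vaccinated total by the whole study length.
vaccinated_person_weeks = sum(daily_fully_vaccinated) / 7
unvaccinated_person_weeks = sum(state_population - v for v in daily_fully_vaccinated) / 7

# Outcome counts (cases, hospitalizations, or deaths) by vaccination status
# over the same window (again, made up).
cases_vaccinated = 8
cases_unvaccinated = 520

# Incidence rates per 100,000 person-weeks.
rate_vaccinated = cases_vaccinated / vaccinated_person_weeks * 100_000
rate_unvaccinated = cases_unvaccinated / unvaccinated_person_weeks * 100_000

# Risk ratio (unvaccinated vs. fully vaccinated) and the implied efficacy,
# VE = 1 - (rate_vaccinated / rate_unvaccinated) = 1 - 1/RR.
risk_ratio = rate_unvaccinated / rate_vaccinated
vaccine_efficacy = 1 - 1 / risk_ratio

print(f"unvaccinated rate: {rate_unvaccinated:.1f} per 100k person-weeks")
print(f"vaccinated rate:   {rate_vaccinated:.1f} per 100k person-weeks")
print(f"risk ratio: {risk_ratio:.1f}, implied efficacy: {vaccine_efficacy:.1%}")
```

The point of using the daily counts is that someone vaccinated in late June only contributes a few weeks of vaccinated person-time, not the whole study period.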
The paper looks at twelve states from January through July 2021 and calculates a 'risk ratio'–the risk for unvaccinated people relative to fully vaccinated people–for each state during that period; it also estimates a vaccine efficacy for each state. Here's the key table (p. 9 in the pdf):
Computer, enhance!
You can see two things:
- The vaccines are really good at preventing bad outcomes. Even in Arkansas!
- The efficacy of vaccination varies a lot from state to state.
I have no idea why state-to-state differences exist (I could bullshit up some reasons, but life’s too short).
But, anyway, looking at geographic regions separately and using "person-weeks" is a good way to do this sort of description.