Last week, we discussed the implications of Bayes' Theorem on estimating the actual prevalence. Specifically (foreshadowing with a bad pun here), the specificity of a test–the probability that someone who is seronegative correctly tests negative–is really important, because it determines how many of the people who test seropositive actually aren't. As some asshole with a blog put it:
Let’s walk through an example. Suppose I test 2,000 people, and being the All-Knowing Mad Biologist, I know that twenty of those people really are seropositive. Now, to keep things simple, suppose the sensitivity of the test is 100% (everyone who is seropositive is correctly identified) and the specificity of the test is 95% (five out of 100 people who are seronegative incorrectly test as seropositive). The specificity is especially important because we don’t want to accidentally claim someone is seropositive (and thus could be safe from COVID-19) when they’re not.
In my example, there are twenty real positives. But five percent of the remaining 1,980 seronegative people also falsely test positive: 99 false positives. That means only 20 out of 119 people who test positive (~17%, or roughly one out of six) are actually protected from COVID-19. That’s not good.
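The arithmetic in the example above is simple enough to sketch in a few lines of Python (the numbers–2,000 tested, 20 truly seropositive, 100% sensitivity, 95% specificity–are the ones from the example, not real data):

```python
# Worked example: 2,000 people tested, 20 truly seropositive,
# sensitivity 100%, specificity 95%.
tested = 2000
truly_seropositive = 20
sensitivity = 1.00
specificity = 0.95

seronegative = tested - truly_seropositive        # 1,980 truly seronegative
true_pos = sensitivity * truly_seropositive       # 20 correctly flagged
false_pos = (1 - specificity) * seronegative      # 99 falsely flagged

# Positive predictive value: of those who test positive,
# what fraction really are seropositive?
ppv = true_pos / (true_pos + false_pos)
print(f"False positives: {false_pos:.0f}")                   # 99
print(f"Fraction of positives truly seropositive: {ppv:.1%}")  # 16.8%
```

So even with a perfect sensitivity, a 95%-specific test swamps the twenty real positives with 99 false ones.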
To be clear, this isn’t some stunning HOT TAKE the Mad Biologist has had–Bayes developed this argument in 1763.
I bring this up because there appears to be a cottage industry of doctors who conduct serological COVID-19 tests–and too many credulous reporters. For example, here’s a case from D.C. (boldface mine):
Eileen West, the doctor who tested Hughes, is using a test awaiting FDA approval, manufactured by Hangzhou Biotest Biotech and distributed by Premier Biotech. The same test was used in a Stanford University trial published last week, West said, adding that it is considered to be “well over 90 percent accurate.”
West said she is about a third of the way through administering 360 tests as part of an initiative by Ms. Medicine, a women’s health-care network that is also offering 1,200 drive-through tests in Cincinnati.
Of 110 she has administered, five have been positive, she said, adding that two of the patients with antibodies had had household contact with confirmed coronavirus cases and one had been exposed to someone in the workplace who had it. Among them, only Hughes reported feeling ill as early as February, she said.
West said the Fairfax County Health Department told her it did not have a system in place for reporting antibody test results, and she had not yet tried the D.C. health department. “The antibody information is so new it isn’t clear they desire that information as yet,” she said.
The rate of positive tests in her study closely matches the 4 percent rate found in antibody tests in Santa Clara, Calif., West said, adding that serology information could help shed light on other potential early undiagnosed cases.
A recent, not-yet-peer-reviewed study compared multiple tests, including the Premier test, and, depending on how one assesses the data*, it appears to have a specificity of 98 percent and a sensitivity of about the same (98%, meaning 98% of seropositive people actually test positive, and two percent don’t). What that means is the observed four percent seropositive rate probably corresponds to an actual seropositive rate of around two percent in D.C., which seems more likely, since about 0.6% of D.C. residents have tested positive, though that’s likely an undercount for various reasons.
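The back-calculation here assumes the standard mixing model: the observed positive rate is sensitivity × p + (1 − specificity) × (1 − p), where p is the true seropositive rate. Solving for p is a one-liner; a quick Python sketch:

```python
# Recover the true seropositive rate p from the observed positive-test
# rate, assuming: observed = sens * p + (1 - spec) * (1 - p).
def true_prevalence(observed, sensitivity, specificity):
    false_positive_rate = 1 - specificity
    return (observed - false_positive_rate) / (sensitivity - false_positive_rate)

# Observed 4% positive with a 98%/98% test:
print(f"{true_prevalence(0.04, 0.98, 0.98):.1%}")  # 2.1%

# The footnote's bracketing cases:
print(f"{true_prevalence(0.04, 0.97, 0.97):.1%}")  # 1.1%
print(f"{true_prevalence(0.04, 0.99, 0.99):.1%}")  # 3.1%
```

This is the simple algebraic correction, not a full Bayesian treatment with uncertainty on the test characteristics, but it shows where the "around two percent" figure comes from.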
The point is, if news organizations are going to report on these numbers, they have to put them into the proper context. Even highly accurate tests, when looking for relatively rare events, will yield a lot of false positives, and readers need to be made aware of that.
*97%/97% gets you an actual seropositive rate of around 1%, while 99%/99% gets you around 3%, which seems to be a reasonable interval.