A long thread about test positivity: Tracking positivity is important. Unlike the raw number of tests, positivity is linked to the number of infections out there. It can help give us a sense of whether a state or country is doing enough testing for the size of its epidemic.
Positivity can be a measure of whether we are doing enough tests. As a standalone metric, it is NOT a measure of prevalence or incidence of infection.
Increasing positivity can be a sign that disease incidence is increasing. If the number of cases reported per day is increasing AND positivity is increasing, it tells us that infections are likely increasing faster than the number of tests being performed.
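A toy worked example of that logic, using made-up numbers (not real data), written as a short Python sketch:

```python
# Hypothetical weekly totals to illustrate the signal (not real data).
weeks = [("week 1", 10_000, 500),   # positivity 5.0%
         ("week 2", 12_000, 900)]   # positivity 7.5%

for label, tests, positives in weeks:
    print(f"{label}: {positives} positives / {tests} tests = {positives / tests:.1%}")

# Tests grew 20% week over week, but positives grew 80%: rising case counts
# AND rising positivity together suggest infections are outpacing testing.
```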
But positivity is not a perfect metric. You can have a low positivity if you test all the wrong people. You can lower your positivity by adding repeat tests that don’t yield additional public health insights.
Ideally, to track positivity, we'd be looking at the % of people tested on a given day or week that test positive. We care about PEOPLE more than tests because we are trying to answer the question: "Are we doing enough testing to find the infections that are out there?"
If we are trying to figure out if we are doing enough testing to find the infections that are occurring, here is what we DON’T CARE about: antibody tests, additional tests on people who have already tested positive, and multiple tests performed on the same person at the same time.
To determine if we’re casting a wide enough net to find infections, we do not care about antibody tests because they are not supposed to be used to diagnose infection. Antibody tests should NOT be included in positivity calculations.
Positivity will be most meaningful if we use the number of people tested, not the number of tests.
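A rough sketch of what a people-based calculation could look like, assuming line-level test records with hypothetical fields (person_id, test_type, result); this is illustrative only, not any state's actual pipeline:

```python
# Illustrative person-level positivity: count each person once and drop
# antibody tests. Field names (person_id, test_type, result) are hypothetical.
records = [
    {"person_id": "A", "test_type": "pcr",      "result": "positive"},
    {"person_id": "A", "test_type": "pcr",      "result": "positive"},  # repeat test, deduped
    {"person_id": "B", "test_type": "pcr",      "result": "negative"},
    {"person_id": "C", "test_type": "antibody", "result": "positive"},  # excluded
]

tested, positive = set(), set()
for r in records:
    if r["test_type"] == "antibody":   # not a diagnostic test
        continue
    tested.add(r["person_id"])
    if r["result"] == "positive":
        positive.add(r["person_id"])

print(f"people-based positivity: {len(positive) / len(tested):.0%}")  # 1/2 = 50%
```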
Many states are unable to track numbers of individual people tested. This is understandable, but not ideal. Some states are unable to separate viral tests and antibody tests. This is not good.
Since states often can’t identify the number of people tested in their lab information systems, they look at the number of tests performed. This is not wrong, but it is not ideal, because people may be tested multiple times in a single day.
When we started the @JohnsHopkins COVID Testing Insights Initiative, states were not tracking positivity. Many were not regularly reporting positive or negative tests. https://coronavirus.jhu.edu/testing
For the @JohnsHopkins COVID Testing Insights Initiative, we calculate positivity as cases / (cases + negative tests). We report 7-day running averages. We started this way because that’s all that used to be reported. Some states report # positive tests (not just cases); others don’t.
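A minimal sketch of that calculation as a 7-day running average, using made-up daily counts (the initiative's real pipeline surely differs):

```python
# 7-day running-average positivity, computed as cases / (cases + negatives).
# Daily counts below are made up for illustration.
daily_cases     = [100, 120, 90, 150, 130, 110, 140, 160, 170]
daily_negatives = [900, 880, 910, 850, 870, 890, 860, 840, 830]

window = 7
for i in range(window - 1, len(daily_cases)):
    cases = sum(daily_cases[i - window + 1 : i + 1])
    negs  = sum(daily_negatives[i - window + 1 : i + 1])
    print(f"day {i + 1}: {cases / (cases + negs):.1%}")
```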
We source our data from our colleagues @COVID19Tracking project. They have been enormously important in tracking not just data but also data quality and variation among states. Their efforts have led to greater access to data and transparency.
As is common in public health surveillance, data on tests are imperfect. Each state reports data differently. This variability can make precise tracking hard and makes it difficult to compare states directly.
Our positivity calculations differ from those of states that look at # positive tests / # total tests. E.g., our approach may not include people who are tested again after they are identified as a case. https://coronavirus.jhu.edu/testing
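A toy comparison of the two formulas, with hypothetical counts, shows how repeat tests on known cases can pull the numbers apart:

```python
# Hypothetical counts illustrating how the two formulas diverge.
cases          = 100   # unique people newly identified as positive
negative_tests = 900
repeat_pos     = 50    # extra positive tests on already-identified cases

ours   = cases / (cases + negative_tests)                              # cases / (cases + negatives)
states = (cases + repeat_pos) / (cases + repeat_pos + negative_tests)  # positives / total tests

print(f"cases-based: {ours:.1%}")    # 10.0%
print(f"tests-based: {states:.1%}")  # 14.3%
```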
We’ve heard concerns from several states about discrepancies between their calculations and ours. In nearly all instances, the difference is small and insignificant from a public health perspective.
The 5% positivity benchmark is not gospel. It’s a guide. Is 4% better than 6%? Maybe. We don’t have clear data. Lower is likely better, but only if testing is well-distributed and timely.
Many states are using positivity as a benchmark for reopening or for implementing control measures, such as interstate travel restrictions. This may overstate what positivity is telling us.
Positivity should not be used on its own as the basis for high consequence decisions. It should be used to help interpret the strength of states’ reported case numbers and case finding efforts.
But again, positivity does not tell us how much infection is occurring in a state. Positivity can be misleading if the wrong people are tested.
Right now, the biggest threat facing states is long delays in getting test results due to demand for testing exceeding available capacity. Test turnaround time is possibly now the most important metric that states aren’t reporting.
It doesn’t matter how many tests a state does or what its positivity is if test results come back too late to act upon them.
If states are unable to use tests to identify and isolate infections, and to trace and quarantine contacts of cases, then testing is useless. Infections, case numbers, and, ultimately, positivity will eventually increase.
So, in summary: positivity is an important but imperfect metric. It can inform our interpretation of case numbers and the adequacy of testing efforts. But no single measure can tell us everything. Test positivity is not an end in itself.