COVID-19 testing – False Positives: Misleading Results or False Narratives?
COVID-19 Actuaries Response Group – Learn. Share. Educate. Influence.
COVID-19 case numbers in the UK have been increasing significantly in recent weeks. There has been a lot of discussion of whether this increase reflects a real higher rate of infection or a large number of “false positives” (people who are not actually infected testing positive for COVID-19).
Whilst some positive tests will be false positives (and equally some negative tests will be false negatives), based on our understanding of the sensitivity and specificity of COVID-19 tests, we believe that it is highly likely that the vast majority of positive tests represent true positives, and that the testing results fairly represent the level of infectivity in the tested population.
Sensitivity and specificity
When someone takes a COVID-19 test, they are either infected or not, and they will receive either a positive or a negative result.
If the test were perfect, every individual who received a test and was infected would get a positive test result, whereas every individual who received a test and was not infected would get a negative result. In this case, all of the positive and negative results would be “true” negatives or positives.
In the real world, of course, tests are not perfect and not everyone will receive the correct test result. It is helpful to know how likely it is that a test will either incorrectly identify or fail to identify a disease.
A test’s ability to correctly detect people with the condition is known as the test’s “sensitivity” – that is, the probability of a positive test, given that the person has the disease. Conversely, the test’s ability to correctly detect those who do not have the disease is known as its “specificity” – the probability of a negative test, given that the person does not have the disease.
The outcomes of a test, depending on whether someone is infected or not, and whether they test positive or not, can be placed in a simple 2×2 table:
| | Test positive | Test negative |
| --- | --- | --- |
| Infected | True positive (TP) | False negative (FN) |
| Not infected | False positive (FP) | True negative (TN) |
The sensitivity of the test is simply the number of true positives divided by the total number infected: TP / (TP + FN).
The specificity is the number of true negatives divided by the total number not infected: TN / (FP + TN).
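These two definitions can be written as a short sketch (Python; the counts are purely illustrative, not taken from any real test):

```python
def sensitivity(tp: int, fn: int) -> float:
    """P(test positive | infected) = TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """P(test negative | not infected) = TN / (FP + TN)."""
    return tn / (fp + tn)

# Illustrative counts only: 90 of 100 infected people test positive,
# and 999 of 1,000 uninfected people test negative.
print(sensitivity(tp=90, fn=10))   # 0.9
print(specificity(tn=999, fp=1))   # 0.999
```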
One point to note is that the specificity of high-volume COVID-19 tests is known to be very high – because a low proportion of tests are positive, even if all those positives were false, the test would still be giving true negatives in the large majority of cases. The sensitivity of tests is not known with as much certainty.
The most common form of testing in the UK currently is polymerase chain reaction (PCR) testing – this is a swab test which determines whether the SARS-CoV-2 virus is present in the sample. The ONS estimates that the sensitivity of the PCR tests is between 85% and 98%, and the specificity is at least 99.92% (link). The sensitivity and specificity of other tests will vary.
What is the prevalence of COVID-19?
The other key factor when considering the relative importance of false positives and negatives is the prevalence of COVID-19, in the wider population and the tested population. All else being equal, if the true positive rate is lower, the proportion of false positives will be higher.
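This relationship can be made concrete via the positive predictive value – the proportion of positive results that are true positives. A minimal sketch, using illustrative figures of 90% sensitivity and 99.9% specificity (broadly in line with the ONS estimates above):

```python
def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Positive predictive value: P(infected | test positive)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Same test, different prevalence: the proportion of positive results
# that are true rises sharply as prevalence rises.
print(ppv(0.006, 0.90, 0.999))  # ~0.84 at 0.6% prevalence
print(ppv(0.05, 0.90, 0.999))   # ~0.98 at 5% prevalence
```

At 0.6% prevalence roughly one positive in six is false; at 5% prevalence only around one in fifty is.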
Estimates of the current prevalence of COVID-19 in the population could be seen as being slightly circular because they are based on the proportion of positive tests, some of which may be false positives. However, we can illustrate some scenarios to shed light on the official estimates.
The ONS Infection Survey (link) provides estimates of the overall prevalence of COVID-19 in the population, based on the results of swab tests on a subset of individuals. At the end of July, the ONS estimated that, if selected at random, around 1 in every 2,000 people (0.05%) would have tested positive. This estimate rose to 1 in 500 (0.21%) in mid‑September and 1 in 150 (0.62%) at the start of October; for simplicity we’ll use a figure of 0.6% in our worked examples.
Prevalence in the tested population
Since 1 August (data is not provided on a consistent basis before then), the proportion of tests returning positive results has been regularly between 8 and 12 times higher than the ONS estimate, with the lowest point being around 0.5% at the start of August (link). The latest proportions testing positive are consistently above 5%, according to the latest PHE surveillance report (link).
It is possible for PCR tests to return positive results after an individual has ceased to be actively infected. However, because the lowest positive figure of 0.5% from August represents all positives (true and false, actively infected or not) and came after the peak of the pandemic (so there were relatively large numbers of individuals who had already been infected), this indicates that the proportion of PCR tests returning positive results for people who are no longer infected is likely to be very low.
As noted above, the prevalence in the tested population is much higher than in the wider population – which is intuitively what we would expect, since people generally only take a test if they have symptoms or otherwise perceive themselves to be at risk. Below we estimate the number of true and false positives and negatives under a few different scenarios. These indicate that, unless highly implausible assumptions are made about the testing regime, the prevalence in the tested population is consistent with the level seen in the test results.
Scenario 1: best estimate view
Our first scenario is:
- 0.6% of the overall population are currently infected (as per the ONS estimate)
- 5% of the tested population are infected
- 300,000 tests, with:
  - specificity of 99.9%
  - sensitivity of 90%
The specificity and sensitivity are based on the ONS estimates given above. Since the 99.92% figure was an absolute minimum, the true specificity of the ONS tests could be higher; however, we have assumed a slightly lower figure of 99.9% to allow for community testing potentially having lower specificity – not all testers may be as experienced as those used in the Infection Survey, and the Infection Survey sometimes carries out multiple tests to confirm positive results.
If we tested 300,000 people at random from the population, based on a 0.6% infection rate, we might expect 1,800 to have COVID-19. But people who receive tests are either those who are at high risk of having been infected (for example, due to their occupation), or those who are exhibiting symptoms that could be COVID-19. So it is quite possible that the proportion of the tested population infected is much higher than the background rate.
This scenario broadly represents our best estimate view of the current situation.
If 5% of the tested population are infected (just over 8 times higher than the background level), then of 300,000 tests we’d expect 15,000 people to be infected. The tests would detect 13,500 of these (90% x 15,000), leaving 1,500 false negatives.
Of the 285,000 not infected, we’d expect to see 285 false positives ((1-99.9%) x 285,000).
Filling in the table, we get:
| Scenario 1: Best estimate | Test negative | Test positive | Total |
| --- | --- | --- | --- |
| Infected | 1,500 (FN) | 13,500 (TP) | 15,000 |
| Not infected | 284,715 (TN) | 285 (FP) | 285,000 |
| Total | 286,215 | 13,785 | 300,000 |
In this scenario, false negatives are potentially a larger issue than false positives – only 2% of positives are false, and there are 5 times as many false negatives. The true level of infection in the tested population is therefore higher than the tests suggest.
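The arithmetic behind Scenario 1 can be reproduced with a short sketch (Python; the function is our own construction from the assumptions above, not part of any official methodology):

```python
def scenario(tests, prevalence, sensitivity, specificity):
    """Expected confusion-matrix counts for a testing scenario."""
    infected = tests * prevalence
    not_infected = tests - infected
    return {
        "TP": infected * sensitivity,            # true positives
        "FN": infected * (1 - sensitivity),      # false negatives
        "FP": not_infected * (1 - specificity),  # false positives
        "TN": not_infected * specificity,        # true negatives
    }

# Scenario 1 assumptions from the text.
s1 = scenario(tests=300_000, prevalence=0.05,
              sensitivity=0.90, specificity=0.999)
for name, count in s1.items():
    print(name, round(count))  # TP 13500, FN 1500, FP 285, TN 284715

# Share of positive results that are false: roughly 2%.
print(s1["FP"] / (s1["TP"] + s1["FP"]))
```

The same function, with the assumptions changed, reproduces the other scenarios below.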
Scenario 2: the sensitivity and specificity of population-level tests are lower than the ONS estimates suggest
It is possible the sensitivity and specificity of the population-level tests are lower than the ONS figures. However, as noted above, there were days in August where only 0.5% of tests came back positive – even if all of these were false positives, this suggests that the specificity of community testing must be at least 99.5%.
Using a specificity of 99.5% and sensitivity of 85%, can we construct a scenario whereby the infection level in the tested population is much lower than that implied by the results? In fact, we still need to model a true infection level of close to 5% in the tested population to reach 5% of tests coming back positive.
| Scenario 2: Reduced specificity and sensitivity | Test negative | Test positive | Total |
| --- | --- | --- | --- |
| Infected | 2,250 (FN) | 12,750 (TP) | 15,000 |
| Not infected | 283,575 (TN) | 1,425 (FP) | 285,000 |
| Total | 285,825 | 14,175 | 300,000 |
In this scenario, around 10% of positives are false – but there are still more false negatives than false positives, so the true picture is not materially mis-stated (as before, infections are actually understated if anything).
By using a specificity of 99.5%, this scenario implicitly assumes that either all positive tests in August were false, or that the specificity of testing has got much worse since August. We believe this to be unlikely. For this reason, this scenario is likely to be more pessimistic than a plausible worst-case scenario for false positives, in the sense that it is likely that the true false positive rate is much closer to scenario 1 than scenario 2.
Scenario 3: extreme false positives
There has been a persistent narrative in recent weeks that 90% of positive tests could be false, and that the true prevalence in the tested population could actually be in line with the wider population rather than much higher. In this scenario, we assume that the true prevalence in the tested population is only 0.6%, and stretch the sensitivity and specificity to fit the narrative: we use a specificity of 95% and a sensitivity of 85%.
Note that this scenario is produced to indicate the extent to which the assumptions need to be changed to reach the number of positive tests currently being seen in the population, rather than to suggest that this is in any sense realistic, or even plausible.
In particular, as noted in the previous scenarios, anyone suggesting that the true specificity of these tests is as low as 95% will need to explain how this squares with the fact that only 0.5% of tests came back positive at times in the past.
| Scenario 3: Extreme false positives | Test negative | Test positive | Total |
| --- | --- | --- | --- |
| Infected | 270 (FN) | 1,530 (TP) | 1,800 |
| Not infected | 283,290 (TN) | 14,910 (FP) | 298,200 |
| Total | 283,560 | 16,440 | 300,000 |
This scenario gives a false positive percentage of 91%. In our view, to reach this sort of level of false positive results, a large number of factors have to be significantly wrong in the testing regime, both in the community and in the ONS tests.
Would false positives explain the growth in numbers?
A scenario where the testing regime is inherently so flawed as to produce this level of false positive tests also does not explain why the proportion of positive tests is increasing rapidly. If 90+% of tests are false positives with a very low background rate of infection, then the proportion of positive tests should remain constant (if true infection rates are very low, the proportion of negative tests would be broadly equal to the specificity, and the proportion of positive tests would be the remainder).
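This floor on the proportion of positive tests follows directly from the definition of specificity, and can be checked with a one-line calculation (Python; our own illustration, using the sensitivity figures from the scenarios above):

```python
def positive_rate(prevalence: float, sensitivity: float,
                  specificity: float) -> float:
    """Expected proportion of all tests returning a positive result."""
    return prevalence * sensitivity + (1 - prevalence) * (1 - specificity)

# With 95% specificity, even zero true infection yields ~5% positives --
# far above the ~0.5% positive rate observed in early August.
print(positive_rate(0.0, 0.85, 0.95))   # ~0.05

# With 99.9% specificity the floor is only ~0.1%, comfortably below
# the lowest positive proportions actually observed.
print(positive_rate(0.0, 0.90, 0.999))  # ~0.001
```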
It should therefore not be possible for the proportion of positive tests to rise quickly, unless the testing regime itself has been worsening significantly in recent weeks. A more reasonable conclusion is that the true rate of infection is rising, consistent with the weekly ONS surveillance reporting and Imperial College REACT surveys.
COVID-19 tests are not perfect, which means that there will be both false positive and false negative results. False positives are problematic for individuals given a positive test result, as they will be required to isolate unnecessarily. Similarly, false negatives are problematic when trying to slow the spread of the disease, as infected individuals will not be identified and so may continue to infect others.
Our analysis has focused on false positives. We conclude that the level is low, consistent with the ONS upper estimate, and thus that increasing numbers of positive tests are a true reflection of the levels of population infectivity, as confirmed by random population surveillance.
22nd October 2020