"The tests used to assess Covid, particularly those using Polymerase Chain Reaction (PCR), are not specific enough, and produce errors around 1% of the time. If the true prevalence of the virus is low, that error rate means we'd end up seeing lots of 'positive cases' that aren't actually Covid infections. Maybe even 91% of the 'cases' we think we've found are not actually false positives".

"This isn't a pandemic - it's a 'Casedemic', where there are many so-called cases, but not much actual disease."

  1. The false-positive logic is valid - but it is irrelevant. It's true to say that, with low infection numbers in the community and even a high-seeming test specificity (e.g. 99%), we'd get a lot of "cases" that are really false positives. Before we explain why it's irrelevant, we'll lay out the logic that the Sceptics are using:

  - Suppose the true rate of infection in the community is very low - say, 0.1%, or 1 in every 1,000 people;
  - Suppose the test has a specificity of 99% - that is, 1% of uninfected people who take it wrongly test positive;
  - Now test 10,000 people. You'd catch the 10 who truly have the virus, but you'd also get around 100 false positives from the 9,990 who don't. That means about 91% of your "cases" - 100 out of 110 - would be false positives.

There's nothing wrong with this reasoning per se. But, as we'll see below, the numbers that are plugged in bear little relation to reality, making this point entirely irrelevant.
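As a sketch, the arithmetic behind the Sceptics' claim can be written out in a few lines of Python (the 10,000-person test pool and 0.1% prevalence are illustrative numbers chosen to reproduce the 91% figure):

```python
def false_positive_share(prevalence, specificity, n_tested):
    """Fraction of positive results that are false positives,
    assuming perfect sensitivity (every real infection is caught)."""
    true_positives = n_tested * prevalence
    false_positives = n_tested * (1 - prevalence) * (1 - specificity)
    return false_positives / (true_positives + false_positives)

# The Sceptics' scenario: 0.1% true prevalence, 99% specificity.
share = false_positive_share(prevalence=0.001, specificity=0.99, n_tested=10_000)
print(f"{share:.0%} of positives would be false")  # 91% of positives would be false
```

Note that the answer depends only on the prevalence and the specificity - the size of the test pool cancels out - which is why the argument stands or falls entirely on those two inputs.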

  2. Virus rates aren't low, breaking one of the assumptions of this idea. The above logic rests on only a small percentage of people having the virus (the first bullet point). But that's no longer realistic. At the time of writing, around 2% (1 in 50) are currently testing positive. If we use that number in the scenario described above (you may think this is somewhat circular, but the point is to show how changes in the background rate of infection can dramatically affect the final false positive rate), there are now 200 true positives and about 100 false positives. That reduces the final rate of false positives to around 33%. That's still high - but as we'll see in point (3), another of the assumptions above is wrong as well.

  3. The specificity of the test is very high, breaking another of the assumptions. Above, we supposed a test that was 99% specific (the second bullet point). In other words, it would have a 1% error rate where it incorrectly told Covid-negative people they were Covid-positive. But we have good reason to believe that this is far from the real specificity of the test. A paper published in November from Wuhan in China reported nearly 1,000,000 tests, but found only 300 cases. If every single one of these cases was a false positive (and there's no reason to think that's the case - but just hypothetically), the specificity of the test would be 99.97% (there's a similar story in New South Wales, which regularly tests hundreds of thousands of people and finds case numbers in the double-figures). To put this another way: the results from these places—as well as from the summer in the UK, where the large-scale ONS survey found rates of around 0.05% positive results—show it's impossible for the specificity of the test to be as low as 99.0%, or anywhere near it.
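The worst-case bound used in this argument is easy to verify: if every single positive were false, the specificity would simply equal one minus the positive rate. A quick sketch, using the Wuhan figures above and the ONS figures quoted further below:

```python
def worst_case_specificity(positives, total_tests):
    """Lower bound on test specificity, assuming (implausibly)
    that every positive result is a false positive."""
    return 1 - positives / total_tests

print(f"Wuhan: {worst_case_specificity(300, 1_000_000):.2%}")  # Wuhan: 99.97%
print(f"ONS:   {worst_case_specificity(159, 208_730):.2%}")    # ONS:   99.92%
```

Since some of those positives were certainly genuine infections, the true specificity must be higher still.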

Using a specificity of 99.97%, along with a true rate of infection of 2%, reduces our final rate of false positives in the scenario above to roughly 1.5%. Since the specificity of the test is likely to be even higher than that, the final false positive rate is likely to be even lower. The vast, vast majority of cases here would be true Covid infections.
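Re-running the same false-positive calculation with these more realistic inputs - 2% prevalence and 99.97% specificity - shows just how far the "casedemic" numbers are from reality:

```python
def false_positive_share(prevalence, specificity):
    """Fraction of positives that are false, assuming perfect sensitivity.
    (The size of the test pool cancels out, so it isn't needed.)"""
    false_rate = (1 - prevalence) * (1 - specificity)
    return false_rate / (prevalence + false_rate)

print(f"{false_positive_share(0.02, 0.9997):.2%}")  # 1.45%
```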

  4. There are other reasons to think test specificity is high. As the Office for National Statistics points out:

"We know the specificity of our test must be very close to 100% as the low number of positive tests in our study means that specificity would be very high even if all positives were false. For example, in the most recent six-week period (31 July to 10 September), 159 of the 208,730 total samples tested positive. Even if all these positives were false, specificity would still be 99.92%.

"We know that the virus is still circulating, so it is extremely unlikely that all these positives are false. However, it is important to consider whether many of the small number of positive tests we do have might be false. There are a couple of main reasons we do not think that is the case.

"Symptoms are an indication that someone has the virus; therefore, if there are many false-positives, we would expect to see more false-positives occurring among those not reporting symptoms. If that were the case, then risk factors such as working in health care would be more strongly associated with symptomatic infections than with asymptomatic infections. However, in our data the risk factors for testing positive are equally strong for both symptomatic and asymptomatic infections."

  1. The "casedemic" idea can't explain the changes in Covid rates over the months. Unless the quality of the testing changes dramatically, doing more testing should produce more false-positive cases. But not anywhere near as many as we've seen during this second wave of Covid. This can easily be seen by contrasting the testing numbers with cases on the Office for National Statistics website. Testing has very steadily increased over time. Positive cases, on the other hand, were measured in three-figure numbers per day during the summer, then rose to over 30,000 per day in November before falling slightly, then exploding in late December with a peak of over 80,000 cases per day.

The number of tests conducted (which has only ever risen over time) follows nowhere near the same pattern as cases. Since there's no reason to think the specificity of the tests mysteriously rose and fell over the course of the year, it's far more realistic to think that the vast majority of the cases we observed in the second wave were true positives.
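To make the scaling point concrete: at any fixed specificity, expected false positives grow only in proportion to the number of tests. The daily test volumes below are purely illustrative assumptions (not actual ONS figures), but they show the shape of the problem:

```python
def expected_false_positives(daily_tests, specificity):
    """Expected daily false positives if nobody tested is infected -
    the most pessimistic case for false-positive counts."""
    return daily_tests * (1 - specificity)

# Illustrative (assumed) daily test volumes for summer vs late December:
for period, tests in [("summer", 150_000), ("late December", 400_000)]:
    print(f"{period}: ~{expected_false_positives(tests, 0.9997):.0f} false positives/day")
```

Even if testing had nearly trebled on these assumed figures, false positives at 99.97% specificity would rise from roughly 45 to roughly 120 per day - nowhere near the climb from a few hundred cases a day in summer to over 80,000 in late December.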