
Footballers, COVID-19 and the pitfalls of medical testing.

Updated: Jun 23, 2020

There was an interesting COVID-19 related article recently, some of you may have seen it:

Norwich footballer who tested positive for Covid-19 after playing vs Tottenham now ‘confusingly’ tests negative

https://www.101greatgoals.com/news/norwich-footballer-who-tested-positive-for-covid-19-after-playing-vs-tottenham-now-confusingly-tests-negative/


A friend, knowing my background as a decision analyst (and my more than passing interest in the reliability of information), asked my opinion on the reliability of COVID-19 diagnostic testing. In short, I do not really have an answer, because I do not have all the information. However, I do have some insights on the problem of diagnostic testing, and on the problems of data reliability and misrepresentation of risk that arise from incorrect assumptions about diagnostic testing.


I first came across the problem of the reliability of medical testing in Dr Ben Goldacre's book 'Bad Science'. In it, Ben describes the peculiar case of an individual who tested positive for HIV and for many years believed they had HIV. This was in the early days of the HIV epidemic, when there was rampant discrimination against people with HIV and none of the therapeutic treatments that exist today. The individual lost everything: house, job, friends, and believed they were living under a death sentence. Then, years later, the individual tested negative for HIV. They did not have HIV after all. How could this possibly happen?


It turns out that medical tests are not necessarily one hundred percent reliable. There are three important data points that are required in order to calculate the chance of misdiagnosis from a single medical test:

  1. The rate of occurrence of the disease in the population.

  2. The sensitivity of the medical test.

  3. The specificity of the medical test.

The rate of occurrence is the prevalence of the disease in the general population. For a rare disease, the chance of you having the disease might be one in a million; for a common disease, it might be one in ten. It seems obvious that the more common the disease, the more likely it is that you have the disease if you test positive. Mathematically, this is true.


Sensitivity and Specificity, on the other hand, are a little harder to understand.


Sensitivity refers to the test's ability to correctly detect ill patients who do have the condition. Specificity refers to the test's ability to correctly identify healthy patients who do not have the condition.


Sensitivity is calculated as the number of true positives divided by the total of the number of true positives and the number of false negatives:

Sensitivity = true positives / (true positives + false negatives)

Specificity is calculated as the number of true negatives divided by the total of the number of true negatives and the number of false positives:

Specificity = true negatives / (true negatives + false positives)

Let us take the hypothetical case of a clinical trial of the accuracy of a test for COVID-19. In this trial of 2000 people, it was known beforehand that 1000 had COVID-19 and 1000 did not. Collecting data from a new rapid-turnaround clinical test, it was found that 200 of those who had COVID-19 tested negative, and 250 of those who did not have COVID-19 tested positive. Based on these results, the calculated sensitivity is 80% and the calculated specificity is 75%.
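These two figures are easy to verify with a few lines of code. The sketch below (variable names are mine, purely for illustration) simply recomputes the trial's sensitivity and specificity from the counts given above:

```python
# Hypothetical trial of 2000 people: 1000 with COVID-19, 1000 without.
false_negatives = 200                      # infected people who tested negative
true_positives = 1000 - false_negatives    # infected people who tested positive
false_positives = 250                      # healthy people who tested positive
true_negatives = 1000 - false_positives    # healthy people who tested negative

# Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)
sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")   # Sensitivity: 80%
print(f"Specificity: {specificity:.0%}")   # Specificity: 75%
```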



So what is the impact of these calculated statistics on the chance that you actually have COVID-19 if your test for the disease is positive? Well, this depends on the rate of occurrence of the disease in a given population. Again, hypothetically speaking, let us assume that in a population of ten thousand people, 6000 have COVID-19 (60%). Assuming 60% of the population have COVID-19, a test sensitivity of 80% and a test specificity of 75%, if a footballer tests positive, what is the chance the footballer actually has COVID-19? We can calculate this probability using some math based on Bayes' theorem.


https://en.wikipedia.org/wiki/Bayes%27_theorem


But math is scary, so we will instead use a diagram that makes revised probabilities much easier to understand (Ben Goldacre produced a similar diagram in his book 'Bad Science'):


In a population of ten thousand, six thousand have COVID-19. Of those 6000 people, based on a sensitivity of 80%, 4800 return a positive test and 1200 return a negative test. The remaining 4000 people do not have COVID-19. Of these 4000 people, based on a specificity of 75%, 3000 return a negative test and 1000 return a positive test.


To calculate the chance of having COVID-19 given a positive test, we simply divide the number of people with COVID-19 and a positive test by the total number of positive tests, remembering that 1000 people who do not have COVID-19 tested positive. So in this case, 4800 divided by (4800 plus 1000) gives 83% (rounded to the nearest one percent).


So the answer in this hypothetical case is that if a footballer tests positive, there is a 17% chance he does not have COVID-19.
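The whole worked example can be reproduced as a short calculation. This is just a sketch of the same arithmetic, using the hypothetical figures from above:

```python
# Hypothetical figures: population 10,000, prevalence 60%,
# sensitivity 80%, specificity 75%.
population = 10_000
prevalence = 0.60
sensitivity = 0.80
specificity = 0.75

with_disease = population * prevalence                 # 6000 people
without_disease = population - with_disease            # 4000 people

true_positives = with_disease * sensitivity            # 4800 correct positives
false_positives = without_disease * (1 - specificity)  # 1000 incorrect positives

# Chance of having COVID-19 given a positive test (positive predictive value)
ppv = true_positives / (true_positives + false_positives)
print(f"P(COVID-19 | positive test): {ppv:.0%}")       # 83%
print(f"P(no COVID-19 | positive test): {1 - ppv:.0%}")  # 17%
```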


Depending on the statistics, you can produce some wholly counter-intuitive results. Let us take the example of a rare disease and a highly reliable test. The rate of occurrence of the disease is one person in ten thousand. The test is 99.99% accurate. We will assume the rates of false negatives and false positives are the same: sensitivity is 99.99% and specificity is 99.99%.


If you test positive, what is the chance that you have the disease?


Intuitively, with a test that is 99.99% accurate, it seems highly likely.


Wrong. In fact, if you test positive you have only a fifty percent (50%) chance of having the disease.


This is because in a population of ten thousand there will be two positive tests: one person who has the disease and one person who does not (1/(1+1) = 50%). Conversely, if you test negative, you have almost no chance of having the disease (a statistically insignificant chance).
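The same result drops out of Bayes' theorem applied directly to the rare-disease numbers: the single true positive in ten thousand people is matched by roughly one false positive. A minimal sketch:

```python
# Rare-disease example: prevalence 1 in 10,000, test 99.99% accurate
# (sensitivity and specificity both 99.99%).
prevalence = 1 / 10_000
sensitivity = 0.9999
specificity = 0.9999

# P(positive) = P(pos | disease) * P(disease) + P(pos | no disease) * P(no disease)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' theorem: P(disease | positive) = P(pos | disease) * P(disease) / P(positive)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"P(disease | positive test): {p_disease_given_positive:.0%}")  # 50%
```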


We can produce another diagram as an explanation of this counter-intuitive result:




So, back to the question asked by my friend. What is my opinion on the footballer who tested positive for COVID-19 and then tested negative? I do not have an opinion, as I do not know all the facts. I do not know the true rate of occurrence in the population, and I am not privy to the sensitivity and specificity of the test used. However, it is reasonable to assume that, based on a single medical test with no other evidence, there is a probability that some people will return a positive test when they do not in fact have COVID-19.


More worryingly, it might be that some people with COVID-19 will return a negative test and go about their lives believing they do not have COVID-19. But unless we know the diagnostic reliability of the tests, it is impossible to know what the risk is that COVID-19 is being spread due to a number of people walking around with false-negative tests.


Also, it is worth saying that medical diagnosis is much more complicated than I explain here. Clinicians usually rely on differential diagnosis, which draws on multiple strands of evidence. A good overview is here:


https://en.wikipedia.org/wiki/Differential_diagnosis


You can download my calculator used to create these examples here:


https://1drv.ms/x/s!Ag5XYEKLiw31-H9EAvfNRV77MZEt?e=3zJJas



